WorldWideScience

Sample records for analysis benchmarks phase

  1. Discussion of OECD LWR Uncertainty Analysis in Modelling Benchmark

    International Nuclear Information System (INIS)

    Ivanov, K.; Avramova, M.; Royer, E.; Gillford, J.

    2013-01-01

The demand for best estimate calculations in nuclear reactor design and safety evaluations has increased in recent years. Uncertainty quantification has been highlighted as part of the best estimate calculations. The modelling aspects of uncertainty and sensitivity analysis are to be further developed and validated on scientific grounds in support of their performance and application to multi-physics reactor simulations. The Organization for Economic Co-operation and Development (OECD) / Nuclear Energy Agency (NEA) Nuclear Science Committee (NSC) has endorsed the creation of an Expert Group on Uncertainty Analysis in Modelling (EGUAM). Within the framework of activities of EGUAM/NSC, the OECD/NEA initiated the Benchmark for Uncertainty Analysis in Modelling for Design, Operation, and Safety Analysis of Light Water Reactors (OECD LWR UAM benchmark). The general objective of the benchmark is to propagate the predictive uncertainties of code results through complex coupled multi-physics and multi-scale simulations. The benchmark is divided into three phases, with Phase I highlighting the uncertainty propagation in stand-alone neutronics calculations, while Phases II and III focus on uncertainty analysis of the reactor core and the full system, respectively. This paper discusses the progress made in the Phase I calculations, the specifications for Phase II, and the upcoming challenges in defining the Phase III exercises. The main challenges in applying uncertainty quantification to complex code systems, in particular to time-dependent coupled-physics models, are the large computational burden and the non-linearity of the models (expected due to the physics coupling). (authors)

  2. Benchmark problems for numerical implementations of phase field models

    International Nuclear Information System (INIS)

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; Warren, J.; Heinonen, O. G.

    2016-01-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
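The spinodal decomposition physics described above can be sketched in a few lines. The following is a minimal, illustrative 1-D Cahn-Hilliard integration with periodic boundaries and an explicit Euler scheme; the double-well free energy, grid, and parameter values are invented for demonstration and are far simpler than the CHiMaD/NIST benchmark specifications.

```python
import numpy as np

def laplacian(f, dx):
    """Second difference with periodic boundaries."""
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dx**2

def cahn_hilliard_step(c, dx, dt, kappa=1.0):
    """One explicit Euler step of dc/dt = lap(mu),
    mu = f'(c) - kappa*lap(c), with f(c) = c^2 (1 - c)^2."""
    mu = 2.0 * c * (1.0 - c) * (1.0 - 2.0 * c) - kappa * laplacian(c, dx)
    return c + dt * laplacian(mu, dx)

rng = np.random.default_rng(0)
dx, dt = 1.0, 0.01
c = 0.5 + 0.01 * rng.standard_normal(128)   # small fluctuation about c = 0.5
mass0 = c.sum()
for _ in range(2000):
    c = cahn_hilliard_step(c, dx, dt)
# solute is conserved while the fluctuations grow into domains
assert abs(c.sum() - mass0) < 1e-8
assert c.std() > 0.02
```

The conservation check is exactly the kind of quantitative diagnostic a benchmark problem would compare across implementations, alongside free-energy decay and interface profiles.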

  3. Analysis of an OECD/NEA high-temperature reactor benchmark

    International Nuclear Information System (INIS)

    Hosking, J. G.; Newton, T. D.; Koeberl, O.; Morris, P.; Goluoglu, S.; Tombakoglu, T.; Colak, U.; Sartori, E.

    2006-01-01

    This paper describes analyses of the OECD/NEA HTR benchmark organized by the 'Working Party on the Scientific Issues of Reactor Systems (WPRS)', formerly the 'Working Party on the Physics of Plutonium Fuels and Innovative Fuel Cycles'. The benchmark was specifically designed to provide inter-comparisons for plutonium and thorium fuels when used in HTR systems. Calculations considering uranium fuel have also been included in the benchmark, in order to identify any increased uncertainties when using plutonium or thorium fuels. The benchmark consists of five phases, which include cell and whole-core calculations. Analysis of the benchmark has been performed by a number of international participants, who have used a range of deterministic and Monte Carlo code schemes. For each of the benchmark phases, neutronics parameters have been evaluated. Comparisons are made between the results of the benchmark participants, as well as comparisons between the predictions of the deterministic calculations and those from detailed Monte Carlo calculations. (authors)

  4. OECD/NEA Burnup Credit Calculational Criticality Benchmark Phase I-B Results

    International Nuclear Information System (INIS)

    DeHart, M.D.

    1993-01-01

Burnup credit is an ongoing technical concern for many countries that operate commercial nuclear power reactors. In a multinational cooperative effort to resolve burnup credit issues, a Burnup Credit Working Group has been formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development. This working group has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide, and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in the ability to estimate the spent fuel concentrations of most actinides. All methods agree to within 11% of the average for all fission products studied. Furthermore, most deviations are less than 10%, and many are less than 5%. The exceptions are 149Sm, 151Sm, and 155Gd.
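The comparison metric used above (each submission's deviation from the all-participant average) can be sketched as follows; the nuclide concentration values are invented for illustration, not taken from the benchmark submissions.

```python
import numpy as np

def percent_deviation(values):
    """Deviation of each submission from the participant average, in percent."""
    values = np.asarray(values, dtype=float)
    mean = values.mean()
    return 100.0 * (values - mean) / mean

# hypothetical Pu-239 concentrations from four code systems (arbitrary units)
conc = [5.71e-3, 5.84e-3, 5.62e-3, 5.77e-3]
dev = percent_deviation(conc)
assert abs(dev.mean()) < 1e-9        # deviations are centred on the average
assert np.abs(dev).max() < 10.0      # all submissions within 10% here
```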

  5. OECD/NEA BENCHMARK FOR UNCERTAINTY ANALYSIS IN MODELING (UAM) FOR LWRS – SUMMARY AND DISCUSSION OF NEUTRONICS CASES (PHASE I)

    Directory of Open Access Journals (Sweden)

    RYAN N. BRATTON

    2014-06-01

A Nuclear Energy Agency (NEA), Organization for Economic Co-operation and Development (OECD) benchmark for Uncertainty Analysis in Modeling (UAM) is defined in order to facilitate the development and validation of available uncertainty analysis and sensitivity analysis methods for best-estimate Light Water Reactor (LWR) design and safety calculations. The benchmark has been named the OECD/NEA UAM-LWR benchmark and has been divided into three phases, each of which focuses on a different portion of the uncertainty propagation in LWR multi-physics and multi-scale analysis. Several different reactor cases are modeled at various phases of a reactor calculation. This paper discusses Phase I, known as the "Neutronics Phase", which is devoted mostly to the propagation of nuclear data (cross-section) uncertainty throughout steady-state stand-alone neutronics core calculations. Three reactor systems (for which design, operation and measured data are available) are rigorously studied in this benchmark: Peach Bottom Unit 2 BWR, Three Mile Island Unit 1 PWR, and VVER-1000 Kozloduy-6/Kalinin-3. Additional measured data are analyzed, such as the KRITZ LEU criticality experiments and the SNEAK-7A and 7B experiments of the Karlsruhe Fast Critical Facility. Analyzed results include the top five neutron-nuclide reactions that contribute the most to the prediction uncertainty in k-eff, as well as the uncertainty in key parameters of neutronics analysis such as microscopic and macroscopic cross-sections, six-group decay constants, assembly discontinuity factors, and axial and radial core power distributions. Conclusions are drawn regarding where further studies should be done to reduce key nuclide reaction uncertainties (i.e., 238U radiative capture and inelastic scattering (n, n′), as well as the average number of neutrons released per fission event of 239Pu).
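To first order, the cross-section uncertainty propagation described above is the "sandwich rule" var(k) = S^T C S, where S is the vector of relative sensitivities and C the relative covariance matrix of the nuclear data. A minimal numeric sketch, with an invented sensitivity vector and covariance matrix (not benchmark data):

```python
import numpy as np

# S: relative sensitivities (dk/k per dsigma/sigma) for a few reactions;
# C: their relative covariance matrix. Values are illustrative only.
S = np.array([-0.28, 0.10, 0.05])
C = np.array([[4.0e-4, 1.0e-4, 0.0],
              [1.0e-4, 2.5e-4, 0.0],
              [0.0,    0.0,    9.0e-4]])

var_k = S @ C @ S                    # first-order "sandwich rule"
sigma_k = float(np.sqrt(var_k))      # relative uncertainty in k
assert var_k > 0.0
assert abs(var_k - 3.051e-5) < 1e-9
```

Production tools obtain S from adjoint-weighted perturbation theory or stochastic sampling, but the propagation step itself reduces to this quadratic form.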

  6. OECD/NEA Burnup Credit Calculational Criticality Benchmark Phase I-B Results

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, M.D.

    1993-01-01

    Burnup credit is an ongoing technical concern for many countries that operate commercial nuclear power reactors. In a multinational cooperative effort to resolve burnup credit issues, a Burnup Credit Working Group has been formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development. This working group has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide, and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods are in agreement to within 10% in the ability to estimate the spent fuel concentrations of most actinides. All methods are within 11% agreement about the average for all fission products studied. Furthermore, most deviations are less than 10%, and many are less than 5%. The exceptions are {sup 149}Sm, {sup 151}Sm, and {sup 155}Gd.

  7. Space Weather Action Plan Solar Radio Burst Phase 1 Benchmarks and the Steps to Phase 2

    Science.gov (United States)

    Biesecker, D. A.; White, S. M.; Gopalswamy, N.; Black, C.; Love, J. J.; Pierson, J.

    2017-12-01

Solar radio bursts, when at the right frequency and when strong enough, can interfere with radar, communication, and tracking signals. In severe cases, radio bursts can inhibit the successful use of radio communications and disrupt a wide range of systems that are reliant on Position, Navigation, and Timing services on timescales ranging from minutes to hours across wide areas on the dayside of Earth. The White House's Space Weather Action Plan asked for solar radio burst intensity benchmarks for an event occurrence frequency of 1 in 100 years and also a theoretical maximum intensity benchmark. The benchmark team has developed preliminary (phase 1) benchmarks for the VHF (30-300 MHz), UHF (300-3000 MHz), GPS (1176-1602 MHz), F10.7 (2800 MHz), and microwave (4000-20,000 MHz) bands. The preliminary benchmarks were derived based on previously published work. Limitations in the published work will be addressed in phase 2 of the benchmark process. In addition, deriving theoretical maxima requires additional work to determine where doing so is even possible, in order to meet the Action Plan objectives. In this presentation, we will present the phase 1 benchmarks, the basis used to derive them, and the limitations of that work. We will also discuss the work that needs to be done to complete the phase 2 benchmarks.
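One generic way to turn an event catalogue into a 1-in-100-year intensity level is to rank observed intensities by empirical annual exceedance rate and interpolate to the 0.01-per-year level. The sketch below uses a synthetic, invented catalogue and is only an illustration of the return-period concept, not the benchmark team's actual method or data.

```python
import numpy as np

def intensity_at_return_period(intensities, years_observed, return_period):
    """Intensity whose empirical annual exceedance rate is 1/return_period,
    via log-log interpolation over the ranked catalogue."""
    x = np.sort(np.asarray(intensities, dtype=float))[::-1]   # descending
    rate = np.arange(1, x.size + 1) / years_observed          # exceedances/yr
    return float(np.exp(np.interp(np.log(1.0 / return_period),
                                  np.log(rate), np.log(x))))

rng = np.random.default_rng(1)
# synthetic 200-year catalogue with a heavy (power-law) intensity tail
catalogue = 1.0e3 + 1.0e3 * rng.pareto(1.5, size=400)
benchmark = intensity_at_return_period(catalogue, years_observed=200.0,
                                       return_period=100.0)
assert np.median(catalogue) < benchmark <= catalogue.max()
```

Note that with a record shorter than the target return period this estimate clamps to the largest observed event, which is one reason the phase 1 benchmarks lean on published long-baseline statistics.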

  8. OECD/NEA burnup credit calculational criticality benchmark Phase I-B results

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, M.D.; Parks, C.V. [Oak Ridge National Lab., TN (United States); Brady, M.C. [Sandia National Labs., Las Vegas, NV (United States)

    1996-06-01

In most countries, criticality analysis of LWR fuel stored in racks and casks has assumed that the fuel is fresh with the maximum allowable initial enrichment. This assumption has led to the design of widely spaced and/or highly poisoned storage and transport arrays. If credit is assumed for fuel burnup, initial enrichment limitations can be raised in existing systems, and more compact and economical arrays can be designed. Such reliance on the reduced reactivity of spent fuel for criticality control is referred to as burnup credit. The Burnup Credit Working Group, formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development, has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in the ability to estimate the spent fuel concentrations of most actinides. All methods agree to within 11% of the average for all fission products studied. Most deviations are less than 10%, and many are less than 5%. The exceptions are 149Sm, 151Sm, and 155Gd.

  9. OECD/NEA burnup credit calculational criticality benchmark Phase I-B results

    International Nuclear Information System (INIS)

    DeHart, M.D.; Parks, C.V.; Brady, M.C.

    1996-06-01

In most countries, criticality analysis of LWR fuel stored in racks and casks has assumed that the fuel is fresh with the maximum allowable initial enrichment. This assumption has led to the design of widely spaced and/or highly poisoned storage and transport arrays. If credit is assumed for fuel burnup, initial enrichment limitations can be raised in existing systems, and more compact and economical arrays can be designed. Such reliance on the reduced reactivity of spent fuel for criticality control is referred to as burnup credit. The Burnup Credit Working Group, formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development, has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in the ability to estimate the spent fuel concentrations of most actinides. All methods agree to within 11% of the average for all fission products studied. Most deviations are less than 10%, and many are less than 5%. The exceptions are 149Sm, 151Sm, and 155Gd.

  10. Benchmarking of small-signal dynamics of single-phase PLLs

    DEFF Research Database (Denmark)

    Zhang, Chong; Wang, Xiongfei; Blaabjerg, Frede

    2015-01-01

Phase-locked Loop (PLL) is a critical component for the control and grid synchronization of grid-connected power converters. This paper presents a benchmarking study on the small-signal dynamics of three commonly used PLLs for single-phase converters, including the enhanced PLL, the second-order generalized integrator based PLL, and the inverse PLL. First, a unified small-signal model of those PLLs is established for comparing their dynamics. Then, a systematic design guideline for parameter tuning of the PLLs is formulated. To confirm the validity of the theoretical analysis, nonlinear time...
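For the standard linearized PLL phase model with a PI loop filter (a common textbook baseline, not necessarily the exact structures benchmarked in this paper), the closed-loop transfer function is (kp*s + ki)/(s^2 + kp*s + ki), so the gains map directly onto a second-order natural frequency and damping ratio. A small parameter-tuning sketch under that assumption:

```python
import math

def pll_gains(wn, zeta):
    """PI gains giving closed-loop poles with natural frequency wn (rad/s)
    and damping ratio zeta, for the linearized PLL phase model."""
    return 2.0 * zeta * wn, wn ** 2   # kp, ki

wn = 2.0 * math.pi * 20.0             # target: ~20 Hz loop natural frequency
zeta = 0.707                          # target damping ratio
kp, ki = pll_gains(wn, zeta)

# round trip: the gains reproduce the requested second-order dynamics
assert math.isclose(math.sqrt(ki), wn)
assert math.isclose(kp / (2.0 * math.sqrt(ki)), zeta)
```

This is the kind of unified small-signal parameterization that lets the dynamics of different single-phase PLL structures be compared on equal terms.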

  11. KAERI results for BN600 full MOX benchmark (Phase 4)

    International Nuclear Information System (INIS)

    Lee, Kibog Lee

    2003-01-01

The purpose of this document is to report the results of KAERI's calculations for Phase 4 of the BN-600 full-MOX-fueled core benchmark analyses, according to the RCM report of the IAEA CRP Action on 'Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects'. The BN-600 full MOX core model is based on the specification in the document 'Full MOX Model (Phase4.doc)'. This document describes the calculational methods employed in the benchmark analyses and the benchmark results obtained by KAERI.

  12. Benchmarks for Uncertainty Analysis in Modelling (UAM) for the Design, Operation and Safety Analysis of LWRs - Volume I: Specification and Support Data for Neutronics Cases (Phase I)

    International Nuclear Information System (INIS)

    Ivanov, K.; Avramova, M.; Kamerow, S.; Kodeli, I.; Sartori, E.; Ivanov, E.; Cabellos, O.

    2013-01-01

    released. This report presents benchmark specifications for Phase I (Neutronics Phase) of the OECD LWR UAM benchmark in a format similar to the previous OECD/NRC benchmark specifications. Phase I consists of the following exercises: - Exercise 1 (I-1): 'Cell Physics' focused on the derivation of the multi-group microscopic cross-section libraries and their uncertainties. - Exercise 2 (I-2): 'Lattice Physics' focused on the derivation of the few-group macroscopic cross-section libraries and their uncertainties. - Exercise 3 (I-3): 'Core Physics' focused on the core steady-state stand-alone neutronics calculations and their uncertainties. These exercises follow those established in the industry and regulation routine calculation scheme for LWR design and safety analysis. This phase is focused on understanding uncertainties in the prediction of key reactor core parameters associated with LWR stand-alone neutronics core simulation. Such uncertainties occur due to input data uncertainties, modelling errors, and numerical approximations. The chosen approach in Phase I is to select/propagate the most important contributors for each exercise which can be treated in a practical manner. The cross-section uncertainty information is considered as the most important source of input uncertainty for Phase I. The cross-section related uncertainties are propagated through the 3 Exercises of Phase I. In Exercise I-1 these are the variance and covariance data associated with continuous energy cross-sections in evaluated nuclear data files. In Exercise I-2 these are the variance and covariance data associated with multi-group cross-sections used as input in the lattice physics codes. In Exercise I-3 these are the variance and covariance data associated with few-group cross-sections used as input in the core simulators. 
Depending on the availability of different methods in the computer code of choice for a given exercise, the related methodological uncertainties can play a smaller or larger

  13. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom; Javier Ortensi; Sonat Sen; Hans Hammer

    2013-09-01

The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III
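Lattice depletion exercises like Phase III ultimately reduce to solving Bateman equations for the nuclide chains. As a toy illustration (with invented constants, vastly simpler than the MHTGR-350 exercises), here is the closed-form solution for a two-member chain:

```python
import math

def bateman_two(n1_0, lam1, lam2, t):
    """Closed-form solution of the chain N1 -> N2 -> (removed):
    dN1/dt = -lam1*N1,  dN2/dt = lam1*N1 - lam2*N2,  N2(0) = 0."""
    n1 = n1_0 * math.exp(-lam1 * t)
    n2 = (n1_0 * lam1 / (lam2 - lam1)
          * (math.exp(-lam1 * t) - math.exp(-lam2 * t)))
    return n1, n2

lam1, lam2 = 0.1, 0.02               # made-up effective removal constants (1/s)
n1, n2 = bateman_two(1.0, lam1, lam2, t=10.0)
assert math.isclose(n1, math.exp(-1.0))
assert 0.0 < n2 < 1.0                # daughter builds up, then decays away
```

Real depletion solvers handle hundreds of coupled nuclides with flux-dependent reaction rates, typically via matrix-exponential or substep methods rather than closed forms.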

  14. Joint European contribution to phase 5 of the BN600 hybrid reactor benchmark core analysis (European ERANOS formulaire for fast reactor core analysis)

    International Nuclear Information System (INIS)

    Rimpault, G.

    2004-01-01

The hybrid UOX/MOX fuelled core of the BN-600 reactor was endorsed as an international benchmark. The BFS-2 critical facility was designed for full-size simulation of the core and shielding of large fast reactors (up to 3000 MWe). A wide experimental programme, including measurements of criticality, fission rates, rod worths, and SVRE, was established. Four BFS-62 critical assemblies have been designed to study changes in BN-600 reactor physics when moving to a hybrid MOX core. The BFS-62-3A assembly is a full-scale model of the BN-600 reactor hybrid core. It consists of three regions of UO2 fuel, axial and radial fertile blankets, MOX fuel added in a ring between the MC and OC zones, and a 120 deg sector of stainless steel reflector included within the radial blanket. The joint European contribution to the Phase 5 benchmark analysis was performed by Serco Assurance Winfrith (UK) and CEA Cadarache (France). The analysis was carried out using Version 1.2 of the ERANOS code and data system for advanced and fast reactor core applications. The nuclear data are based on the JEF2.2 nuclear data evaluation (including sodium). Results for Phase 5 of the BN-600 benchmark have been determined for criticality and SVRE in both diffusion and transport theory. Full details of the results are presented in a paper posted on the IAEA Business Collaborator website, and a brief summary is provided in this paper.

  15. OECD/NEA expert group on uncertainty analysis for criticality safety assessment: Results of benchmark on sensitivity calculation (phase III)

    Energy Technology Data Exchange (ETDEWEB)

    Ivanova, T.; Laville, C. [Institut de Radioprotection et de Surete Nucleaire IRSN, BP 17, 92262 Fontenay aux Roses (France); Dyrda, J. [Atomic Weapons Establishment AWE, Aldermaston, Reading, RG7 4PR (United Kingdom); Mennerdahl, D. [E Mennerdahl Systems EMS, Starvaegen 12, 18357 Taeby (Sweden); Golovko, Y.; Raskach, K.; Tsiboulia, A. [Inst. for Physics and Power Engineering IPPE, 1, Bondarenko sq., 249033 Obninsk (Russian Federation); Lee, G. S.; Woo, S. W. [Korea Inst. of Nuclear Safety KINS, 62 Gwahak-ro, Yuseong-gu, Daejeon 305-338 (Korea, Republic of); Bidaud, A.; Sabouri, P. [Laboratoire de Physique Subatomique et de Cosmologie LPSC, CNRS-IN2P3/UJF/INPG, Grenoble (France); Patel, A. [U.S. Nuclear Regulatory Commission (NRC), Washington, DC 20555-0001 (United States); Bledsoe, K.; Rearden, B. [Oak Ridge National Laboratory ORNL, M.S. 6170, P.O. Box 2008, Oak Ridge, TN 37831 (United States); Gulliford, J.; Michel-Sendis, F. [OECD/NEA, 12, Bd des Iles, 92130 Issy-les-Moulineaux (France)

    2012-07-01

The sensitivities of the k-eff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods. (authors)
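A sensitivity coefficient of the kind compared above is the relative change S = (sigma/k) * dk/dsigma. The sketch below estimates one by direct (central-difference) perturbation of a toy one-group infinite-medium model; the cross-section values are invented, and the benchmark tools named above obtain S from adjoint-weighted perturbation theory rather than by rerunning the calculation.

```python
def k_inf(sig_f, sig_c, nu=2.43):
    """Toy one-group infinite-medium multiplication factor."""
    return nu * sig_f / (sig_f + sig_c)

def rel_sensitivity(f, x0, rel_step=1.0e-4):
    """Central-difference relative sensitivity S = (x/f) * df/dx at x0."""
    h = rel_step * x0
    return x0 * (f(x0 + h) - f(x0 - h)) / (2.0 * h) / f(x0)

sig_f, sig_c = 0.05, 0.02            # invented one-group cross sections (1/cm)
S_c = rel_sensitivity(lambda s: k_inf(sig_f, s), sig_c)

# analytic value for this toy model: S_c = -sig_c / (sig_f + sig_c)
assert abs(S_c - (-sig_c / (sig_f + sig_c))) < 1e-6
```

The negative sign reads directly as physics: increasing the capture cross section lowers k, and the magnitude is exactly the capture fraction of total absorption in this model.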

  16. Sensitivity and Uncertainty Analysis of IAEA CRP HTGR Benchmark Using McCARD

    International Nuclear Information System (INIS)

    Jang, Sang Hoon; Shim, Hyung Jin

    2016-01-01

The benchmark consists of 4 phases, starting from local standalone modeling (Phase I) up to the safety calculation of the coupled system under transient conditions (Phase IV). As a preliminary study of UAM on HTGRs, this paper covers Exercises 1 and 2 of Phase I, which define the unit cell and lattice geometry of the MHTGR-350 (General Atomics). The objective of these exercises is to quantify the uncertainty of the multiplication factor induced by perturbing nuclear data, as well as to analyze features specific to HTGRs such as double heterogeneity and self-shielding treatment. The uncertainty quantification of the IAEA CRP HTGR UAM benchmarks was conducted using the first-order AWP method in McCARD. Uncertainty of the multiplication factor was estimated only for the microscopic cross-section perturbation. To reduce computation time and memory demands, the recently implemented uncertainty analysis module for Monte Carlo Wielandt calculation was employed. The covariance data of the cross sections were generated by the NJOY/ERRORR module with ENDF/B-VII.1. The numerical results were compared with evaluation results of the DeCART/MUSAD code system developed by KAERI. IAEA CRP HTGR UAM benchmark problems were analyzed using McCARD. The numerical results were compared with Serpent for the eigenvalue calculation and with DeCART/MUSAD for the S/U analysis. In the eigenvalue calculation, inconsistencies were found in the results with the ENDF/B-VII.1 cross-section library, which were traced to the thermal scattering data of graphite. As to the S/U analysis, McCARD results matched DeCART/MUSAD well, but showed some discrepancy in 238U capture regarding the implicit uncertainty.

  17. OECD/NEA Sandia Fuel Project phase I: Benchmark of the ignition testing

    Energy Technology Data Exchange (ETDEWEB)

    Adorni, Martina, E-mail: martina_adorni@hotmail.it [UNIPI (Italy); Herranz, Luis E. [CIEMAT (Spain); Hollands, Thorsten [GRS (Germany); Ahn, Kwang-II [KAERI (Korea, Republic of); Bals, Christine [GRS (Germany); D' Auria, Francesco [UNIPI (Italy); Horvath, Gabor L. [NUBIKI (Hungary); Jaeckel, Bernd S. [PSI (Switzerland); Kim, Han-Chul; Lee, Jung-Jae [KINS (Korea, Republic of); Ogino, Masao [JNES (Japan); Techy, Zsolt [NUBIKI (Hungary); Velazquez-Lozad, Alexander; Zigh, Abdelghani [USNRC (United States); Rehacek, Radomir [OECD/NEA (France)

    2016-10-15

Highlights: • A unique PWR spent fuel pool experimental project is analytically investigated. • Predictability of fuel clad ignition in case of a complete loss of coolant in SFPs is assessed. • Computer codes reasonably estimate peak cladding temperature and time of ignition. - Abstract: The OECD/NEA Sandia Fuel Project provided unique thermal-hydraulic experimental data associated with Spent Fuel Pool (SFP) complete drain down. The study conducted at Sandia National Laboratories (SNL) was successfully completed (July 2009 to February 2013). The accident conditions of interest for the SFP were simulated in a full scale prototypic fashion (electrically heated, prototypic assemblies in a prototypic SFP rack) so that the experimental results closely represent actual fuel assembly responses. A major impetus for this work was to facilitate severe accident code validation and to reduce modeling uncertainties within the codes. Phase I focused on axial heating and burn propagation in a single PWR 17 × 17 assembly (i.e. “hot neighbors” configuration). Phase II addressed axial and radial heating and zirconium fire propagation including effects of fuel rod ballooning in a 1 × 4 assembly configuration (i.e. single, hot center assembly and four, “cooler neighbors”). This paper summarizes the comparative analysis of the final destructive ignition test of Phase I of the project. The objective of the benchmark is to evaluate and compare the predictive capabilities of computer codes concerning the ignition testing of PWR fuel assemblies. Nine institutions from eight different countries were involved in the benchmark calculations. The time to ignition and the maximum temperature are adequately captured by the calculations. It is believed that the benchmark constitutes an enlargement of the validation range for the codes to the conditions tested, thus enhancing the code applicability to other fuel assembly designs and configurations. The comparison of

  18. Benchmark calculation of subchannel analysis codes

    International Nuclear Information System (INIS)

    1996-02-01

In order to evaluate the analysis capabilities of various subchannel codes used in the thermal-hydraulic design of light water reactors, benchmark calculations were performed. The selected benchmark problems and the major findings obtained from the calculations were as follows: (1) For single-phase flow mixing experiments between two channels, the calculated water temperature distributions along the flow direction agreed with the experimental results when the turbulent mixing coefficients were tuned properly. However, the effect of gap width observed in the experiments could not be predicted by the subchannel codes. (2) For two-phase flow mixing experiments between two channels, in the high water flow rate cases, the calculated distributions of air and water flows in each channel agreed well with the experimental results. In the low water flow cases, on the other hand, the air mixing rates were underestimated. (3) For two-phase flow mixing experiments among multiple channels, the calculated mass velocities at the channel exit under steady-state conditions agreed with the experimental values to within about 10%. However, the predictive errors of the exit qualities were as high as 30%. (4) For critical heat flux (CHF) experiments, two different results were obtained: one code indicated that the CHFs calculated using the KfK or EPRI correlations agreed well with the experimental results, while another code suggested that the CHFs were well predicted using the WSC-2 correlation or the Weisman-Pei mechanistic model. (5) For droplet entrainment and deposition experiments, the predictive capability was significantly increased by improving the correlations. On the other hand, a remarkable discrepancy between the codes was observed: one code underestimated the droplet flow rate and overestimated the liquid film flow rate in high-quality cases, while another code overestimated the droplet flow rate and underestimated the liquid film flow rate in low-quality cases.
(J.P.N.)
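The agreement criteria quoted in finding (3) above reduce to a relative-error comparison between calculated and measured values. A minimal sketch of that check, using hypothetical exit mass velocities (not data from the benchmark):

```python
# Illustrative sketch: quantifying how well subchannel-code predictions
# match experiment, as in finding (3) above. All values are hypothetical.

def relative_error(calculated, measured):
    """Relative predictive error of a code result against experiment."""
    return abs(calculated - measured) / abs(measured)

# Hypothetical exit mass velocities (kg/m^2/s) for three channels.
measured   = [1350.0, 1420.0, 1280.0]
calculated = [1410.0, 1390.0, 1330.0]

errors = [relative_error(c, m) for c, m in zip(calculated, measured)]
within_10pct = all(e <= 0.10 for e in errors)
print([round(e, 3) for e in errors], within_10pct)
```

The same check applied to the exit qualities would fail a 10% criterion, which is how the 30% figure above should be read.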

  19. Shielding Benchmark Computational Analysis

    International Nuclear Information System (INIS)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-01-01

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for the development of more accurate cross-section libraries, the development of radiation transport codes, and the construction of accurate tests for miniature shielding mock-ups of new nuclear facilities. When documenting measurements, one must describe many aspects of the experiment to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling the more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper, benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and the continuous-energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC).

  20. Cross-section sensitivity and uncertainty analysis of the FNG copper benchmark experiment

    Energy Technology Data Exchange (ETDEWEB)

    Kodeli, I., E-mail: ivan.kodeli@ijs.si [Jožef Stefan Institute, Jamova 39, SI-1000 Ljubljana (Slovenia); Kondo, K. [Karlsruhe Institute of Technology, Postfach 3640, D-76021 Karlsruhe (Germany); Japan Atomic Energy Agency, Rokkasho-mura (Japan); Perel, R.L. [Racah Institute of Physics, Hebrew University of Jerusalem, IL-91904 Jerusalem (Israel); Fischer, U. [Karlsruhe Institute of Technology, Postfach 3640, D-76021 Karlsruhe (Germany)

    2016-11-01

    A neutronics benchmark experiment on a copper assembly was performed from late 2014 to early 2015 at the 14-MeV Frascati neutron generator (FNG) of ENEA Frascati, with the objective of providing the experimental database required for the validation of the copper nuclear data relevant for ITER design calculations, including the related uncertainties. The paper presents the pre- and post-analysis of the experiment performed using cross-section sensitivity and uncertainty codes, both deterministic (SUSD3D) and Monte Carlo (MCSEN5). Cumulative reaction rates and neutron flux spectra, their sensitivities to the cross sections, and the corresponding uncertainties were estimated for selected detector positions up to ∼58 cm deep in the copper assembly. In the pre-analysis phase this permitted optimization of the geometry, the detector positions and the choice of activation reactions; in the post-analysis phase it allowed interpretation of the measured and calculated results, conclusions on the quality of the relevant nuclear cross-section data, and estimation of the uncertainties in the calculated nuclear responses and fluxes. Large uncertainties in the calculated reaction rates and neutron spectra, of up to 50% and rarely observed at this level in benchmark analyses using today's nuclear data, were predicted, particularly for fast reactions. The observed C/E (dis)agreements, with values as low as 0.5, partly confirm these predictions. The benchmark results are therefore expected to contribute to the improvement of both the cross-section and the covariance data evaluations.
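Sensitivity and uncertainty codes of the kind named above typically propagate cross-section covariances to a response uncertainty via the "sandwich" rule, var(R)/R² = SᵀCS, where S is the vector of relative sensitivities and C the relative covariance matrix. A sketch with a hypothetical three-group sensitivity vector and covariance matrix (not FNG data or SUSD3D output):

```python
import numpy as np

# Sandwich rule: relative variance of a response R is S^T C S, where S holds
# the relative sensitivities (dR/R per dSigma/Sigma) and C is the relative
# covariance matrix of the cross sections. All numbers are hypothetical.
S = np.array([0.8, -0.3, 0.1])          # sensitivity coefficients per group
C = np.array([[0.04, 0.01, 0.00],       # relative covariance matrix
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.25]])

rel_variance = S @ C @ S
rel_std_pct = 100.0 * np.sqrt(rel_variance)
print(f"relative uncertainty: {rel_std_pct:.1f}%")
```

Large covariance entries for poorly known reactions are how the ∼50% uncertainties quoted above arise from moderate sensitivities.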

  1. Analyses and results of the OECD/NEA WPNCS EGUNF benchmark phase II. Technical report; Analysen und Ergebnisse zum OECD/NEA WPNCS EGUNF Benchmark Phase II. Technischer Bericht

    Energy Technology Data Exchange (ETDEWEB)

    Hannstein, Volker; Sommer, Fabian

    2017-05-15

    The report summarizes the studies performed and the results obtained in the framework of the Phase II benchmarks of the Expert Group on Used Nuclear Fuel (EGUNF) of the Working Party on Nuclear Criticality Safety (WPNCS) of the Nuclear Energy Agency (NEA) of the Organisation for Economic Co-operation and Development (OECD). The studies specified within the benchmarks were carried out in full. The scope of the benchmarks was the comparison of calculations for a generic BWR fuel assembly with gadolinium-bearing fuel rods, performed with several computer codes and cross-section libraries by different international working groups and institutions. The computational model used allows an evaluation of the accuracy of fuel rod inventory calculations and of their influence on BWR burnup credit calculations.

  2. Selected examples on multi physics researches at KFKI AEKI-results for phase I of the OECD/NEA UAM benchmark

    International Nuclear Information System (INIS)

    Panka, I.; Kereszturi, A.; Maraczy, C.

    2010-01-01

    Nowadays there is a tendency to use best-estimate-plus-uncertainty methods in the field of nuclear energy. This implies the application of best-estimate code systems and the determination of the corresponding uncertainties. For the latter, an OECD benchmark was set up. The objective of the OECD/NEA Uncertainty Analysis in Best-Estimate Modeling (UAM) LWR benchmark is to determine the uncertainties of coupled reactor physics/thermal hydraulics LWR calculations at all stages. In this paper the AEKI participation in Phase I is presented. This phase deals with the evaluation of the uncertainties of the neutronic calculations, from the pin cell spectral calculations up to the stand-alone neutronics core simulations. (Authors)

  3. The IAEA Coordinated Research Program on HTGR Reactor Physics, Thermal-hydraulics and Depletion Uncertainty Analysis: Description of the Benchmark Test Cases and Phases

    Energy Technology Data Exchange (ETDEWEB)

    Frederik Reitsma; Gerhard Strydom; Bismark Tyobeka; Kostadin Ivanov

    2012-10-01

    The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of design and safety features with reliable high-fidelity physics models and robust, efficient, and accurate codes. The uncertainties in HTGR analysis tools are today typically assessed with sensitivity analysis, in which a few important input uncertainties (typically identified through a PIRT process) are varied to find the spread in the parameter of importance. However, a more fundamental approach is desirable to determine the predictive capability and accuracy of the coupled neutronics/thermal-hydraulics and depletion simulations used for reactor design and safety assessment. Today there is broader acceptance of the use of uncertainty analysis even in safety studies, and in some cases regulators have accepted it as a replacement for the traditional conservative analysis. There is also a renewed focus on supplying reliable covariance data (nuclear data uncertainties) that can then be used in uncertainty methods. Uncertainty and sensitivity studies are therefore becoming an essential component of any significant effort in data and simulation improvement. In order to address uncertainty in analysis and methods in the HTGR community, the IAEA launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling early in 2012. The project builds on the experience of the OECD/NEA Light Water Reactor (LWR) Uncertainty Analysis in Best-Estimate Modelling (UAM) benchmark activity, but focuses specifically on the peculiarities of HTGR designs and their simulation requirements. Two benchmark problems were defined: the prismatic type is represented by the MHTGR-350 design from General Atomics (GA), while a 250 MW modular pebble bed design, similar to the INET (China) and indirect-cycle PBMR (South Africa) designs, is also included. In the paper more detail on the benchmark cases, the different specific phases and tasks and the latest

  4. Uncertainty and sensitivity analysis in reactivity-initiated accident fuel modeling: synthesis of Organisation for Economic Co-operation and Development (OECD)/Nuclear Energy Agency (NEA) benchmark on reactivity-initiated accident codes phase-II

    Directory of Open Access Journals (Sweden)

    Olivier Marchand

    2018-03-01

    In the framework of the OECD/NEA Working Group on Fuel Safety, a RIA fuel-rod-code benchmark Phase I was organized in 2010–2013. It consisted of four experiments on highly irradiated fuel rodlets tested under different experimental conditions. This benchmark revealed the need to better understand the basic models incorporated in each code for realistic simulation of the complicated integral RIA tests with high-burnup fuel rods. A second phase of the benchmark (Phase II) was thus launched early in 2014 and organized in two complementary activities: (1) comparison of the results of different simulations on simplified cases, in order to provide additional bases for understanding the differences in modelling of the phenomena concerned; (2) assessment of the uncertainty of the results. The present paper provides a summary and conclusions of the second activity of the Benchmark Phase II, which is based on an input uncertainty propagation methodology. The main conclusion is that input uncertainties cannot fully explain the differences between the code predictions. Finally, based on the RIA benchmark Phase-I and Phase-II conclusions, some recommendations are made. Keywords: RIA, Codes Benchmarking, Fuel Modelling, OECD
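In its simplest Monte Carlo form, the input uncertainty propagation methodology named above amounts to sampling the uncertain inputs, running the model on each sample, and examining the spread of the output. A toy illustration (the model, distributions and parameter names are invented for the example, not taken from any RIA code):

```python
import random
import statistics

# Minimal sketch of Monte Carlo input-uncertainty propagation: sample the
# uncertain inputs, evaluate the model, collect output statistics.
random.seed(0)

def fuel_model(power, gap_conductance):
    """Toy stand-in for a fuel-rod code response (purely illustrative)."""
    return 100.0 + 0.5 * power + 2.0 / gap_conductance

samples = []
for _ in range(1000):
    power = random.gauss(200.0, 10.0)   # uncertain input 1 (hypothetical)
    gap = random.gauss(1.0, 0.05)       # uncertain input 2 (hypothetical)
    samples.append(fuel_model(power, gap))

mean = statistics.mean(samples)
std = statistics.stdev(samples)
print(f"response: {mean:.1f} +/- {std:.1f}")
```

If the resulting spread is smaller than the observed code-to-code differences, the discrepancy must come from the models themselves, which is the Phase II conclusion summarized above.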

  5. Benchmarking of thermalhydraulic loop models for lead-alloy-cooled advanced nuclear energy systems. Phase I: Isothermal forced convection case

    International Nuclear Information System (INIS)

    2012-06-01

    Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Scientific Issues of the Fuel Cycle (WPFC) has been established to co-ordinate scientific activities regarding various existing and advanced nuclear fuel cycles, including advanced reactor systems, associated chemistry and flowsheets, development and performance of fuel and materials, and accelerators and spallation targets. The WPFC has several expert groups covering a wide range of scientific issues in the field of the nuclear fuel cycle. The Task Force on Lead-Alloy-Cooled Advanced Nuclear Energy Systems (LACANES) was created in 2006 to study the thermal-hydraulic characteristics of heavy liquid metal coolant loops. The objectives of the task force are to (1) validate thermal-hydraulic loop models for application to LACANES design analysis in participating organisations, by benchmarking with a set of well-characterised lead-alloy coolant loop test data, (2) establish guidelines for quantifying thermal-hydraulic modelling parameters related to friction and heat transfer by lead-alloy coolant, and (3) identify specific issues, either in modelling and/or in loop testing, which need to be addressed via possible future work. Nine participants from seven different institutes took part in the first phase of the benchmark. This report provides details of the benchmark specifications, the method and code characteristics, and the results of the preliminary study (pressure loss coefficients) and of Phase I. A comparison and analysis of the results will be performed together with Phase II.

  6. Thermal hydraulics-II. 2. Benchmarking of the TRIO Two-Phase-Flow Module

    International Nuclear Information System (INIS)

    Helton, Donald; Kumbaro, Anela; Hassan, Yassin

    2001-01-01

    The Commissariat a l'Energie Atomique (CEA) is currently developing a two-phase-flow module for the Trio-U CFD computer program. Work in the area of applying advanced numerical techniques to two-phase flow is being carried out by the SYSCO division at the CEA Saclay center. Recently, this division implemented several advanced numerical solvers, including approximate Riemann solvers and flux vector splitting schemes. As a test of these new advances, several benchmark tests were executed. This paper describes the pertinent results of this study. The first benchmark problem was the Ransom faucet problem, which consists of a vertical column of water acting under the gravity force. The appeal of this problem is that it tests the program's handling of the body force term and that it has an analytical solution. The Trio results [based on a two-fluid, two-dimensional (2-D) simulation] for this problem were very encouraging. The two-phase-flow module was able to reproduce the analytical velocity and void fraction profiles. A reasonable amount of numerical diffusion was observed, and the numerical solution converged to the analytical solution as the grid was refined, as shown in Fig. 1. A second series of benchmark problems concerned the treatment of a drag force term. In a first approach, we tested the capability of the code to take this source term into account, using a flux scheme solution technique. For this test, a rectangular duct was utilized. As shown in Fig. 2, mesh refinement results in an approach to the analytical solution. Next, a convergent/divergent nozzle problem was considered. The nozzle is characterized by a brief contraction section and a long expansion section. A two-phase, 2-D, non-condensing model is used in conjunction with the Riemann solver. Figure 3 shows a comparison of the pressure profile for the experimental case and the values calculated by the Trio-U two-phase-flow module. Trio was able to handle the drag force term and

  7. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes (''Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Cask,'' R.E. Glass, Sandia National Laboratories, 1985; ''Sample Problem Manual for Benchmarking of Cask Analysis Codes,'' R.E. Glass, Sandia National Laboratories, 1988; ''Standard Thermal Problem Set for the Evaluation of Heat Transfer Codes Used in the Assessment of Transportation Packages,'' R.E. Glass, et al., Sandia National Laboratories, 1988) used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in benchmarking the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed, as described in ''Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks,'' R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, showing the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem. 6 refs., 5 figs

  8. Free allocations in EU ETS Phase 3: The impact of emissions performance benchmarking for carbon-intensive industry - Working Paper No. 2013-14

    International Nuclear Information System (INIS)

    Lecourt, S.; Palliere, C.; Sartor, O.

    2013-02-01

    From Phase 3 (2013-20) of the European Union Emissions Trading Scheme, carbon-intensive industrial emitters will receive free allocations based on harmonised, EU-wide benchmarks. This paper analyses the impacts of these new rules on allocations to key energy-intensive sectors across Europe. It explores an original dataset that combines recent data from the National Implementing Measures of 20 EU Member States with the Community Independent Transaction Log and other EU documents. The analysis reveals that free allocations to benchmarked sectors will be reduced significantly compared to Phase 2 (2008-12). This reduction should both increase public revenues from carbon auctions and enhance the economic efficiency of the carbon market. The analysis also shows that changes in allocation vary mostly across installations within countries, raising the possibility that the carbon-cost competitiveness impacts may be more intense within, rather than across, countries. Lastly, the analysis finds evidence that the new benchmarking rules will, as intended, reward installations with better emissions performance and will improve harmonisation of free allocations in the EU ETS by reducing differences in allocation levels across countries with similar carbon intensities of production. (authors)
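For intuition, a Phase 3 benchmark-based free allocation is, in simplified form, the product of an emissions-performance benchmark and the installation's historical activity level, scaled by correction factors. A sketch with hypothetical numbers (the real rules apply sector- and year-specific exposure and correction factors that are omitted here):

```python
# Simplified sketch of benchmark-based free allocation under EU ETS Phase 3.
# All numbers are hypothetical; real allocations use regulated factors.

def free_allocation(benchmark_t_co2_per_t, historical_activity_t, factor=1.0):
    """Allowances = product benchmark x activity level x correction factor."""
    return benchmark_t_co2_per_t * historical_activity_t * factor

# A hypothetical installation producing 1,000,000 t/yr of a benchmarked
# product, with a combined correction factor of 0.9.
alloc = free_allocation(0.766, 1_000_000, factor=0.9)
print(f"{alloc:,.0f} allowances")
```

Because the benchmark reflects best-available-technology performance, installations emitting more than the benchmark per tonne of product must buy the difference, which is the incentive effect discussed above.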

  9. International Benchmark on Pressurised Water Reactor Sub-channel and Bundle Tests. Volume II: Benchmark Results of Phase I: Void Distribution

    International Nuclear Information System (INIS)

    Rubin, Adam; Avramova, Maria; Velazquez-Lozada, Alexander

    2016-03-01

    This report summarises the first phase of the Nuclear Energy Agency (NEA) and US Nuclear Regulatory Commission benchmark based on the NUPEC PWR Sub-channel and Bundle Tests (PSBT), which was intended to provide data for the verification of void distribution models in participants' codes. This phase was composed of four exercises: Exercise 1, a steady-state single sub-channel benchmark; Exercise 2, a steady-state rod bundle benchmark; Exercise 3, a transient rod bundle benchmark; and Exercise 4, a pressure drop benchmark. The experimental data provided to the participants of this benchmark come from a series of void measurement tests using full-size mock-ups for both Boiling Water Reactors (BWRs) and Pressurised Water Reactors (PWRs). These tests were performed from 1987 to 1995 by the Nuclear Power Engineering Corporation (NUPEC) in Japan and made available by the Japan Nuclear Energy Safety Organisation (JNES) for the purposes of this benchmark, which was organised by Pennsylvania State University. Twenty-one institutions from nine countries participated. Seventeen different computer codes were used in Exercises 1, 2, 3 and 4, among them porous media, sub-channel, system thermal-hydraulic and Computational Fluid Dynamics (CFD) codes. It was observed that the codes tended to overpredict the thermal equilibrium quality at lower elevations and underpredict it at higher elevations. There was also a tendency to overpredict the void fraction at lower elevations and underpredict it at higher elevations for the bundle test cases. The overprediction of void fraction at low elevations is likely caused by the X-ray densitometer measurement method used: under sub-cooled boiling conditions the voids accumulate at heated surfaces, and are therefore not seen in the centre of the sub-channel where the measurements are taken, so the experimentally determined void fractions will be lower than the actual void fractions. Some of the best

  10. Yucca Mountain Project thermal and mechanical codes first benchmark exercise: Part 3, Jointed rock mass analysis

    International Nuclear Information System (INIS)

    Costin, L.S.; Bauer, S.J.

    1991-10-01

    Thermal and mechanical models for intact and jointed rock mass behavior are being developed, verified, and validated at Sandia National Laboratories for the Yucca Mountain Site Characterization Project. Benchmarking is an essential part of this effort and is one of the tools used to demonstrate verification of engineering software used to solve thermomechanical problems. This report presents the results of the third (and final) phase of the first thermomechanical benchmark exercise. In the first phase of this exercise, nonlinear heat conduction codes were used to solve the thermal portion of the benchmark problem. The results from the thermal analysis were then used as input to the second and third phases of the exercise, which consisted of solving the structural portion of the benchmark problem. In the second phase of the exercise, a linear elastic rock mass model was used. In the third phase of the exercise, two different nonlinear jointed rock mass models were used to solve the thermostructural problem. Both models, the Sandia compliant joint model and the RE/SPEC joint empirical model, explicitly incorporate the effect of the joints on the response of the continuum. Three different structural codes, JAC, SANCHO, and SPECTROM-31, were used with the above models in the third phase of the study. Each model was implemented in two different codes so that direct comparisons of results from each model could be made. The results submitted by the participants showed that the finite element solutions using each model were in reasonable agreement. Some consistent differences between the solutions using the two models were noted but are not considered important to verification of the codes. 9 refs., 18 figs., 8 tabs

  11. Classification of criticality calculations with correlation coefficient method and its application to OECD/NEA burnup credit benchmarks phase III-A and II-A

    International Nuclear Information System (INIS)

    Okuno, Hiroshi

    2003-01-01

    A method is proposed for classifying benchmark results of criticality calculations according to their similarity. After formulating the method in terms of correlation coefficients, it was applied to the burnup credit criticality benchmarks Phase III-A and II-A, which were conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency of the Organisation for Economic Co-operation and Development (OECD/NEA). The Phase III-A benchmark was a series of criticality calculations for irradiated Boiling Water Reactor (BWR) fuel assemblies, whereas the Phase II-A benchmark was a suite of criticality calculations for irradiated Pressurized Water Reactor (PWR) fuel pins. These benchmark problems and their results are summarized. The correlation coefficients were calculated, and sets of benchmark calculation results were classified according to the criterion that the correlation coefficient be no less than 0.15 for the Phase III-A benchmark and 0.10 for the Phase II-A benchmark. When two benchmark calculation results belonged to the same group, one result was found to be predictable from the other; an example is shown for each of the benchmarks. While the evaluated nuclear data appeared to be the main factor behind the classification, further investigation is required to identify other factors. (author)
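The classification criterion described above can be illustrated as follows: compute the correlation coefficient between two sets of benchmark results (relative to their means) and group them when it meets the threshold, here the 0.15 value quoted for Phase III-A. The k-eff values below are invented for the example:

```python
import math

# Sketch of correlation-based grouping of benchmark calculation results.
# Data are hypothetical, not from the OECD/NEA benchmarks.

def correlation(xs, ys):
    """Pearson correlation coefficient of two result sets."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical k-eff results from two participants over five cases.
a = [0.995, 1.001, 0.998, 1.004, 0.992]
b = [0.993, 1.000, 0.997, 1.003, 0.991]

r = correlation(a, b)
same_group = r >= 0.15   # Phase III-A threshold quoted in the text
print(round(r, 3), same_group)
```

When two result sets fall in the same group by this criterion, one can be predicted from the other by a linear relation, which is the practical use of the classification.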

  12. JNC results of BN-600 benchmark calculation (phase 4)

    International Nuclear Information System (INIS)

    Ishikawa, Makoto

    2003-01-01

    The present work gives the results of JNC, Japan, for Phase 4 of the BN-600 core benchmark problem (hex-Z fully MOX-fuelled core model) organized by the IAEA. The benchmark specification is based on the RCM report of the IAEA CRP on 'Updated Codes and Methods to Reduce the Calculational Uncertainties of LMFR Reactivity Effects, Action 3.12' (calculations for the BN-600 fully MOX-fuelled core for subsequent transient analyses). The JENDL-3.2 nuclear data library was used for calculating 70-group ABBN-type group constants. Two cell models were applied for the fuel assembly and control rod calculations: a homogeneous and a heterogeneous (cylindrical supercell) model. The basic diffusion calculation was a three-dimensional hex-Z model in 18 groups (CITATION code). The transport calculations were 18-group, three-dimensional (NSHEX code), based on the Sn-transport nodal method developed at JNC. The thermal power generated per fission was based on Sher's data corrected on the basis of the ENDF/B-IV data library. The calculation results are presented in tables for intercomparison.

  13. Clean Energy Manufacturing Analysis Center Benchmark Report: Framework and Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-05-23

    This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.

  14. OECD/DOE/CEA VVER-1000 coolant transient (V1000CT) benchmark - a consistent approach for assessing coupled codes for RIA analysis

    International Nuclear Information System (INIS)

    Boyan D Ivanov; Kostadin N Ivanov; Eric Royer; Sylvie Aniel; Nikola Kolev; Pavlin Groudev

    2005-01-01

    The Rod Ejection Accident (REA) and the Main Steam Line Break (MSLB) are two of the most important Design Basis Accidents (DBAs) for the VVER-1000, exhibiting significant localized space-time effects. A consistent approach for assessing coupled three-dimensional (3-D) neutron kinetics/thermal hydraulics codes for these Reactivity Insertion Accidents (RIAs) is first to validate the codes using the available plant test (measured) data and then to perform cross-code comparative analyses for REA and MSLB scenarios. In the framework of a joint effort between the Nuclear Energy Agency (NEA) of the OECD, the United States Department of Energy (US DOE) and the Commissariat a l'Energie Atomique (CEA), France, a coupled 3-D neutron kinetics/thermal hydraulics benchmark was defined. The benchmark is based on data from Unit 6 of the Bulgarian Kozloduy Nuclear Power Plant (NPP). In performing this work, PSU (USA) and CEA-Saclay (France) collaborated with Bulgarian organizations, in particular the KNPP and the INRNE. The benchmark consists of two phases: Phase 1, main coolant pump switching on; Phase 2, coolant mixing tests and MSLB. In addition to the measured (experimental) scenario, an extreme calculation scenario was defined for better testing of 3-D neutronics/thermal-hydraulics techniques: a rod ejection simulation with the control rod being ejected in the core sector cooled by the switched-on MCP. Since previous coupled-code benchmarks indicated that further development of the mixing computation models in the integrated codes is necessary, a coolant mixing experiment and MSLB transients were selected for simulation in Phase 2 of the benchmark. The MSLB event is characterized by a large asymmetric cooling of the core, stuck rods and a large primary coolant flow variation. Two scenarios are defined in Phase 2: the first is taken from current licensing practice and the second is derived from the original one using aggravating

  15. Analysis of the VVER-1000 coolant transient benchmark phase 1 with the code system RELAP5/PARCS

    International Nuclear Information System (INIS)

    Victor Hugo Sanchez Espinoza

    2005-01-01

    As part of the reactor dynamics activities of FZK/IRS, the qualification of best-estimate coupled code systems for reactor safety evaluations is a key step toward improving their prediction capability and acceptability. The VVER-1000 Coolant Transient Benchmark Phase 1 represents an excellent opportunity to validate the simulation capability of the coupled code system RELAP5/PARCS regarding both the thermal hydraulic plant response (RELAP5), using measured data obtained during commissioning tests at the Kozloduy nuclear power plant unit 6, and the neutron kinetics models of PARCS for hexagonal geometries. Phase 1 is devoted to the analysis of the switching on of one main coolant pump while the other three pumps are in operation. It includes the following exercises: (a) investigation of the integral plant response using a best-estimate thermal hydraulic system code with a point kinetics model; (b) analysis of the core response for given initial and transient thermal hydraulic boundary conditions using a coupled code system with a 3-D neutron kinetics model; and (c) investigation of the integral plant response using a best-estimate coupled code system with 3-D neutron kinetics. Even before the test, complex flow conditions exist within the RPV, e.g. coolant mixing in the upper plenum caused by the reverse flow through loop 3 with the stopped pump. The test is initiated by switching on the main coolant pump of loop 3, which leads to a reversal of the flow through the respective piping. After about 13 s the mass flow rate through this loop reaches values comparable with those of the other loops. During this time period, the increased primary coolant flow causes a reduction of the core-averaged coolant temperature and thus an increase of the core power. Later on, the power stabilizes at a level higher than the initial power. In this analysis, special attention is paid to the prediction of the spatially asymmetric core cooling during

  16. Tendances Carbone no. 79 'Free allocations under Phase 3 benchmarks: early evidence of what has changed'

    International Nuclear Information System (INIS)

    Sartor, Oliver

    2013-01-01

    Among the publications of CDC Climat Research, the 'Tendances Carbone' bulletin specifically studies developments in the European market for CO2 allowances. This issue addresses the following points: one of the most controversial changes to the EU ETS in Phase 3 (2013-2020) has been the introduction of emissions-performance benchmarks for determining free allocations to non-electricity producers. Phases 1 and 2 used National Allocation Plans (NAPs). For practical reasons NAPs were drawn up by each Member State, but this led to problems, including over-generous allowance allocation, insufficiently harmonised allocations across countries and distorted incentives to reduce emissions. Benchmarking tries to fix this by allocating the equivalent of 100% of the allowances that would be needed if every installation used the best available technology. But this is not universally popular, and industries say that they might lose international competitiveness. A new study by CDC Climat and the Climate Economics Chair therefore examined the data from the preliminary Phase 3 free allocations of 20 EU Member States and asked: how much are free allocations actually going to change with benchmarking?

  17. WWER-1000 Burnup Credit Benchmark (CB5)

    International Nuclear Information System (INIS)

    Manolova, M.A.

    2002-01-01

    The paper gives the specification of the first phase (depletion calculations) of the WWER-1000 Burnup Credit Benchmark. The second phase, criticality calculations for the WWER-1000 fuel pin cell, will be specified after the evaluation of the results obtained in the first phase. The proposed benchmark is a continuation of the WWER benchmark activities in this field. (Author)

  18. JNC results of BN-600 benchmark calculation (phase 3)

    International Nuclear Information System (INIS)

    Ishikawa, M.

    2002-01-01

The present work gives the results of the Phase 3 BN-600 core benchmark problem, covering burnup and heterogeneity. The analytical method applied consisted of: the JENDL-3.2 nuclear data library; group constants (70 groups, ABBN-type self-shielding transport factors); a heterogeneous cell model for fuel and control rods; a basic diffusion calculation (CITATION code); and transport theory with mesh-size correction (NSHEX code, based on the SN transport nodal method developed by JNC). Burnup and heterogeneity calculation results are presented, obtained by applying both the diffusion and the transport approach at the beginning and end of cycle

  19. Sensitivity Analysis of OECD Benchmark Tests in BISON

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
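The correlation-based screening described above can be sketched in a few lines. This is a toy illustration on synthetic data, not the BISON/Dakota study itself: the three inputs, the response function and the sample size are invented stand-ins for the 17 parameters and 24 responses of the real benchmark.

```python
# Toy sketch of Pearson/Spearman sensitivity screening on synthetic samples.
# Inputs and response are illustrative assumptions, not the BISON model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 300                                          # same sample count as the study
X = rng.uniform(0.0, 1.0, size=(n, 3))           # stand-ins for input parameters
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.05, n)  # toy response

for j in range(X.shape[1]):
    r, _ = stats.pearsonr(X[:, j], y)            # linear association
    rho, _ = stats.spearmanr(X[:, j], y)         # monotonic (rank) association
    print(f"input {j}: Pearson={r:+.2f}, Spearman={rho:+.2f}")
```

Pearson picks up linear influence (input 0 here), Spearman additionally captures monotonic but nonlinear influence (input 1); an uninfluential input (input 2) shows near-zero values of both.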

  20. List of benchmarks for simulation tools of steam-water two-phase flows

    International Nuclear Information System (INIS)

    Mimouni, S.; Serre, G.

    2001-01-01

    A physical-numerical benchmarks matrix was drawn up in the context of the ECUME co-development action. Its purpose is to test the different potentialities required for the numerical methods to be used in the codes of the future which will benefit from advanced physics simulations. This benchmarks matrix is to be used for each numerical method in order to answer the following questions: What is the two-phase flow field that the combination of physics model + numerical scheme can process? What is the accuracy of the scheme for each type of physics situation? What is the numerical efficiency (computing time) of the numerical scheme for each type of physics situation? (author)

  1. List of benchmarks for simulation tools of steam-water two-phase flows

    Energy Technology Data Exchange (ETDEWEB)

    Mimouni, S. [Electricite de France (EDF), Div. R and D, 78 - Chatou (France); Serre, G. [CEA Grenoble, Dept. de Thermohydraulique et de Physique, DTP, 38 (France)

    2001-07-01

    A physical-numerical benchmarks matrix was drawn up in the context of the ECUME co-development action. Its purpose is to test the different potentialities required for the numerical methods to be used in the codes of the future which will benefit from advanced physics simulations. This benchmarks matrix is to be used for each numerical method in order to answer the following questions: What is the two-phase flow field that the combination of physics model + numerical scheme can process? What is the accuracy of the scheme for each type of physics situation? What is the numerical efficiency (computing time) of the numerical scheme for each type of physics situation? (author)

  2. BN-600 MOX Core Benchmark Analysis. Results from Phases 4 and 6 of a Coordinated Research Project on Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects

    International Nuclear Information System (INIS)

    2013-12-01

    For those Member States that have or have had significant fast reactor development programmes, it is of utmost importance that they have validated up to date codes and methods for fast reactor physics analysis in support of R and D and core design activities in the area of actinide utilization and incineration. In particular, some Member States have recently focused on fast reactor systems for minor actinide transmutation and on cores optimized for consuming rather than breeding plutonium; the physics of the breeder reactor cycle having already been widely investigated. Plutonium burning systems may have an important role in managing plutonium stocks until the time when major programmes of self-sufficient fast breeder reactors are established. For assessing the safety of these systems, it is important to determine the prediction accuracy of transient simulations and their associated reactivity coefficients. In response to Member States' expressed interest, the IAEA sponsored a coordinated research project (CRP) on Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects. The CRP started in November 1999 and, at the first meeting, the members of the CRP endorsed a benchmark on the BN-600 hybrid core for consideration in its first studies. Benchmark analyses of the BN-600 hybrid core were performed during the first three phases of the CRP, investigating different nuclear data and levels of approximation in the calculation of safety related reactivity effects and their influence on uncertainties in transient analysis prediction. In an additional phase of the benchmark studies, experimental data were used for the verification and validation of nuclear data libraries and methods in support of the previous three phases. The results of phases 1, 2, 3 and 5 of the CRP are reported in IAEA-TECDOC-1623, BN-600 Hybrid Core Benchmark Analyses, Results from a Coordinated Research Project on Updated Codes and Methods to Reduce the

  3. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks, R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem

  4. Analysis of the impact of correlated benchmark experiments on the validation of codes for criticality safety analysis

    International Nuclear Information System (INIS)

    Bock, M.; Stuke, M.; Behler, M.

    2013-01-01

The validation of a code for criticality safety analysis requires the recalculation of benchmark experiments. The selected benchmark experiments are chosen such that they have properties similar to the application case that has to be assessed. A common source of benchmark experiments is the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' (ICSBEP Handbook) compiled by the 'International Criticality Safety Benchmark Evaluation Project' (ICSBEP). In order to take full advantage of the information provided by the individual benchmark descriptions for the application case, the recommended procedure is to perform an uncertainty analysis. The latter is based on the uncertainties of experimental results included in most of the benchmark descriptions, and can be performed by means of the Monte Carlo sampling technique. The consideration of uncertainties is also being introduced in the supplementary sheet of DIN 25478 'Application of computer codes in the assessment of criticality safety'. However, for a correct treatment of uncertainties, taking into account only the individual uncertainties of the benchmark experiments is insufficient; in addition, correlations between benchmark experiments have to be handled correctly. For example, such correlations can arise when different cases of a benchmark experiment share the same components, such as fuel pins or fissile solutions. Thus, manufacturing tolerances of these components (e.g. the diameter of the fuel pellets) have to be considered in a consistent manner in all cases of the benchmark experiment. At the 2012 meeting of the Expert Group on 'Uncertainty Analysis for Criticality Safety Assessment' (UACSA) of the OECD/NEA, a benchmark proposal was outlined that aimed at determining the impact of benchmark correlations on the estimation of the computational bias of the neutron multiplication factor (k_eff). The analysis presented here is based on this proposal. (orig.)
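The mechanism the abstract describes, a shared manufacturing tolerance inducing correlation between benchmark cases, can be illustrated with a minimal Monte Carlo sketch. The sensitivity coefficients and tolerance magnitudes below are invented for illustration; a real analysis would sample the full set of perturbed transport-code inputs.

```python
# Minimal sketch: two benchmark cases that share one component tolerance
# (e.g. pellet diameter) become correlated in k_eff. Numbers are invented.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

shared = rng.normal(0.0, 1.0, n)    # pellet-diameter deviation, shared by both cases
indep_a = rng.normal(0.0, 1.0, n)   # case-specific uncertainty, case A
indep_b = rng.normal(0.0, 1.0, n)   # case-specific uncertainty, case B

# k_eff responses via assumed linear sensitivities (pcm per standard deviation)
keff_a = 1.0000 + 1e-5 * (200 * shared + 100 * indep_a)
keff_b = 0.9990 + 1e-5 * (200 * shared + 100 * indep_b)

corr = np.corrcoef(keff_a, keff_b)[0, 1]
print(f"induced correlation between cases: {corr:.2f}")  # ≈ 0.8 for these numbers
```

With these assumed sensitivities the correlation is 200²/(200²+100²) = 0.8; treating the two cases as independent in a bias estimate would be clearly wrong here.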

  5. Sustaining knowledge in the neutron generator community and benchmarking study. Phase II.

    Energy Technology Data Exchange (ETDEWEB)

    Huff, Tameka B.; Stubblefield, William Anthony; Cole, Benjamin Holland, II; Baldonado, Esther

    2010-08-01

    This report documents the second phase of work under the Sustainable Knowledge Management (SKM) project for the Neutron Generator organization at Sandia National Laboratories. Previous work under this project is documented in SAND2008-1777, Sustaining Knowledge in the Neutron Generator Community and Benchmarking Study. Knowledge management (KM) systems are necessary to preserve critical knowledge within organizations. A successful KM program should focus on people and the process for sharing, capturing, and applying knowledge. The Neutron Generator organization is developing KM systems to ensure knowledge is not lost. A benchmarking study involving site visits to outside industry plus additional resource research was conducted during this phase of the SKM project. The findings presented in this report are recommendations for making an SKM program successful. The recommendations are activities that promote sharing, capturing, and applying knowledge. The benchmarking effort, including the site visits to Toyota and Halliburton, provided valuable information on how the SEA KM team could incorporate a KM solution for not just the neutron generators (NG) community but the entire laboratory. The laboratory needs a KM program that allows members of the workforce to access, share, analyze, manage, and apply knowledge. KM activities, such as communities of practice (COP) and sharing best practices, provide a solution towards creating an enabling environment for KM. As more and more people leave organizations through retirement and job transfer, the need to preserve knowledge is essential. Creating an environment for the effective use of knowledge is vital to achieving the laboratory's mission.

  6. Sustaining knowledge in the neutron generator community and benchmarking study. Phase II

    International Nuclear Information System (INIS)

    Huff, Tameka B.; Stubblefield, William Anthony; Cole, Benjamin Holland II; Baldonado, Esther

    2010-01-01

    This report documents the second phase of work under the Sustainable Knowledge Management (SKM) project for the Neutron Generator organization at Sandia National Laboratories. Previous work under this project is documented in SAND2008-1777, Sustaining Knowledge in the Neutron Generator Community and Benchmarking Study. Knowledge management (KM) systems are necessary to preserve critical knowledge within organizations. A successful KM program should focus on people and the process for sharing, capturing, and applying knowledge. The Neutron Generator organization is developing KM systems to ensure knowledge is not lost. A benchmarking study involving site visits to outside industry plus additional resource research was conducted during this phase of the SKM project. The findings presented in this report are recommendations for making an SKM program successful. The recommendations are activities that promote sharing, capturing, and applying knowledge. The benchmarking effort, including the site visits to Toyota and Halliburton, provided valuable information on how the SEA KM team could incorporate a KM solution for not just the neutron generators (NG) community but the entire laboratory. The laboratory needs a KM program that allows members of the workforce to access, share, analyze, manage, and apply knowledge. KM activities, such as communities of practice (COP) and sharing best practices, provide a solution towards creating an enabling environment for KM. As more and more people leave organizations through retirement and job transfer, the need to preserve knowledge is essential. Creating an environment for the effective use of knowledge is vital to achieving the laboratory's mission.

  7. Benchmark validation of statistical models: Application to mediation analysis of imagery and memory.

    Science.gov (United States)

    MacKinnon, David P; Valente, Matthew J; Wurpts, Ingrid C

    2018-03-29

    This article describes benchmark validation, an approach to validating a statistical model. According to benchmark validation, a valid model generates estimates and research conclusions consistent with a known substantive effect. Three types of benchmark validation-(a) benchmark value, (b) benchmark estimate, and (c) benchmark effect-are described and illustrated with examples. Benchmark validation methods are especially useful for statistical models with assumptions that are untestable or very difficult to test. Benchmark effect validation methods were applied to evaluate statistical mediation analysis in eight studies using the established effect that increasing mental imagery improves recall of words. Statistical mediation analysis led to conclusions about mediation that were consistent with established theory that increased imagery leads to increased word recall. Benchmark validation based on established substantive theory is discussed as a general way to investigate characteristics of statistical models and a complement to mathematical proof and statistical simulation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
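The single-mediator model evaluated in the abstract can be sketched as two regressions: X → M (path a) and Y on M adjusting for X (path b), with the indirect effect estimated as a·b. The data and effect sizes below are synthetic; the variable names are hypothetical stand-ins for the imagery/recall design.

```python
# Sketch of product-of-coefficients mediation on synthetic data:
# x = imagery instruction (0/1), m = mediator, y = recall (standardized).
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.binomial(1, 0.5, n).astype(float)
m = 0.6 * x + rng.normal(0, 1, n)            # true path a = 0.6
y = 0.5 * m + 0.1 * x + rng.normal(0, 1, n)  # true path b = 0.5, direct = 0.1

a = np.polyfit(x, m, 1)[0]                   # slope of M on X
Z = np.column_stack([np.ones(n), m, x])      # intercept, M, X
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
b = coef[1]                                  # slope of M in Y ~ M + X
print(f"indirect effect a*b = {a * b:.2f}")  # true value 0.6 * 0.5 = 0.30
```

In benchmark-effect validation terms, one would check that a·b comes out positive and in a plausible range whenever the established imagery-improves-recall effect is present in the data.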

  8. Swiss electricity grid - Benchmarking pilot project

    International Nuclear Information System (INIS)

    2001-01-01

    This article is a short version of the ENET number 210369. This report for the Swiss Federal Office of Energy (SFOE) describes a benchmarking pilot project carried out as a second phase in the development of a formula for the regulation of an open electricity market in Switzerland. It follows on from an initial phase involving the definition of a 'blue print' and a basic concept. The aims of the pilot project - to check out the practicability of the concept - are discussed. The collection of anonymised data for the benchmarking model from over 30 electricity utilities operating on all 7 Swiss grid levels and their integration in the three areas 'Technology', 'Grid Costs' and 'Capital Invested' are discussed in detail. In particular, confidentiality and data protection aspects are looked at. The methods used in the analysis of the data are described and the results of an efficiency analysis of various utilities are presented. The report is concluded with a listing of questions concerning data collection and analysis as well as operational and capital costs that are still to be answered

  9. Benchmark Analysis of Subcritical Noise Measurements on a Nickel-Reflected Plutonium Metal Sphere

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess; Jesson Hutchinson

    2009-09-01

    Subcritical experiments using californium source-driven noise analysis (CSDNA) and Feynman variance-to-mean methods were performed with an alpha-phase plutonium sphere reflected by nickel shells, up to a maximum thickness of 7.62 cm. Both methods provide means of determining the subcritical multiplication of a system containing nuclear material. A benchmark analysis of the experiments was performed for inclusion in the 2010 edition of the International Handbook of Evaluated Criticality Safety Benchmark Experiments. Benchmark models have been developed that represent these subcritical experiments. An analysis of the computed eigenvalues and the uncertainty in the experiment and methods was performed. The eigenvalues computed using the CSDNA method were very close to those calculated using MCNP5; however, computed eigenvalues are used in the analysis of the CSDNA method. Independent calculations using KENO-VI provided similar eigenvalues to those determined using the CSDNA method and MCNP5. A slight trend with increasing nickel-reflector thickness was seen when comparing MCNP5 and KENO-VI results. For the 1.27-cm-thick configuration the MCNP eigenvalue was approximately 300 pcm greater. The calculated KENO eigenvalue was about 300 pcm greater for the 7.62-cm-thick configuration. The calculated results were approximately the same for a 5-cm-thick shell. The eigenvalues determined using the Feynman method are up to approximately 2.5% lower than those determined using either the CSDNA method or the Monte Carlo codes. The uncertainty in the results from either method was not large enough to account for the bias between the two experimental methods. An ongoing investigation is being performed to assess what potential uncertainties and/or biases exist that have yet to be properly accounted for. The dominant uncertainty in the CSDNA analysis was the uncertainty in selecting a neutron cross-section library for performing the analysis of the data. The uncertainty in the
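The Feynman variance-to-mean statistic mentioned above reduces, for a fixed gate width, to Y = variance/mean − 1 over the gated counts. The sketch below uses synthetic count data: the "clustered" generator is a crude stand-in for correlated fission-chain counts, not a physics model of the plutonium sphere.

```python
# Sketch of the Feynman variance-to-mean statistic on synthetic gate counts.
# A pure Poisson source gives Y ≈ 0; correlated (multiplying) sources give Y > 0.
import numpy as np

rng = np.random.default_rng(7)

def feynman_y(gate_counts):
    c = np.asarray(gate_counts, dtype=float)
    return c.var(ddof=1) / c.mean() - 1.0

poisson_gates = rng.poisson(lam=20.0, size=5000)        # uncorrelated source
# crude stand-in for chains: a random number of correlated bursts per gate
bursts = rng.poisson(lam=5.0, size=5000)
clustered_gates = np.array([rng.poisson(4.0 * k) for k in bursts])

print(f"Poisson   Y = {feynman_y(poisson_gates):+.3f}")
print(f"Clustered Y = {feynman_y(clustered_gates):+.3f}")
```

In practice Y is computed for a range of gate widths and fitted to extract the prompt decay constant and, from it, the subcritical multiplication.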

  10. A biosegmentation benchmark for evaluation of bioimage analysis methods

    Directory of Open Access Journals (Sweden)

    Kvilekval Kristian

    2009-11-01

Abstract. Background: We present a biosegmentation benchmark that includes infrastructure, datasets with associated ground truth, and validation methods for biological image analysis. The primary motivation for creating this resource comes from the fact that it is very difficult, if not impossible, for an end-user to choose from the wide range of segmentation methods available in the literature for a particular bioimaging problem. No single algorithm is likely to be equally effective on a diverse set of images, and each method has its own strengths and limitations. We hope that our benchmark resource will be of considerable help both to bioimaging researchers looking for novel image processing methods and to image processing researchers exploring application of their methods to biology. Results: Our benchmark consists of different classes of images and ground truth data, ranging in scale from the subcellular and cellular to the tissue level, each of which poses its own set of challenges to image analysis. The associated ground truth data can be used to evaluate the effectiveness of different methods, to improve methods and to compare results. Standard evaluation methods and some analysis tools are integrated into a database framework that is available online at http://bioimage.ucsb.edu/biosegmentation/. Conclusion: This online benchmark will facilitate integration and comparison of image analysis methods for bioimages. While the primary focus is on biological images, we believe that the dataset and infrastructure will be of interest to researchers and developers working with biological image analysis, image segmentation and object tracking in general.

  11. Results of the event sequence reliability benchmark exercise

    International Nuclear Information System (INIS)

    Silvestri, E.

    1990-01-01

The Event Sequence Reliability Benchmark Exercise is the fourth of a series of benchmark exercises on reliability and risk assessment, with specific reference to nuclear power plant applications, and is the logical continuation of the previous benchmark exercises on System Analysis, Common Cause Failure and Human Factors. The reference plant is the Nuclear Power Plant at Grohnde, Federal Republic of Germany, a 1300 MW PWR plant of KWU design. The specific objective of the Exercise is to model, quantify and analyze event sequences initiated by a loss of offsite power that involve the steam generator feed. The general aim is to develop a segment of a risk assessment incorporating all the specific aspects and models of quantification, such as common cause failure, Human Factors and System Analysis, developed in the previous reliability benchmark exercises, with the addition of the specific topics of dependences between homologous components belonging to different systems featuring in a given event sequence, and of uncertainty quantification, ending up with an overall assessment of the state of the art in risk assessment and the relative influence of quantification problems in a general risk assessment framework. The Exercise has been carried out in two phases, both requiring modelling and quantification, with the second phase adopting more restrictive rules and fixing certain common data, as emerged necessary from the first phase. Fourteen teams have participated in the Exercise, mostly from EEC countries, with one from Sweden and one from the USA. (author)

  12. Application of the Relap5-3D to phase 1 and 3 of the OECD-CSNI/NSC PWR MSLB benchmark related to TMI-1

    International Nuclear Information System (INIS)

    D'Auria, F.; Galassi, G.; Spadoni, A.; Hassan, Y.

    2001-01-01

The Relap5-3D code, the latest in the Relap5 series, is distinguished from the previous versions by its fully integrated, multi-dimensional thermal-hydraulic and kinetic modeling capability. It has been applied to Phases I and III of the OECD-CSNI/NSC PWR MSLB Benchmark, adopting the same thermal-hydraulic input deck already used with the Relap5/Parcs and Relap5/Quabbox coupled codes during the previous MSLB analysis. The OECD, jointly with the US NRC, proposed the PWR MSLB Benchmark in order to build a common understanding of the coupling between thermal hydraulics and neutronics and to evaluate the behavior of this transient with different coupled codes, giving emphasis to 3-D modeling. This paper deals with the application of the Relap5-3D code to Phases I and III of the PWR MSLB Benchmark. Relap5-3D is an internally coupled thermal-hydraulics/neutronics code; the thermal-hydraulics module is the INEEL version of Relap and the neutronics module is derived from the NESTLE multi-dimensional kinetics code. (author)

  13. Developing a benchmark for emotional analysis of music.

    Science.gov (United States)

    Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

The music emotion recognition (MER) field expanded rapidly in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent neural network based approaches combined with large feature sets work best for dynamic MER.
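Dynamic MER benchmarks of the kind described above typically score predictions per song against the 2 Hz annotation track. The sketch below assumes a simple per-song RMSE metric on synthetic valence data; the song, its annotation and the prediction are all invented stand-ins, not DEAM data.

```python
# Sketch of a benchmark-style per-song evaluation for dynamic MER:
# RMSE between predicted and annotated valence at a 2 Hz frame rate.
import numpy as np

rng = np.random.default_rng(3)

def per_song_rmse(pred, truth):
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(truth)) ** 2)))

frames = 45 * 2                                                  # 45 s excerpt at 2 Hz
truth = np.clip(np.cumsum(rng.normal(0, 0.02, frames)), -1, 1)   # smooth annotation
pred = truth + rng.normal(0, 0.1, frames)                        # noisy prediction

rmse = per_song_rmse(pred, truth)
print(f"valence RMSE for this song: {rmse:.3f}")
```

A full evaluation would average such per-song scores (and usually also rank correlations) over the whole test set, for both valence and arousal.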

  14. Benchmarking of Grid Fault Modes in Single-Phase Grid-Connected Photovoltaic Systems

    DEFF Research Database (Denmark)

    Yang, Yongheng; Blaabjerg, Frede; Zou, Zhixiang

    2013-01-01

Pushed by the booming installations of single-phase photovoltaic (PV) systems, the grid demands regarding the integration of PV systems are expected to be modified. Hence, future PV systems should become more active, with functionalities of Low Voltage Ride-Through (LVRT) and grid support...... phase systems under grid faults. The intent of this paper is to present a benchmarking of grid fault modes that might come in future single-phase PV systems. In order to map future challenges, the relevant synchronization and control strategies are discussed. Some faulty modes are studied experimentally...... and provided at the end of this paper. It is concluded that there are extensive control possibilities in single-phase PV systems under grid faults. The Second Order General Integral based PLL technique might be the most promising candidate for future single-phase PV systems because of its fast adaptive

  15. NRC-BNL Benchmark Program on Evaluation of Methods for Seismic Analysis of Coupled Systems

    International Nuclear Information System (INIS)

    Chokshi, N.; DeGrassi, G.; Xu, J.

    1999-01-01

    A NRC-BNL benchmark program for evaluation of state-of-the-art analysis methods and computer programs for seismic analysis of coupled structures with non-classical damping is described. The program includes a series of benchmarking problems designed to investigate various aspects of complexities, applications and limitations associated with methods for analysis of non-classically damped structures. Discussions are provided on the benchmarking process, benchmark structural models, and the evaluation approach, as well as benchmarking ground rules. It is expected that the findings and insights, as well as recommendations from this program will be useful in developing new acceptance criteria and providing guidance for future regulatory activities involving licensing applications of these alternate methods to coupled systems

  16. OECD/NEA benchmark for time-dependent neutron transport calculations without spatial homogenization

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Jason, E-mail: jason.hou@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Ivanov, Kostadin N. [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Boyarinov, Victor F.; Fomichenko, Peter A. [National Research Centre “Kurchatov Institute”, Kurchatov Sq. 1, Moscow (Russian Federation)

    2017-06-15

    Highlights: • A time-dependent homogenization-free neutron transport benchmark was created. • The first phase, known as the kinetics phase, was described in this work. • Preliminary results for selected 2-D transient exercises were presented. - Abstract: A Nuclear Energy Agency (NEA), Organization for Economic Co-operation and Development (OECD) benchmark for the time-dependent neutron transport calculations without spatial homogenization has been established in order to facilitate the development and assessment of numerical methods for solving the space-time neutron kinetics equations. The benchmark has been named the OECD/NEA C5G7-TD benchmark, and later extended with three consecutive phases each corresponding to one modelling stage of the multi-physics transient analysis of the nuclear reactor core. This paper provides a detailed introduction of the benchmark specification of Phase I, known as the “kinetics phase”, including the geometry description, supporting neutron transport data, transient scenarios in both two-dimensional (2-D) and three-dimensional (3-D) configurations, as well as the expected output parameters from the participants. Also presented are the preliminary results for the initial state 2-D core and selected transient exercises that have been obtained using the Monte Carlo method and the Surface Harmonic Method (SHM), respectively.

  17. Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data

    Science.gov (United States)

    Murata, Isao; Ohta, Masayuki; Kusaka, Sachie; Sato, Fuminobu; Miyamaru, Hiroyuki

    2017-09-01

There are many benchmark experiments carried out so far with DT neutrons, especially aiming at fusion reactor development. These integral experiments seemed vaguely to validate the nuclear data below 14 MeV. However, no precise studies exist now. The author's group thus started to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase, to generalize the above discussion, the energy range was expanded to the entire region. In this study, thought experiments with finer energy bins have thus been conducted to discuss how to generally estimate the performance of benchmark experiments. As a result of thought experiments with a point detector, the sensitivity for a discrepancy appearing in the benchmark analysis is "equally" due not only to the contribution directly conveyed to the detector, but also to the indirect contribution of the neutrons (named (A)) making the neutrons conveying the contribution, the indirect contribution of the neutrons (B) making the neutrons (A), and so on. From this concept, it would become clear from a sensitivity analysis in advance how well, and at which energies, nuclear data could be benchmarked with a given benchmark experiment.

  18. Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data

    Directory of Open Access Journals (Sweden)

    Murata Isao

    2017-01-01

There are many benchmark experiments carried out so far with DT neutrons, especially aiming at fusion reactor development. These integral experiments seemed vaguely to validate the nuclear data below 14 MeV. However, no precise studies exist now. The author's group thus started to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase, to generalize the above discussion, the energy range was expanded to the entire region. In this study, thought experiments with finer energy bins have thus been conducted to discuss how to generally estimate the performance of benchmark experiments. As a result of thought experiments with a point detector, the sensitivity for a discrepancy appearing in the benchmark analysis is "equally" due not only to the contribution directly conveyed to the detector, but also to the indirect contribution of the neutrons (named (A)) making the neutrons conveying the contribution, the indirect contribution of the neutrons (B) making the neutrons (A), and so on. From this concept, it would become clear from a sensitivity analysis in advance how well, and at which energies, nuclear data could be benchmarked with a given benchmark experiment.

  19. HPC Benchmark Suite NMx, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  20. OECD/DOE/CEA VVER-1000 coolant transient (V1000CT) benchmark for assessing coupled neutronics/thermal-hydraulics system codes for VVER-1000 RIA analysis

    International Nuclear Information System (INIS)

    Ivanov, B.; Ivanov, K.; Aniel, S.; Royer, E.; Kolev, N.; Groudev, P.

    2004-01-01

    The present paper describes the two phases of the OECD/DOE/CEA VVER-1000 coolant transient benchmark labeled V1000CT. This benchmark is based on data from the Bulgarian Kozloduy NPP Unit 6. The first phase of the benchmark was designed for the purpose of assessing neutron kinetics and thermal-hydraulic modeling for a VVER-1000 reactor, and specifically for their use in analyzing reactivity transients in a VVER-1000 reactor. Most of the results of Phase 1 will be compared against experimental data, and the rest will be used for code-to-code comparison. The second phase of the benchmark is planned for the evaluation and improvement of coolant mixing computational models. Code-to-code and code-to-data comparisons will be made based on data from a mixing experiment conducted at Kozloduy-6. A main steam line break will also be analyzed in the second phase of the V1000CT benchmark, and its results will be used for code-to-code comparison. The benchmark team has been involved in analyzing different aspects and performing sensitivity studies of the different benchmark exercises. The paper presents a comparison of selected results, obtained with two different system thermal-hydraulics codes, with the plant data for Exercise 1 of Phase 1 of the benchmark, as well as some results for Exercises 2 and 3. Overall, this benchmark has been well accepted internationally, with many organizations representing 11 countries participating in the first phase. (authors)

  1. Benchmarking the x-ray phase contrast imaging for ICF DT ice characterization using roughened surrogates

    Energy Technology Data Exchange (ETDEWEB)

    Dewald, E; Kozioziemski, B; Moody, J; Koch, J; Mapoles, E; Montesanti, R; Youngblood, K; Letts, S; Nikroo, A; Sater, J; Atherton, J

    2008-06-26

    We use x-ray phase contrast imaging to characterize the inner surface roughness of DT ice layers in capsules planned for future ignition experiments. It is therefore important to quantify how well the x-ray data correlate with the actual ice roughness. We benchmarked the accuracy of our system using surrogates with fabricated roughness characterized by high-precision standard techniques. Cylindrical artifacts with azimuthally uniform sinusoidal perturbations of 100 µm period and 1 µm amplitude demonstrated 0.02 µm accuracy, limited by the resolution of the imager and the source size of our phase contrast system. Spherical surrogates with random roughness close to that required of the DT ice for a successful ignition experiment were used to correlate the actual surface roughness with that obtained from the x-ray measurements. When comparing average power spectra of individual measurements, the mode-number limits up to which the x-ray phase contrast system agrees with surface characterization performed by atomic force microscopy are 60 and 90 for surrogates smoother and rougher, respectively, than the required ice roughness. These agreement mode-number limits are >100 when comparing matching individual measurements. We will discuss the implications for interpreting DT ice roughness data derived from phase-contrast x-ray imaging.
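
The power-spectrum comparison described above can be sketched with an FFT-based azimuthal mode decomposition. Everything below (trace, noise level, agreement criterion) is invented for illustration, not the actual characterization pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 512                                   # samples around the circumference
theta = np.linspace(0, 2*np.pi, N, endpoint=False)

# Hypothetical roughness trace (three azimuthal modes) plus measurement noise.
true_trace = (0.5*np.cos(3*theta + 0.1) + 0.2*np.cos(17*theta + 1.0)
              + 0.05*np.cos(45*theta + 2.0))
xray_trace = true_trace + 0.01*rng.standard_normal(N)

def mode_power(trace):
    """One-sided power per azimuthal mode number."""
    c = np.fft.rfft(trace) / N
    p = 2.0*np.abs(c)**2
    p[0] = np.abs(c[0])**2
    return p

p_true, p_xray = mode_power(true_trace), mode_power(xray_trace)

# Hypothetical agreement criterion: modes above a power floor whose measured
# power matches the reference within a factor of two.
significant = [m for m in range(1, N//2 + 1) if p_true[m] > 1e-4]
agree = [m for m in significant if 0.5 < p_xray[m]/p_true[m] < 2.0]
print(f"significant modes: {significant}; agreement up to mode {max(agree)}")
```

A mode-number agreement limit like the 60 and 90 quoted in the abstract would come from applying such a criterion to averaged spectra of real measurements.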

  2. The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example

    Science.gov (United States)

    Steyn, H. J.

    2015-01-01

    Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…

  3. Power loss benchmark of nine-switch converters in three-phase online-UPS application

    DEFF Research Database (Denmark)

    Qin, Zian; Loh, Poh Chiang; Blaabjerg, Frede

    2014-01-01

    Three-phase online-UPS is an appropriate application for the nine-switch converter, where the high voltage stress on the power devices caused by its reduced-switch feature can be relieved significantly. Its power loss and loss distribution still have flexibility from the control point of view, as parameters like the modulation index and the phase angle of the load are taken into account. The benchmark of power loss becomes a guide for users to make the best use of the advantages, and to bypass the disadvantages, of nine-switch converters. The results are finally verified on a 1.5 kW prototype.

  4. Benchmarking Analysis of Institutional University Autonomy in Denmark, Lithuania, Romania, Scotland, and Sweden

    DEFF Research Database (Denmark)

    This book presents a benchmark, comparative analysis of institutional university autonomy in Denmark, Lithuania, Romania, Scotland and Sweden. These countries are partners in the EU TEMPUS-funded project 'Enhancing University Autonomy in Moldova' (EUniAM). The benchmark analysis was conducted by the EUniAM Lead Task Force team, which collected and analysed secondary and primary data in each of these countries and produced the four benchmark reports that are part of this book. For each dimension and interface of institutional university autonomy, the members of the Lead Task Force team identified respective evaluation criteria and searched for similarities and differences in approaches to higher education sectors and respective autonomy regimes in these countries. The consolidated report that precedes the benchmark reports summarises the process and key findings from the four benchmark reports...

  5. Issues in benchmarking human reliability analysis methods: A literature review

    International Nuclear Information System (INIS)

    Boring, Ronald L.; Hendrickson, Stacey M.L.; Forester, John A.; Tran, Tuan Q.; Lois, Erasmia

    2010-01-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  6. Issues in benchmarking human reliability analysis methods : a literature review.

    Energy Technology Data Exchange (ETDEWEB)

    Lois, Erasmia (US Nuclear Regulatory Commission); Forester, John Alan; Tran, Tuan Q. (Idaho National Laboratory, Idaho Falls, ID); Hendrickson, Stacey M. Langfitt; Boring, Ronald L. (Idaho National Laboratory, Idaho Falls, ID)

    2008-04-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  7. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  8. Importance Performance Analysis as a Trade Show Performance Evaluation and Benchmarking Tool

    OpenAIRE

    Tafesse, Wondwesen; Skallerud, Kåre; Korneliussen, Tor

    2010-01-01

    Author's accepted version (post-print). The purpose of this study is to introduce importance performance analysis as a trade show performance evaluation and benchmarking tool. Importance performance analysis considers exhibitors’ performance expectation and perceived performance in unison to evaluate and benchmark trade show performance. The present study uses data obtained from exhibitors of an international trade show to demonstrate how importance performance analysis can be used to eval...
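
A minimal sketch of importance performance analysis as described above: each attribute is placed by its mean importance and mean performance rating, and the grand means split the plane into four action quadrants. The attribute names and ratings below are invented:

```python
# Importance-performance analysis (IPA): hypothetical exhibitor ratings.
attributes = {                      # name: (mean importance, mean performance)
    "booth location":   (4.5, 3.0),
    "staff training":   (4.2, 4.4),
    "brochure design":  (2.1, 4.0),
    "follow-up calls":  (2.5, 2.2),
}

# Grand means define the quadrant boundaries.
imp_bar = sum(i for i, _ in attributes.values()) / len(attributes)
perf_bar = sum(p for _, p in attributes.values()) / len(attributes)

def quadrant(importance, performance):
    """Classify an attribute into the classic four IPA quadrants."""
    if importance >= imp_bar:
        return "concentrate here" if performance < perf_bar else "keep up the good work"
    return "low priority" if performance < perf_bar else "possible overkill"

for name, (i, p) in attributes.items():
    print(f"{name:16s} -> {quadrant(i, p)}")
```

In a benchmarking setting, the same grid can be computed per trade show and compared across events.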

  9. The University of Pisa calculations for the Phase I of the OECD/NEA UAM Benchmark

    International Nuclear Information System (INIS)

    Ball, M.; Parisi, C.; D'Auria, F.

    2009-01-01

    In this paper we present the University of Pisa preliminary results for the first exercise of Phase I of the OECD/NEA Benchmark on Uncertainty Analysis in Modelling (UAM). The scope of Exercise 1 is to address the uncertainties due to the basic nuclear data as well as the impact of processing the nuclear and covariance data, the selection of the multi-group structure, and the self-shielding treatment. The DRAGON and TSUNAMI codes were employed, using the available covariance data matrix. The execution of the DRAGON calculations required the use of the ANGELO and LAMBDA codes for the extension of the covariance matrix from the original SCALE 44-group structure to the DRAGON 69-group structure. The uncertainties for the main cross sections were evaluated and are presented here. (authors)
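
Propagating a cross-section covariance matrix to an output uncertainty is conventionally done with the "sandwich rule", var(R) = Sᵀ C S. A minimal sketch with made-up 3-group numbers (not the actual SCALE or DRAGON data):

```python
import numpy as np

# Hypothetical 3-group relative sensitivities of a response to one cross
# section, and a made-up relative covariance matrix for that cross section.
S = np.array([0.10, 0.30, 0.05])           # (dR/R)/(dsigma/sigma) per group
C = np.array([[4.0, 1.0, 0.0],
              [1.0, 9.0, 2.0],
              [0.0, 2.0, 1.0]]) * 1e-4     # relative covariance matrix

var_R = S @ C @ S                          # sandwich rule: var = S^T C S
print(f"relative std dev of response: {np.sqrt(var_R):.4%}")
```

Changing the group structure (as ANGELO/LAMBDA do for the 44-group to 69-group conversion) amounts to re-expressing both S and C on a common energy grid before applying the same formula.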

  10. International benchmark study of advanced thermal hydraulic safety analysis codes against measurements on IEA-R1 research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Hainoun, A., E-mail: pscientific2@aec.org.sy [Atomic Energy Commission of Syria (AECS), Nuclear Engineering Department, P.O. Box 6091, Damascus (Syrian Arab Republic); Doval, A. [Nuclear Engineering Department, Av. Cmdt. Luis Piedrabuena 4950, C.P. 8400 S.C de Bariloche, Rio Negro (Argentina); Umbehaun, P. [Centro de Engenharia Nuclear – CEN, IPEN-CNEN/SP, Av. Lineu Prestes 2242-Cidade Universitaria, CEP-05508-000 São Paulo, SP (Brazil); Chatzidakis, S. [School of Nuclear Engineering, Purdue University, West Lafayette, IN 47907 (United States); Ghazi, N. [Atomic Energy Commission of Syria (AECS), Nuclear Engineering Department, P.O. Box 6091, Damascus (Syrian Arab Republic); Park, S. [Research Reactor Design and Engineering Division, Basic Science Project Operation Dept., Korea Atomic Energy Research Institute (Korea, Republic of); Mladin, M. [Institute for Nuclear Research, Campului Street No. 1, P.O. Box 78, 115400 Mioveni, Arges (Romania); Shokr, A. [Division of Nuclear Installation Safety, Research Reactor Safety Section, International Atomic Energy Agency, A-1400 Vienna (Austria)

    2014-12-15

    analysis codes comprising CATHARE, RELAP5, MERSAT and PARET. The code RELAP5 was used independently by four of the participating teams, and therefore the user effect and its impact on the code results could be characterized. The benchmark results demonstrate that most of the codes have the capability to correctly predict the SS case. For the LOFA case, however, the simulation results show discrepancies with the measurements, although the majority of the applied codes predict a qualitatively correct time evolution of the corresponding transients for the coolant and clad temperatures. It is noted that the peak temperatures and the gradients around them are predicted conservatively. The quantitative assessments of the benchmark results indicate different amounts of discrepancy between predictions and measurements, ranging between 7% and 20% for peak clad temperatures during the LOFA. The comparative prediction capability of the employed codes is addressed by additional code-to-code comparisons based on selected TH parameters, comprising the flow rate, pressure drop and heat transfer coefficient during the natural circulation phase.

  11. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    Full Text Available In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution’s competitive position and to learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled the development of a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects, relating them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature, and the author’s experience from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived from the conducted analysis.

  12. Investigation on method of elasto-plastic analysis for piping system (benchmark analysis)

    International Nuclear Information System (INIS)

    Kabaya, Takuro; Kojima, Nobuyuki; Arai, Masashi

    2015-01-01

    This paper provides a method of elasto-plastic analysis for the practical seismic design of nuclear piping systems. JSME started a task to establish a method of elasto-plastic analysis for nuclear piping systems, and benchmark analyses have been performed within the task to investigate candidate methods. Our company has participated in the benchmark analyses, and we have settled on a method that accurately reproduces the results of piping excitation tests. The recommended method of elasto-plastic analysis is as follows: 1) The elasto-plastic analysis is composed of a dynamic analysis of the piping system modeled with beam elements and a static analysis of the deformed elbow modeled with shell elements. 2) A bi-linear elasto-plastic property is applied. The yield point is the standardized yield point multiplied by 1.2, the second gradient is 1/100 of Young's modulus, and kinematic hardening is used as the hardening rule. 3) The fatigue life is evaluated from the strain ranges obtained by the elasto-plastic analysis, using the rain flow method and the fatigue curve of previous studies. (author)
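
The bilinear kinematic-hardening model in item 2 can be sketched as a one-dimensional return-mapping stress update. The material constants below are illustrative placeholders, not the benchmark values:

```python
# 1-D bilinear kinematic-hardening stress update (return mapping).
E  = 200e3          # Young's modulus [MPa] (illustrative)
H  = E / 100        # second (hardening) gradient, per the recommended model
Sy = 1.2 * 205.0    # yield point: a standardized yield times 1.2 [MPa]

def update(stress, backstress, dstrain):
    """Return (stress, backstress) after one strain increment."""
    trial = stress + E * dstrain           # elastic predictor
    f = abs(trial - backstress) - Sy       # yield function
    if f <= 0.0:                           # elastic step
        return trial, backstress
    sign = 1.0 if trial > backstress else -1.0
    dgamma = f / (E + H)                   # plastic multiplier
    stress = trial - E * dgamma * sign     # plastic corrector
    backstress += H * dgamma * sign        # kinematic hardening shifts center
    return stress, backstress

# Ramp to 0.5% strain in small steps; post-yield tangent is E*H/(E+H).
s = a = 0.0
for _ in range(50):
    s, a = update(s, a, 1e-4)
print(f"stress at 0.5% strain: {s:.1f} MPa")
```

The strain histories produced by such a model are what the rain flow method in item 3 would then reduce to cycle ranges for the fatigue evaluation.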

  13. FENDL neutronics benchmark: Specifications for the calculational neutronics and shielding benchmark

    International Nuclear Information System (INIS)

    Sawan, M.E.

    1994-12-01

    During the IAEA Advisory Group Meeting on "Improved Evaluations and Integral Data Testing for FENDL", held in Garching near Munich, Germany, 12-16 September 1994, Working Group II on "Experimental and Calculational Benchmarks on Fusion Neutronics for ITER" recommended that a calculational benchmark representative of the ITER design be developed. This report describes the resulting neutronics and shielding calculational benchmark, available to scientists interested in performing analyses for it. (author)

  14. Analysis of Benchmark 2 results

    International Nuclear Information System (INIS)

    Bacha, F.; Lefievre, B.; Maillard, J.; Silva, J.

    1994-01-01

    The code GEANT315 has been compared to different codes in two benchmarks. We analyze its performance through our results, especially in the thick-target case. In spite of gaps in nucleus-nucleus interaction theories at intermediate energies, benchmarks allow possible improvements of the physical models used in our codes. Thereafter, a scheme for a radioactive waste burning system is studied. (authors). 4 refs., 7 figs., 1 tab

  15. OECD/DOE/CEA VVER-1000 Coolant Transient Benchmark. Summary Record of the First Workshop (V1000-CT1)

    International Nuclear Information System (INIS)

    2003-01-01

    The first workshop for the VVER-1000 Coolant Transient (V1000CT) Benchmark was hosted by the Commissariat a l'Energie Atomique, Centre d'Etudes de Saclay, France. The V1000CT benchmark defines standard problems for the validation of coupled three-dimensional (3-D) neutron-kinetics/system thermal-hydraulics codes for application to Soviet-designed VVER-1000 reactors, using actual plant data without any scaling. The overall objective is to assess the computer codes used in the safety analysis of VVER power plants, specifically for their use in reactivity transient simulations in a VVER-1000. The V1000CT benchmark consists of two phases: V1000CT-1, simulation of the switching on of one main coolant pump (MCP) while the other three MCPs are in operation; and V1000CT-2, calculation of coolant mixing tests and a Main Steam Line Break (MSLB) scenario. Further background information on this benchmark can be found at the OECD/NEA benchmark web site. The purpose of the first workshop was to review the benchmark activities after the Starter Meeting held the previous year in Dresden, Germany: to discuss the participants' feedback and the modifications introduced in the Benchmark Specifications for Phase 1; to present and discuss modelling issues and preliminary results from the three exercises of Phase 1; to discuss the modelling issues of Exercise 1 of Phase 2; and to define the work plan and schedule to complete the two phases.

  16. Benchmarking Sustainability Practices Use throughout Industrial Construction Project Delivery

    Directory of Open Access Journals (Sweden)

    Sungmin Yun

    2017-06-01

    Full Text Available Despite the efforts toward sustainability studies in building and infrastructure construction, sustainability issues in industrial construction remain understudied. Further, few studies evaluate and benchmark sustainability issues in industrial construction from a management perspective. This study presents a phase-based benchmarking framework for evaluating sustainability practices use, focusing on industrial facilities projects. Based on the framework, this study quantifies and assesses sustainability practices use, and further sorts the results by project phase and major project characteristics, including project type, project nature, and project delivery method. The results show that sustainability practices were implemented more in the construction and startup phases than in other phases, with a very broad range. An assessment by project type and project nature showed significant differences in sustainability practices use, but no significant difference by project delivery method. This study contributes a benchmarking method for sustainability practices in industrial facilities projects at the project phase level. It also discusses and provides an application of phase-based benchmarking for sustainability in industrial construction.

  17. Benchmarking local healthcare-associated infections: Available benchmarks and interpretation challenges

    Directory of Open Access Journals (Sweden)

    Aiman El-Saed

    2013-10-01

    Full Text Available Summary: Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when considering such a benchmark. The GCC center for infection control has taken some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark would better enable healthcare workers and researchers to obtain more accurate and realistic comparisons. Keywords: Benchmarking, Comparison, Surveillance, Healthcare-associated infections
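
Indirect standardization, one of the risk-adjustment methods mentioned above, is often summarized as a standardized infection ratio (SIR): observed infections divided by the count expected if benchmark rates applied to the local exposure. All rates and counts below are hypothetical:

```python
# Indirect standardization sketch: standardized infection ratio (SIR).
benchmark_rates = {   # infections per 1000 device-days, by ICU type
    "medical": 2.0,
    "surgical": 3.5,
}
local = {             # (observed infections, device-days) by ICU type
    "medical": (6, 2500),
    "surgical": (5, 1200),
}

observed = sum(o for o, _ in local.values())
expected = sum(benchmark_rates[k] * dd / 1000 for k, (_, dd) in local.items())
sir = observed / expected
print(f"SIR = {sir:.2f}  ({'above' if sir > 1 else 'at or below'} benchmark)")
```

Stratifying by unit type before summing is what makes the comparison fairer than benchmarking the crude overall rate.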

  18. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...
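
An input-oriented CCR DEA model of the kind used in such regulation can be sketched as a small linear program. The operator data below are invented, and this is only a toy envelopment-form formulation:

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA (envelopment form) for a toy set of operators:
# one input (cost), one output (energy delivered). Data are hypothetical.
X = np.array([[100.0], [120.0], [150.0], [90.0]])   # inputs, one row per DMU
Y = np.array([[500.0], [600.0], [600.0], [360.0]])  # outputs, one row per DMU

def efficiency(k):
    """Efficiency score of DMU k: min theta s.t. a convex-cone peer mix
    produces at least DMU k's outputs using at most theta times its inputs."""
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]
    c = np.zeros(1 + n); c[0] = 1.0                 # variables: [theta, lambdas]
    A_ub, b_ub = [], []
    for i in range(m):                              # sum lam_j x_ij <= theta*x_ik
        A_ub.append([-X[k, i]] + list(X[:, i])); b_ub.append(0.0)
    for r in range(s):                              # sum lam_j y_rj >= y_rk
        A_ub.append([0.0] + list(-Y[:, r])); b_ub.append(-Y[k, r])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

for k in range(4):
    print(f"DMU {k}: efficiency = {efficiency(k):.3f}")
```

A regulator would then translate scores below one into revenue-cap adjustments, after the data validation and outlier screening the abstract emphasizes.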

  19. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  20. Analysis of the OECD main steam line break benchmark using ANC-K/MIDAC code

    International Nuclear Information System (INIS)

    Aoki, Shigeaki; Tahara, Yoshihisa; Suemura, Takayuki; Ogawa, Junto

    2004-01-01

    A three-dimensional (3D) neutronics and thermal-hydraulics (T/H) coupled code, ANC-K/MIDAC, has been developed. It is the combination of the 3D nodal kinetics code ANC-K and the 3D drift-flux thermal-hydraulics code MIDAC. In order to verify the adequacy of this code, we have analyzed several international benchmark problems. In this paper, we show the calculation results for the "OECD Main Steam Line Break Benchmark (MSLB benchmark)", which poses a typical local power-peaking problem; we calculated the return-to-power scenario of the Phase II problem. The comparison of the results shows very good agreement of important core parameters between ANC-K/MIDAC and other participant codes. (author)

  1. RETRAN-3D Analysis Of The OECD/NRC Peach Bottom 2 Turbine Trip Benchmark

    International Nuclear Information System (INIS)

    Barten, W.; Coddington, P.

    2003-01-01

    This paper presents the PSI results on the different Phases of the Peach Bottom BWR Turbine Trip Benchmark using the RETRAN-3D code. In the first part of the paper, the analysis of Phase 1 is presented, in which the system pressure is predicted based on a pre-defined core power distribution. These calculations demonstrate the importance of accurate modelling of the non-equilibrium effects within the steam separator region. In the second part, a selection of the RETRAN-3D results for Phase 2 are given, where the power is predicted using a 3-D core with pre-defined core flow and pressure boundary conditions. A comparison of calculations using the different (Benchmark-specified) boundary conditions illustrates the sensitivity of the power maximum on the various resultant system parameters. In the third part of the paper, the results of the Phase 3 calculation are presented. This phase, which is a combination of the analytical work of Phases 1 and 2, gives good agreement with the measured data. The coupling of the pressure and flow oscillations in the steam line, the mass balance in the core, the (void) reactivity and the core power are all discussed. It is shown that the reactivity effects resulting from the change in the core void can explain the overall behaviour of the transient prior to the reactor scram. The time-dependent, normalized power for different thermal-hydraulic channels in the core is discussed in some detail. Up to the time of reactor scram, the power change was similar in all channels, with differences of the order of only a few percent. The axial shape of the channel powers at the time of maximum (overall) power increased in the core centre (compared with the shape at time zero). These changes occur as a consequence of the relative change in the channel void, which is largest in the region of the onset of boiling, and the influence on the different fuel assemblies of the complex ring pattern of the control rods. (author)

  2. RETRAN-3D Analysis Of The OECD/NRC Peach Bottom 2 Turbine Trip Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Barten, W.; Coddington, P

    2003-03-01

    This paper presents the PSI results on the different Phases of the Peach Bottom BWR Turbine Trip Benchmark using the RETRAN-3D code. In the first part of the paper, the analysis of Phase 1 is presented, in which the system pressure is predicted based on a pre-defined core power distribution. These calculations demonstrate the importance of accurate modelling of the non-equilibrium effects within the steam separator region. In the second part, a selection of the RETRAN-3D results for Phase 2 are given, where the power is predicted using a 3-D core with pre-defined core flow and pressure boundary conditions. A comparison of calculations using the different (Benchmark-specified) boundary conditions illustrates the sensitivity of the power maximum on the various resultant system parameters. In the third part of the paper, the results of the Phase 3 calculation are presented. This phase, which is a combination of the analytical work of Phases 1 and 2, gives good agreement with the measured data. The coupling of the pressure and flow oscillations in the steam line, the mass balance in the core, the (void) reactivity and the core power are all discussed. It is shown that the reactivity effects resulting from the change in the core void can explain the overall behaviour of the transient prior to the reactor scram. The time-dependent, normalized power for different thermal-hydraulic channels in the core is discussed in some detail. Up to the time of reactor scram, the power change was similar in all channels, with differences of the order of only a few percent. The axial shape of the channel powers at the time of maximum (overall) power increased in the core centre (compared with the shape at time zero). These changes occur as a consequence of the relative change in the channel void, which is largest in the region of the onset of boiling, and the influence on the different fuel assemblies of the complex ring pattern of the control rods. (author)

  3. Benchmark calculations of power distribution within fuel assemblies. Phase 2: comparison of data reduction and power reconstruction methods in production codes

    International Nuclear Information System (INIS)

    2000-01-01

    Systems loaded with plutonium in the form of mixed-oxide (MOX) fuel show somewhat different neutronic characteristics compared with those using conventional uranium fuels. In order to maintain adequate safety standards, it is essential to accurately predict the characteristics of MOX-fuelled systems and to further validate both the nuclear data and the computation methods used. A computational benchmark on power distribution within fuel assemblies was carried out at an international level to compare the different techniques used in production codes for fine flux prediction in systems partially loaded with MOX fuel. It addressed first the numerical schemes for pin power reconstruction, then investigated the global performance, including cross-section data reduction methods. This report provides the detailed results of this second phase of the benchmark. The analysis of the results revealed that basic data still need to be improved, primarily for the higher plutonium isotopes and the minor actinides. (author)

  4. Two-phase flow characteristics analysis code: MINCS

    International Nuclear Information System (INIS)

    Watanabe, Tadashi; Hirano, Masashi; Akimoto, Masayuki; Tanabe, Fumiya; Kohsaka, Atsuo.

    1992-03-01

    The two-phase flow characteristics analysis code MINCS (Modularized and INtegrated Code System) has been developed to provide a computational tool for analyzing two-phase flow phenomena in one-dimensional ducts. In MINCS, nine types of two-phase flow models, from a basic two-fluid nonequilibrium (2V2T) model to a simple homogeneous equilibrium (1V1T) model, can be used under the same numerical solution method. The numerical technique is based on the implicit finite difference method, which enhances numerical stability. The code structure is highly modularized, so that new constitutive relations and correlations can be easily implemented into the code and thereby evaluated. A flow pattern can be fixed regardless of flow conditions, and state equations or steam tables can be selected. It is therefore easy to calculate physical or numerical benchmark problems. (author)
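
The stability benefit of an implicit finite-difference scheme can be sketched on a far simpler problem than the MINCS models: one-dimensional advection of a void-fraction front with a first-order implicit upwind step. The setup and numbers are entirely hypothetical, not the MINCS equations:

```python
import numpy as np

# Implicit first-order upwind for d(alpha)/dt + u*d(alpha)/dx = 0.
N, dx, u, dt = 100, 0.01, 1.0, 0.05     # CFL = u*dt/dx = 5: explicit would blow up
alpha = np.zeros(N)                     # initial void fraction along the duct
alpha_in = 1.0                          # inlet boundary value

c = u * dt / dx
for _ in range(10):
    new = np.empty(N)
    new[0] = (alpha[0] + c * alpha_in) / (1.0 + c)
    for i in range(1, N):               # forward sweep solves the implicit
        new[i] = (alpha[i] + c * new[i-1]) / (1.0 + c)   # bidiagonal system
    alpha = new

print(f"inlet-side alpha = {alpha[0]:.3f}, outlet-side alpha = {alpha[-1]:.6f}")
```

Even at a Courant number of 5 the update stays bounded and monotone, which is the kind of robustness an implicit two-phase solver trades against added numerical diffusion.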

  5. Analysis of a molten salt reactor benchmark

    International Nuclear Information System (INIS)

    Ghosh, Biplab; Bajpai, Anil; Degweker, S.B.

    2013-01-01

    This paper discusses results of our studies of an IAEA molten salt reactor (MSR) benchmark. The benchmark, proposed by Japan, involves burnup calculations of a single lattice cell of a MSR for burning plutonium and other minor actinides. We have analyzed this cell with in-house developed burnup codes BURNTRAN and McBURN. This paper also presents a comparison of the results of our codes and those obtained by the proposers of the benchmark. (author)

  6. Space Weather Action Plan Ionizing Radiation Benchmarks: Phase 1 update and plans for Phase 2

    Science.gov (United States)

    Talaat, E. R.; Kozyra, J.; Onsager, T. G.; Posner, A.; Allen, J. E., Jr.; Black, C.; Christian, E. R.; Copeland, K.; Fry, D. J.; Johnston, W. R.; Kanekal, S. G.; Mertens, C. J.; Minow, J. I.; Pierson, J.; Rutledge, R.; Semones, E.; Sibeck, D. G.; St Cyr, O. C.; Xapsos, M.

    2017-12-01

    Changes in the near-Earth radiation environment can affect satellite operations, astronauts in space, commercial space activities, and the radiation environment on aircraft at relevant latitudes and altitudes. Understanding the diverse effects of increased radiation is challenging, but producing ionizing radiation benchmarks will help address these effects. The following areas have been considered in addressing the near-Earth radiation environment: the Earth's trapped radiation belts, the galactic cosmic ray background, and solar energetic-particle events. The radiation benchmarks attempt to account for any change in the near-Earth radiation environment which, under extreme cases, could present a significant risk to critical infrastructure operations or human health. These ionizing radiation benchmarks, with associated confidence levels, will define at least the radiation intensity as a function of time, particle type, and energy, for an occurrence frequency of 1 in 100 years and for an intensity level at the theoretical maximum for the event. In this paper, we present the benchmarks that address radiation levels at all applicable altitudes and latitudes in the near-Earth environment, the assumptions made and the associated uncertainties, and the next steps planned for updating the benchmarks.
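
    A 1-in-100-year benchmark of the kind described above can be illustrated with a toy empirical exceedance calculation. The function below is a deliberately simplified sketch (real benchmarks extrapolate sparse data with fitted tail models and confidence levels); the event list and observation period are invented.

```python
def return_level(event_intensities, years_observed, return_period=100.0):
    """Smallest observed intensity whose empirical exceedance rate
    (events per year at or above it) is at most 1/return_period."""
    target_rate = 1.0 / return_period
    for x in sorted(event_intensities):
        rate = sum(1 for e in event_intensities if e >= x) / years_observed
        if rate <= target_rate:
            return x
    return max(event_intensities)

# 30 solar-particle-event intensities (arbitrary units) seen over 200 years:
events = list(range(1, 31))
print(return_level(events, 200.0))   # 29: only two events reach it (0.01/yr)
```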

  7. Benchmarking study and its application for shielding analysis of large accelerator facilities

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hee-Seock; Kim, Dong-hyun; Oranj, Leila Mokhtari; Oh, Joo-Hee; Lee, Arim; Jung, Nam-Suk [POSTECH, Pohang (Korea, Republic of)

    2015-10-15

    Shielding analysis is one of the subjects indispensable to the construction of a large accelerator facility. Several methods, such as Monte Carlo, discrete ordinates, and simplified calculations, have been used for this purpose. Calculation precision can be improved by increasing the number of trials (histories); accuracy, however, remains a major issue in shielding analysis. To secure accuracy in Monte Carlo calculations, benchmarking against experimental data and code-to-code comparison are fundamental. In this paper, benchmarking results for electrons, protons, and heavy ions are presented, and the proper application of these results is discussed. The benchmarking calculations, which are indispensable in shielding analysis, were performed for different particles: proton, heavy ion, and electron. Four multi-particle Monte Carlo codes, MCNPX, FLUKA, PHITS, and MARS, were examined over the higher energy range relevant to large accelerator facilities. The degree of agreement between the experimental data, including the SINBAD database, and the calculated results was estimated in terms of secondary neutron production and attenuation through the concrete and iron shields. The discrepancies and the features of the Monte Carlo codes were investigated, and the application of the benchmarking results is discussed with respect to safety margins and the selection of a code for shielding analysis. In most cases, the tested Monte Carlo codes give credible results, apart from a few limitations of each code.
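
    In their simplest form, the attenuation comparisons described above reduce to exponential shield attenuation and calculation-to-experiment (C/E) ratios. The snippet below is a generic point-kernel sketch, not any of the cited codes; the tenth-value-layer figure is an assumed illustrative number.

```python
def attenuation(dose0, thickness_cm, tvl_cm):
    """Dose behind a slab shield from the tenth-value layer (TVL):
    each TVL of material cuts the dose by a factor of ten."""
    return dose0 * 10.0 ** (-thickness_cm / tvl_cm)

def c_over_e(calculated, measured):
    """Calculation-to-experiment ratios, the usual benchmark agreement metric."""
    return [c / e for c, e in zip(calculated, measured)]

# e.g. 100 cm of concrete with an assumed 50 cm TVL attenuates by 10^-2:
print(attenuation(100.0, 100.0, 50.0))      # ~1.0 (same dose units as input)
print(c_over_e([2.0, 4.0], [2.0, 5.0]))     # [1.0, 0.8]
```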

  9. Benchmarking

    OpenAIRE

    Meylianti S., Brigita

    1999-01-01

    Benchmarking means different things to different people. There are five types of benchmarking, namely internal benchmarking, competitive benchmarking, industry/functional benchmarking, process/generic benchmarking, and collaborative benchmarking. Each type of benchmarking has its own advantages as well as disadvantages. It is therefore important to know what kind of benchmarking is suitable for a specific application. This paper will discuss those five types of benchmarking in detail, includ...

  10. Numisheet2005 Benchmark Analysis on Forming of an Automotive Underbody Cross Member: Benchmark 2

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao Jian

    2005-01-01

    This report presents an international cooperation benchmark effort focusing on simulations of a sheet metal stamping process. A forming process of an automotive underbody cross member using steel and aluminum blanks is used as a benchmark. Simulation predictions from each submission are analyzed via comparison with the experimental results. A brief summary of various models submitted for this benchmark study is discussed. Prediction accuracy of each parameter of interest is discussed through the evaluation of cumulative errors from each submission

  11. WLUP benchmarks

    International Nuclear Information System (INIS)

    Leszczynski, Francisco

    2002-01-01

    The IAEA-WIMS Library Update Project (WLUP) is in its final stage. The final library will be released in 2002. It is the result of research and development by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. The organization, naming conventions, contents, and documentation of the WLUP benchmarks are presented, together with an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE for the analysis and plotting of results is described, and some examples are given. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)

  12. Policy analysis of the English graduation benchmark in Taiwan ...

    African Journals Online (AJOL)

    To nudge students to study English and to improve their English proficiency, many universities in Taiwan have imposed an English graduation benchmark on their students. This article reviews this policy, using the theoretic framework for education policy analysis proposed by Haddad and Demsky (1995). The author ...

  13. RELAP5-3D Results for Phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom

    2012-06-01

    The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation), and a perturbation/mixer module. As part of the verification and validation activities, steady-state results have been obtained for Exercise 2 of Phase I of the newly defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady-state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3, using a two-group cross-section library and the RELAP5-3D model developed for Exercise 2.

  14. CIEMAT’s contribution to the phase II of the OECD-NEA RIA benchmark on thermo-mechanical fuel codes performance

    Energy Technology Data Exchange (ETDEWEB)

    Sagrado, I.C.; Vallejo, I.; Herranz, L.E.

    2015-07-01

    As part of the international efforts devoted to validating and/or updating the current fuel safety criteria, the OECD-NEA has launched a second phase of the RIA benchmark on thermomechanical fuel code performance. CIEMAT contributes by simulating the ten proposed scenarios with FRAPTRAN and SCANAIR. Both codes lead to similar predictions during the heating-up; however, during the cooling-down significant deviations may appear. These are mainly caused by the estimates of gap closure and re-opening and by the clad-to-water heat exchange approaches. The uncertainty analysis performed for the SCANAIR estimates leads to uncertainty ranges below 15% and 28% for maximum temperatures and deformations, respectively. The corresponding sensitivity analysis shows that, in addition to the injected energy, special attention should be paid to the fuel thermal expansion and clad yield stress models. (Author)
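
    Uncertainty ranges of the kind quoted above can be produced by sampling the uncertain inputs and recording the spread of the response. The sketch below is a generic Monte Carlo propagation over a toy clad-temperature surrogate; the model, parameters, and uncertainty bands are invented for illustration and are not SCANAIR's or FRAPTRAN's actual models.

```python
import random

def propagate(model, nominal, rel_unc, n=2000, seed=1):
    """Sample each input uniformly within its relative uncertainty band
    and return the mean response and its relative half-spread."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        sample = {k: v * (1.0 + rng.uniform(-rel_unc[k], rel_unc[k]))
                  for k, v in nominal.items()}
        out.append(model(sample))
    mean = sum(out) / n
    half_width = (max(out) - min(out)) / 2.0
    return mean, half_width / mean

# Toy surrogate: clad temperature linear in injected energy and expansion.
surrogate = lambda p: 300.0 + 40.0 * p["energy"] + 10.0 * p["expansion"]
mean, rel = propagate(surrogate,
                      nominal={"energy": 1.0, "expansion": 1.0},
                      rel_unc={"energy": 0.10, "expansion": 0.10})
print(mean, rel)   # mean near 350, relative uncertainty of a few percent
```

    A sensitivity ranking falls out of the same machinery by perturbing one input at a time and comparing the response spreads.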

  15. A new algorithm for benchmarking in integer data envelopment analysis

    Directory of Open Access Journals (Sweden)

    M. M. Omran

    2012-08-01

    The aim of this study is to investigate the effect of integer data in data envelopment analysis (DEA). The inputs and outputs in the standard types of DEA are assumed to be continuous. In most application-oriented problems, however, some or all data are integers, so the continuity assumption on the values does not hold; for example, situations in which the inputs/outputs represent numbers of cars, people, etc. The benchmark unit obtained by projection onto the efficiency frontier is artificial and need not have integer inputs/outputs. By rounding off the projection point, we may lose feasibility or end up with an inefficient DMU. In such cases, it is necessary to provide a benchmark unit such that the considered unit reaches efficiency. In this short communication, a novel algorithm is proposed that projects an inefficient DMU in such a way that the resulting benchmark takes fully integer input/output values.
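
    The rounding problem described above is easy to reproduce with a single-input, single-output example. The sketch below is not the paper's algorithm; it merely shows why naive rounding of the continuous projection fails, which is what motivates a dedicated integer-DEA procedure.

```python
import math

def efficiency(x, y, frontier_ratio):
    """Output-oriented efficiency of a single-input, single-output DMU,
    relative to the best observed output/input ratio."""
    return (y / x) / frontier_ratio

units = [(3, 10), (4, 8), (5, 9)]            # (input, output) per DMU
frontier = max(y / x for x, y in units)      # best ratio: 10/3

x, y = 4, 8                                  # an inefficient DMU
target = frontier * x                        # continuous projection: 13.33...
down, up = math.floor(target), math.ceil(target)

print(efficiency(x, down, frontier))   # 0.975: rounding down stays inefficient
print(efficiency(x, up, frontier))     # 1.05: rounding up overshoots the frontier
```

    Neither rounded point is an admissible integer benchmark, so the projection itself must be constructed over the integer lattice.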

  16. Policy Analysis of the English Graduation Benchmark in Taiwan

    Science.gov (United States)

    Shih, Chih-Min

    2012-01-01

    To nudge students to study English and to improve their English proficiency, many universities in Taiwan have imposed an English graduation benchmark on their students. This article reviews this policy, using the theoretic framework for education policy analysis proposed by Haddad and Demsky (1995). The author presents relevant research findings,…

  17. Benchmark analysis of MCNP ENDF/B-VI iron

    International Nuclear Information System (INIS)

    Court, J.D.; Hendricks, J.S.

    1994-12-01

    The MCNP ENDF/B-VI iron cross-section data was subjected to four benchmark studies as part of the Hiroshima/Nagasaki dose re-evaluation for the National Academy of Science and the Defense Nuclear Agency. The four benchmark studies were: (1) the iron sphere benchmarks from the Lawrence Livermore Pulsed Spheres; (2) the Oak Ridge National Laboratory Fusion Reactor Shielding Benchmark; (3) a 76-cm diameter iron sphere benchmark done at the University of Illinois; (4) the Oak Ridge National Laboratory Benchmark for Neutron Transport through Iron. MCNP4A was used to model each benchmark and computational results from the ENDF/B-VI iron evaluations were compared to ENDF/B-IV, ENDF/B-V, the MCNP Recommended Data Set (which includes Los Alamos National Laboratory Group T-2 evaluations), and experimental data. The results show that the ENDF/B-VI iron evaluations are as good as, or better than, previous data sets

  18. Repeated Results Analysis for Middleware Regression Benchmarking

    Czech Academy of Sciences Publication Activity Database

    Bulej, Lubomír; Kalibera, T.; Tůma, P.

    2005-01-01

    Roč. 60, - (2005), s. 345-358 ISSN 0166-5316 R&D Projects: GA ČR GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords : middleware benchmarking * regression benchmarking * regression testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.756, year: 2005

  19. Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) Benchmark Phase II: Identification of Influential Parameters

    International Nuclear Information System (INIS)

    Kovtonyuk, A.; Petruzzi, A.; D'Auria, F.

    2015-01-01

    heat transfer coefficients, a qualitative (but not quantitative) agreement between different codes is observed. - For other parameters, such as the interphase friction coefficient and droplet diameter, a contrary behaviour (i.e., at one of the extremes of the IP range, the direction of change of the responses differs) between different codes, and even between different selected models within the same code, can be observed. This suggests that the effect of such parameters on the cladding temperatures is quite complex, probably because it involves many physical models (e.g., via the interphase friction and interphase heat transfer coefficients for the droplet diameter). It shall be noted that the analysis of differences between the reflood models of different codes is out of the scope of the PREMIUM benchmark. Nevertheless, it is recommended to take into account the physical models/input parameters found to be influential by the other participants in order to select the influential input parameters for which uncertainties are to be quantified within Phase III of PREMIUM. In particular, input parameters identified as influential by other participants using the same code should be considered.

  20. Analysis of a computational benchmark for a high-temperature reactor using SCALE

    International Nuclear Information System (INIS)

    Goluoglu, S.

    2006-01-01

    Several proposed advanced reactor concepts require methods to address effects of double heterogeneity. In doubly heterogeneous systems, heterogeneous fuel particles in a moderator matrix form the fuel region of the fuel element and thus constitute the first level of heterogeneity. Fuel elements themselves are also heterogeneous with fuel and moderator or reflector regions, forming the second level of heterogeneity. The fuel elements may also form regular or irregular lattices. A five-phase computational benchmark for a high-temperature reactor (HTR) fuelled with uranium or reactor-grade plutonium has been defined by the Organization for Economic Cooperation and Development, Nuclear Energy Agency (OECD NEA), Nuclear Science Committee, Working Party on the Physics of Plutonium Fuels and Innovative Fuel Cycles. This paper summarizes the analysis results using the latest SCALE code system (to be released in CY 2006 as SCALE 5.1). (authors)

  1. Benchmarking, benchmarks, or best practices? Applying quality improvement principles to decrease surgical turnaround time.

    Science.gov (United States)

    Mitchell, L

    1996-01-01

    The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.

  2. Higgs Pair Production: Choosing Benchmarks With Cluster Analysis

    CERN Document Server

    Carvalho, Alexandra; Dorigo, Tommaso; Goertz, Florian; Gottardo, Carlo A.; Tosi, Mia

    2016-01-01

    New physics theories often depend on a large number of free parameters. The precise values of those parameters in some cases drastically affect the resulting phenomenology of fundamental physics processes, while in others finite variations can leave it basically invariant at the level of detail experimentally accessible. When designing a strategy for the analysis of experimental data in the search for a signal predicted by a new physics model, it appears advantageous to categorize the parameter space describing the model according to the corresponding kinematical features of the final state. A multi-dimensional test statistic can be used to gauge the degree of similarity in the kinematics of different models; a clustering algorithm using that metric may then allow the division of the space into homogeneous regions, each of which can be successfully represented by a benchmark point. Searches targeting those benchmark points are then guaranteed to be sensitive to a large area of the parameter space. In this doc...
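
    The clustering step described can be sketched with a plain k-means over points in parameter space, with each cluster represented by the member nearest its centroid. This is a generic stand-in for the paper's multi-dimensional test statistic: the metric below is plain Euclidean distance and the points are invented.

```python
def dist2(a, b):
    """Squared Euclidean distance between two parameter points."""
    return sum((u - v) ** 2 for u, v in zip(a, b))

def centroid(group):
    return tuple(sum(c) / len(group) for c in zip(*group))

def kmeans_benchmarks(points, k, iters=20):
    """Plain k-means; returns, per cluster, the member point closest to the
    centroid -- the 'benchmark point' representing that region."""
    cents = [points[i * len(points) // k] for i in range(k)]  # spread-out init
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda j: dist2(p, cents[j]))].append(p)
        cents = [centroid(g) if g else cents[j] for j, g in enumerate(groups)]
    return [min(g, key=lambda p: dist2(p, cents[j]))
            for j, g in enumerate(groups) if g]

pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
print(kmeans_benchmarks(pts, 2))   # one representative per cluster
```

    A search designed around the returned representatives is then sensitive, by construction, to the whole region each cluster covers.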

  3. CEC thermal-hydraulic benchmark exercise on Fiploc verification experiment F2 in Battelle model containment. Experimental phases 2, 3 and 4. Results of comparisons

    International Nuclear Information System (INIS)

    Fischer, K.; Schall, M.; Wolf, L.

    1993-01-01

    The present final report comprises the major results of Phase II of the CEC thermal-hydraulic benchmark exercise on Fiploc verification experiment F2 in the Battelle model containment, experimental phases 2, 3 and 4, which was organized and sponsored by the Commission of the European Communities for the purpose of furthering the understanding and analysis of long-term thermal-hydraulic phenomena inside containments during and after severe core accidents. This benchmark exercise received high European attention with eight organizations from six countries participating with eight computer codes during phase 2. Altogether 18 results from computer code runs were supplied by the participants and constitute the basis for comparisons with the experimental data contained in this publication. This reflects both the high technical interest in, as well as the complexity of, this CEC exercise. Major comparison results between computations and data are reported on all important quantities relevant for containment analyses during long-term transients. These comparisons comprise pressure, steam and air content, velocities and their directions, heat transfer coefficients and saturation ratios. Agreements and disagreements are discussed for each participating code/institution, conclusions drawn and recommendations provided. The phase 2 CEC benchmark exercise provided an up-to-date state-of-the-art status review of the thermal-hydraulic capabilities of present computer codes for containment analyses. This exercise has shown that all of the participating codes can simulate the important global features of the experiment correctly, like: temperature stratification, pressure and leakage, heat transfer to structures, relative humidity, collection of sump water. Several weaknesses of individual codes were identified, and this may help to promote their development. As a general conclusion it may be said that while there is still a wide area of necessary extensions and improvements, the

  4. Burn-up Credit Criticality Safety Benchmark-Phase II-E. Impact of Isotopic Inventory Changes due to Control Rod Insertions on Reactivity and the End Effect in PWR UO2 Fuel Assemblies

    International Nuclear Information System (INIS)

    Neuber, Jens Christian; Tippl, Wolfgang; Hemptinne, Gwendoline de; Maes, Philippe; Ranta-aho, Anssu; Peneliau, Yannick; Jutier, Ludyvine; Tardy, Marcel; Reiche, Ingo; Kroeger, Helge; Nakata, Tetsuo; Armishaw, Malcom; Miller, Thomas M.

    2015-01-01

    PWR UO2 spent fuel assemblies was analysed. The results of the Phase II-C benchmark were used to define the two axial burn-up profiles for the Phase II-E benchmark such that the impact of the asymmetry on the reactivity and the end effect is bounded. The two profiles, together with the sets of isotopic number densities related to different control rod insertion depths during depletion, were provided to the participants in the Phase II-E benchmark. To enable the participants to estimate the end effects related to the profiles and the control rod insertion depths, the isotopic number densities applying to uniform distributions of the two average burn-ups of 30 MWd/kg U and 50 MWd/kg U were also supplied. In the Phase II-E benchmark basically the same conceptual transport cask configuration was employed as was already used in Phase II-C: a finite transport cask made of stainless steel, containing 21 fuel assemblies separated by borated stainless steel plates. The cask was assumed to be fully flooded with pure light water. In total, fourteen solutions were submitted to the Phase II-E benchmark exercise by ten companies/organisations in seven countries. The participants were asked to calculate, using the two axial burn-up profiles and the related uniform burn-up distributions, the neutron multiplication factors k_eff of the cask configuration, employing the sets of isotopic number densities related to preset control rod insertion depths during depletion. In addition, the participants were offered the optional task of calculating, for both the axial burn-up profiles and the related uniform burn-up distributions, the axial fission densities for the axial zones that had been used to describe the axial burn-up distributions for the different control rod insertion depths. For this optional task, three solutions were submitted by three companies/organisations in three countries.
The analysis of the results obtained for the Phase II-E benchmark exercise begins with

  5. Benchmarking of Constant Power Generation Strategies for Single-Phase Grid-Connected Photovoltaic Systems

    DEFF Research Database (Denmark)

    Sangwongwanich, Ariya; Yang, Yongheng; Blaabjerg, Frede

    2018-01-01

    strategies based on: 1) a power control method (P-CPG), 2) a current limit method (I-CPG) and 3) the Perturb and Observe algorithm (P&O-CPG). However, the operational mode changes (e.g., from the maximum power point tracking to a CPG operation) will affect the entire system performance. Thus, a benchmarking...... of the presented CPG strategies is also conducted on a 3-kW single-phase grid-connected PV system. Comparisons reveal that either the P-CPG or I-CPG strategies can achieve fast dynamics and satisfactory steady-state performance. In contrast, the P&O-CPG algorithm is the most suitable solution in terms of high...
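
    The P&O-CPG idea summarized in this record can be sketched as an ordinary perturb-and-observe hill climb with one extra rule: whenever the measured power exceeds the cap, step toward the open-circuit voltage to shed power. Everything below (the PV curve, step size, and power limit) is an invented toy, not the paper's controller.

```python
def pv_power(v):
    """Toy PV curve: maximum 100 W at 30 V (illustrative numbers only)."""
    return max(0.0, 100.0 - 0.25 * (v - 30.0) ** 2)

def po_cpg_step(v, v_prev, p, p_prev, p_limit, dv=1.0):
    """One Perturb & Observe step with a constant-power cap."""
    if p > p_limit:
        return v + dv                       # above the cap: move toward V_oc
    step = dv if v >= v_prev else -dv       # last perturbation direction
    return v + step if p >= p_prev else v - step

# Start below the maximum power point and let the controller settle at 80 W.
v_prev, v, p_prev = 19.0, 20.0, 0.0
for _ in range(100):
    p = pv_power(v)
    v, v_prev, p_prev = po_cpg_step(v, v_prev, p, p_prev, p_limit=80.0), v, p
print(v, pv_power(v))   # oscillates around the constant-power point near 39 V
```

    The steady-state ripple around the cap is characteristic of the P&O approach, which the benchmarking in this record weighs against the P-CPG and I-CPG alternatives.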

  6. A Global Vision over Benchmarking Process: Benchmarking Based Enterprises

    OpenAIRE

    Sitnikov, Catalina; Giurca Vasilescu, Laura

    2008-01-01

    Benchmarking uses the knowledge and experience of others to improve the enterprise. Starting from an analysis of the enterprise's performance, underlining its strengths and weaknesses, it should be assessed what must be done in order to improve its activity. Using benchmarking techniques, an enterprise looks at how processes in the value chain are performed. The approach based on the vision “from the whole towards the parts” (a fragmented image of the enterprise’s value chain) redu...

  7. HEATING6 analysis of international thermal benchmark problem sets 1 and 2

    International Nuclear Information System (INIS)

    Childs, K.W.; Bryan, C.B.

    1986-10-01

    In order to assess the heat transfer computer codes used in the analysis of nuclear fuel shipping casks, the Nuclear Energy Agency Committee on Reactor Physics has defined seven problems for benchmarking thermal codes. All seven of these problems have been solved using the HEATING6 heat transfer code. This report presents the results of five of the problems. The remaining two problems were used in a previous benchmarking of thermal codes used in the United States, and their solutions have been previously published

  8. Inelastic finite element analysis of a pipe-elbow assembly (benchmark problem 2)

    Energy Technology Data Exchange (ETDEWEB)

    Knapp, H P [Internationale Atomreaktorbau GmbH (INTERATOM) Bergisch Gladbach (Germany); Prij, J [Netherlands Energy Research Foundation (ECN) Petten (Netherlands)

    1979-06-01

    In the scope of the international benchmark problem effort on piping systems, benchmark problem 2 consisting of a pipe elbow assembly, subjected to a time dependent in-plane bending moment, was analysed using the finite element program MARC. Numerical results are presented and a comparison with experimental results is made. It is concluded that the main reason for the deviation between the calculated and measured values is due to the fact that creep-plasticity interaction is not taken into account in the analysis. (author)

  9. The OECD/NRC BWR full-size fine-mesh bundle tests benchmark (BFBT)-general description

    International Nuclear Information System (INIS)

    Sartori, Enrico; Hochreiter, L.E.; Ivanov, Kostadin; Utsuno, Hideaki

    2004-01-01

    The need to refine models for best-estimate calculations based on good-quality experimental data has been expressed in many recent meetings in the field of nuclear applications. The needs arising in this respect should not be limited to currently available macroscopic approaches but should be extended to next-generation approaches that focus on more microscopic processes. One of the most valuable databases identified for thermal-hydraulics modelling was developed by the Nuclear Power Engineering Corporation (NUPEC). Part of this database will be made available for an international benchmark exercise. These fine-mesh, high-quality data encourage advances in the insufficiently developed field of two-phase flow theory. Considering that the present theoretical approach is relatively immature, the benchmark specification is designed so that it will systematically assess and compare the participants' numerical models on the prediction of detailed void distributions and critical powers. The development of truly mechanistic models for critical power prediction is currently underway. These innovative models should include elementary processes such as void distributions, droplet deposition, liquid film entrainment, etc. The benchmark problem includes both macroscopic and microscopic measurement data. In this context, the sub-channel grade void fraction data are regarded as the macroscopic data, and the digitized computer graphic images are the microscopic data. The proposed benchmark consists of two parts (phases), each part consisting of different exercises: Phase 1 - Void distribution benchmark: Exercise 1 - Steady-state sub-channel grade benchmark. Exercise 2 - Steady-state microscopic grade benchmark. Exercise 3 - Transient macroscopic grade benchmark. Phase 2 - Critical power benchmark: Exercise 1 - Steady-state benchmark. Exercise 2 - Transient benchmark. (author)

  10. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  11. X447 EBR-II Experiment Benchmark for Verification of Audit Code of SFR Metal Fuel

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Yong Won; Bae, Moo-Hoon; Shin, Andong; Suh, Namduk [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2016-10-15

    In KINS (Korea Institute of Nuclear Safety), to prepare the audit calculation for the PGSFR licensing review, a project has been started to develop regulatory technology for the SFR system, including the fuel area. To evaluate fuel integrity and safety during irradiation, a fuel performance code must be used for the audit calculation. In this study, to verify the new code system, a benchmark analysis is performed using X447 EBR-II experiment data. Additionally, a sensitivity analysis with respect to the coolant mass flux is performed. In the case of LWR fuel performance modeling, various advanced models have been proposed and validated based on sufficient in-reactor test results. However, due to the lack of SFR operating experience, the current understanding of SFR fuel behavior is limited. The fuel composition of the X447 assembly is U-10Zr, which PGSFR also uses in its initial phase, so the X447 EBR-II experiment was selected for the benchmark analysis. In order to prepare for the licensing of PGSFR, regulatory audit technologies for SFR must be secured. In terms of verification, the results of the benchmark and sensitivity analyses are considered reasonable.

  12. X447 EBR-II Experiment Benchmark for Verification of Audit Code of SFR Metal Fuel

    International Nuclear Information System (INIS)

    Choi, Yong Won; Bae, Moo-Hoon; Shin, Andong; Suh, Namduk

    2016-01-01

    In KINS (Korea Institute of Nuclear Safety), a project has been started to develop regulatory technology for the SFR system, including the fuel area, in preparation for audit calculations supporting the PGSFR licensing review. To evaluate fuel integrity and safety during irradiation, a fuel performance code must be used for the audit calculation. In this study, a benchmark analysis against the X447 EBR-II experiment data is performed to verify the new code system, together with a sensitivity analysis on the coolant mass flux. For LWR fuel performance modeling, various advanced models have been proposed and validated against an extensive body of in-reactor test results; by contrast, owing to the limited experience with SFR operation, the current understanding of SFR fuel behavior is limited. The fuel composition of the X447 assembly is U-10Zr, which PGSFR also uses in its initial phase, so the X447 EBR-II experiment was selected for the benchmark analysis. Since regulatory audit technologies for the SFR must be secured in preparation for PGSFR licensing, the benchmark analysis was performed with the X447 EBR-II experiment data to verify the new audit fuel performance analysis code, along with the sensitivity analysis on coolant mass flux. In terms of verification, the results of the benchmark and sensitivity analyses are considered reasonable.

  13. OECD/NEA Main Steam Line Break Benchmark Problem Exercise I Simulation Using the SPACE Code with the Point Kinetics Model

    International Nuclear Information System (INIS)

    Kim, Yohan; Kim, Seyun; Ha, Sangjun

    2014-01-01

    The Safety and Performance Analysis Code for Nuclear Power Plants (SPACE) has been developed in recent years by the Korea Hydro and Nuclear Power Co. (KHNP) through collaborative work with other Korean nuclear industries. SPACE is a best-estimate two-phase, three-field thermal-hydraulic analysis code for the safety and performance analysis of pressurized water reactors (PWRs). The code has sufficient features to replace outdated vendor-supplied codes and to be used for the safety analysis of operating PWRs and the design of advanced reactors. As a result of the second phase of development, version 2.14 of the code was released following successive verification and validation (V and V) work, and topical reports on the code and the related safety analysis methodologies have been prepared for licensing. In this study, the OECD/NEA Main Steam Line Break (MSLB) Benchmark Problem Exercise I was simulated using the SPACE code as a V and V exercise, and the results were compared with those of the participants in the benchmark project. Through the simulation, it was concluded that the SPACE code can effectively simulate PWR MSLB accidents.

  14. Plant improvements through the use of benchmarking analysis

    International Nuclear Information System (INIS)

    Messmer, J.R.

    1993-01-01

    As utilities approach the turn of the century, customer and shareholder satisfaction is threatened by rising costs. Environmental compliance expenditures, coupled with low load growth and aging plant assets, are forcing utilities to operate existing resources in a more efficient and productive manner. PSI Energy set out in the spring of 1992 on a benchmarking mission to compare four major coal-fired plants against others of similar size and makeup, with the goal of finding the best operations in the country. Following extensive analysis of the 'Best in Class' operation, detailed goals and objectives were established for each plant in seven critical areas. Three critical processes requiring rework were identified and required an integrated effort from all plants. The Plant Improvement process has already resulted in higher operating productivity, increased emphasis on planning, and lower costs due to effective material management. While every company seeks improvement, goals are often set in an ambiguous manner. Benchmarking aids in setting realistic goals based on others' actual accomplishments. This paper describes how the utility's short-term goals will move it toward being a lower-cost producer.

  15. NUPEC BWR Full-size Fine-mesh Bundle Test (BFBT) Benchmark. Volume II: uncertainty and sensitivity analyses of void distribution and critical power - Specification

    International Nuclear Information System (INIS)

    Aydogan, F.; Hochreiter, L.; Ivanov, K.; Martin, M.; Utsuno, H.; Sartori, E.

    2010-01-01

    This report provides the specification for the uncertainty exercises of the international OECD/NEA, NRC and NUPEC BFBT benchmark problem including the elemental task. The specification was prepared jointly by Pennsylvania State University (PSU), USA and the Japan Nuclear Energy Safety (JNES) Organisation, in cooperation with the OECD/NEA and the Commissariat a l'energie atomique (CEA Saclay, France). The work is sponsored by the US NRC, METI-Japan, the OECD/NEA and the Nuclear Engineering Program (NEP) of Pennsylvania State University. This uncertainty specification covers the fourth exercise of Phase I (Exercise-I-4), and the third exercise of Phase II (Exercise II-3) as well as the elemental task. The OECD/NRC BFBT benchmark provides a very good opportunity to apply uncertainty analysis (UA) and sensitivity analysis (SA) techniques and to assess the accuracy of thermal-hydraulic models for two-phase flows in rod bundles. During the previous OECD benchmarks, participants usually carried out sensitivity analysis on their models for the specification (initial conditions, boundary conditions, etc.) to identify the most sensitive models or/and to improve the computed results. The comprehensive BFBT experimental database (NEA, 2006) leads us one step further in investigating modelling capabilities by taking into account the uncertainty analysis in the benchmark. The uncertainties in input data (boundary conditions) and geometry (provided in the benchmark specification) as well as the uncertainties in code models can be accounted for to produce results with calculational uncertainties and compare them with the measurement uncertainties. Therefore, uncertainty analysis exercises were defined for the void distribution and critical power phases of the BFBT benchmark. 
This specification is intended to provide definitions related to UA/SA methods, sensitivity/ uncertainty parameters, suggested probability distribution functions (PDF) of sensitivity parameters, and selected

  16. On the feasibility of using emergy analysis as a source of benchmarking criteria through data envelopment analysis: A case study for wind energy

    International Nuclear Information System (INIS)

    Iribarren, Diego; Vázquez-Rowe, Ian; Rugani, Benedetto; Benetto, Enrico

    2014-01-01

    The definition of criteria for the benchmarking of similar entities is often a critical issue in analytical studies because of the multiplicity of criteria that may be taken into account. The issue can be aggravated by the need to handle multiple data for multiple facilities. This article presents a methodological framework, named the Em + DEA method, which combines emergy analysis with Data Envelopment Analysis (DEA) for the ecocentric benchmarking of multiple resembling entities (i.e., multiple decision making units or DMUs). Provided that the life-cycle inventories of these DMUs are available, an emergy analysis is performed through the computation of seven different indicators, which refer to the use of fossil, metal, mineral, nuclear, renewable energy, water and land resources. These independent emergy values are then implemented as inputs for the DEA computation, thus providing operational emergy-based efficiency scores and, for the inefficient DMUs, target emergy flows (i.e., feasible emergy benchmarks that would turn inefficient DMUs into efficient ones). The use of the Em + DEA method is exemplified through a case study of wind energy farms. The potential use of CED (cumulative energy demand) and CExD (cumulative exergy demand) indicators as alternative benchmarking criteria to emergy is discussed. The combined use of emergy analysis with DEA is proven to be a valid methodological approach to provide benchmarks oriented towards the optimisation of the life-cycle performance of a set of multiple similar facilities, not being limited to the operational traits of the assessed units. - Highlights: • Combined emergy and DEA method to benchmark multiple resembling entities. • Life-cycle inventory, emergy analysis and DEA as key steps of the Em + DEA method. • Valid ecocentric benchmarking approach proven through a case study of wind farms. • Comparison with life-cycle energy-based benchmarking criteria (CED/CExD + DEA). • Analysts and decision and policy
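The DEA step described above can be sketched as a standard input-oriented CCR model. The farm names, emergy input values and the use of SciPy's `linprog` below are illustrative assumptions, not data or code from the study; each DMU is assigned a single normalised unit of output (e.g. 1 kWh delivered), so efficiency depends only on its emergy inputs.

```python
# Illustrative sketch of the DEA step of the Em + DEA method (assumed data).
import numpy as np
from scipy.optimize import linprog

# Hypothetical emergy-based inputs per DMU (columns: e.g. fossil, metal flows)
inputs = {
    "farm_A": [2.0, 4.0],
    "farm_B": [4.0, 2.0],
    "farm_C": [4.0, 4.0],
}

def ccr_input_oriented(inputs):
    """Solve the envelopment-form CCR LP for each DMU: minimise theta such
    that a nonnegative combination of peers uses no more than theta times
    the DMU's own inputs while matching its (unit) output."""
    names = list(inputs)
    X = np.array([inputs[n] for n in names])   # shape (n_dmu, n_in)
    n_dmu, n_in = X.shape
    scores = {}
    for k, name in enumerate(names):
        # Decision variables: [theta, lambda_1 .. lambda_n]
        c = np.r_[1.0, np.zeros(n_dmu)]
        # Input constraints: sum_j lambda_j x_ij - theta x_ik <= 0
        A_in = np.c_[-X[k], X.T]
        # Output constraint: sum_j lambda_j >= 1  ->  -sum_j lambda_j <= -1
        A_out = np.r_[0.0, -np.ones(n_dmu)].reshape(1, -1)
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(n_in), -1.0],
                      bounds=[(0, None)] * (1 + n_dmu), method="highs")
        scores[name] = res.x[0]
    return scores

scores = ccr_input_oriented(inputs)
```

With these numbers, farm_A and farm_B lie on the efficient frontier (score 1), while farm_C is dominated by their 50/50 mix, so its target emergy flows are 0.75 of its observed inputs.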

  17. Lessons Learned on Benchmarking from the International Human Reliability Analysis Empirical Study

    International Nuclear Information System (INIS)

    Boring, Ronald L.; Forester, John A.; Bye, Andreas; Dang, Vinh N.; Lois, Erasmia

    2010-01-01

    The International Human Reliability Analysis (HRA) Empirical Study is a comparative benchmark of the prediction of HRA methods to the performance of nuclear power plant crews in a control room simulator. There are a number of unique aspects to the present study that distinguish it from previous HRA benchmarks, most notably the emphasis on a method-to-data comparison instead of a method-to-method comparison. This paper reviews seven lessons learned about HRA benchmarking from conducting the study: (1) the dual purposes of the study afforded by joining another HRA study; (2) the importance of comparing not only quantitative but also qualitative aspects of HRA; (3) consideration of both negative and positive drivers on crew performance; (4) a relatively large sample size of crews; (5) the use of multiple methods and scenarios to provide a well-rounded view of HRA performance; (6) the importance of clearly defined human failure events; and (7) the use of a common comparison language to 'translate' the results of different HRA methods. These seven lessons learned highlight how the present study can serve as a useful template for future benchmarking studies.

  18. Lessons Learned on Benchmarking from the International Human Reliability Analysis Empirical Study

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; John A. Forester; Andreas Bye; Vinh N. Dang; Erasmia Lois

    2010-06-01

    The International Human Reliability Analysis (HRA) Empirical Study is a comparative benchmark of the prediction of HRA methods to the performance of nuclear power plant crews in a control room simulator. There are a number of unique aspects to the present study that distinguish it from previous HRA benchmarks, most notably the emphasis on a method-to-data comparison instead of a method-to-method comparison. This paper reviews seven lessons learned about HRA benchmarking from conducting the study: (1) the dual purposes of the study afforded by joining another HRA study; (2) the importance of comparing not only quantitative but also qualitative aspects of HRA; (3) consideration of both negative and positive drivers on crew performance; (4) a relatively large sample size of crews; (5) the use of multiple methods and scenarios to provide a well-rounded view of HRA performance; (6) the importance of clearly defined human failure events; and (7) the use of a common comparison language to “translate” the results of different HRA methods. These seven lessons learned highlight how the present study can serve as a useful template for future benchmarking studies.

  19. OECD/NRC Benchmark Based on NUPEC PWR Sub-channel and Bundle Test (PSBT). Volume I: Experimental Database and Final Problem Specifications

    International Nuclear Information System (INIS)

    Rubin, A.; Schoedel, A.; Avramova, M.; Utsuno, H.; Bajorek, S.; Velazquez-Lozada, A.

    2012-01-01

    The need to refine models for best-estimate calculations, based on good-quality experimental data, has been expressed in many recent meetings in the field of nuclear applications. The needs arising in this respect should not be limited to the currently available macroscopic methods but should be extended to next-generation analysis techniques that focus on more microscopic processes. One of the most valuable databases identified for the thermal-hydraulics modelling was developed by the Nuclear Power Engineering Corporation (NUPEC), Japan, which includes sub-channel void fraction and departure from nucleate boiling (DNB) measurements in a representative Pressurised Water Reactor (PWR) fuel assembly. Part of this database has been made available for this international benchmark activity entitled 'NUPEC PWR Sub-channel and Bundle Tests (PSBT) benchmark'. This international project has been officially approved by the Japanese Ministry of Economy, Trade, and Industry (METI), the US Nuclear Regulatory Commission (NRC) and endorsed by the OECD/NEA. The benchmark team has been organised based on the collaboration between Japan and the USA. A large number of international experts have agreed to participate in this programme. The fine-mesh high-quality sub-channel void fraction and departure from nucleate boiling data encourages advancement in understanding and modelling complex flow behaviour in real bundles. Considering that the present theoretical approach is relatively immature, the benchmark specification is designed so that it will systematically assess and compare the participants' analytical models on the prediction of detailed void distributions and DNB. The development of truly mechanistic models for DNB prediction is currently underway. The benchmark problem includes both macroscopic and microscopic measurement data. In this context, the sub-channel grade void fraction data are regarded as the macroscopic data and the digitised computer graphic images are the

  20. Benchmarking and the laboratory

    Science.gov (United States)

    Galloway, M; Nadin, L

    2001-01-01

    This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. Key Words: benchmarking • pathology PMID:11477112

  1. Benchmark Analysis Of The High Temperature Gas Cooled Reactors Using Monte Carlo Technique

    International Nuclear Information System (INIS)

    Nguyen Kien Cuong; Huda, M.Q.

    2008-01-01

    Information about several past and present experimental and prototypical facilities based on High Temperature Gas-Cooled Reactor (HTGR) concepts has been examined to assess the potential of these facilities for use in this benchmarking effort. Both reactors and critical facilities applicable to pebble-bed type cores have been considered. Two facilities - HTR-PROTEUS of Switzerland and HTR-10 of China - and one conceptual design from Germany - HTR-PAP20 - appear to have the greatest potential for use in benchmarking the codes. This study presents a benchmark analysis of these reactor technologies using the MCNP4C2 and MVP/GMVP codes to support the evaluation and future development of HTGRs. The ultimate objective of this work is to identify and develop new capabilities needed to support the Generation IV initiative. (author)

  2. Use of Sensitivity and Uncertainty Analysis to Select Benchmark Experiments for the Validation of Computer Codes and Data

    International Nuclear Information System (INIS)

    Elam, K.R.; Rearden, B.T.

    2003-01-01

    Sensitivity and uncertainty analysis methodologies under development at Oak Ridge National Laboratory were applied to determine whether existing benchmark experiments adequately cover the area of applicability for the criticality code and data validation of PuO2 and mixed-oxide (MOX) powder systems. The study examined three PuO2 powder systems and four MOX powder systems that would be useful for establishing mass limits for a MOX fuel fabrication facility. Using traditional methods to choose experiments for criticality analysis validation, 46 benchmark critical experiments were identified as applicable to the PuO2 powder systems. However, only 14 experiments were thought to be within the area of applicability for dry MOX powder systems. The applicability of 318 benchmark critical experiments, including the 60 experiments initially identified, was assessed. Each benchmark and powder system was analyzed using the Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) one-dimensional (TSUNAMI-1D) or TSUNAMI three-dimensional (TSUNAMI-3D) sensitivity analysis sequences, which will be included in the next release of the SCALE code system. These sensitivity data and cross-section uncertainty data were then processed with TSUNAMI-IP to determine the correlation of each application to each experiment in the benchmarking set. Correlation coefficients are used to assess the similarity between systems and determine the applicability of one system for the code and data validation of another. The applicability of most of the experiments identified using traditional methods was confirmed by the TSUNAMI analysis. In addition, some PuO2 and MOX powder systems were determined to be within the area of applicability of several other benchmarks that would not have been considered using traditional methods. Therefore, the number of benchmark experiments useful for the validation of these systems exceeds the number previously expected. 
The TSUNAMI analysis
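The correlation-coefficient screening described above can be illustrated with the commonly published form of the similarity index, c_k = (S_a C S_b^T) / sqrt((S_a C S_a^T)(S_b C S_b^T)), where S_a and S_b are the sensitivity profiles of the application and the benchmark and C is the cross-section covariance matrix. The sensitivity vectors and covariance values below are invented for illustration, not TSUNAMI output.

```python
# Sketch of a TSUNAMI-style correlation coefficient c_k between an
# application and a benchmark experiment (illustrative numbers only).
import numpy as np

# Hypothetical sensitivity profiles (dk/k per unit dsigma/sigma) over three
# nuclide-reaction pairs, and a matching relative covariance matrix C.
S_app   = np.array([0.30, -0.05, 0.12])
S_bench = np.array([0.28, -0.04, 0.10])
C = np.array([[0.010, 0.002, 0.000],
              [0.002, 0.020, 0.001],
              [0.000, 0.001, 0.015]])

def ck(sa, sb, cov):
    """Uncertainty-weighted correlation between two sensitivity profiles."""
    return (sa @ cov @ sb) / np.sqrt((sa @ cov @ sa) * (sb @ cov @ sb))

c_app_bench = ck(S_app, S_bench, C)   # close to 1: benchmark covers application
c_self = ck(S_app, S_app, C)          # identical systems correlate perfectly
```

A c_k near 1 indicates that the benchmark's uncertainty-weighted sensitivities resemble the application's, i.e. the experiment is within the area of applicability for validating the application.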

  3. Analysis of the MZA/MZB benchmarks with modern nuclear data sets

    International Nuclear Information System (INIS)

    Rooijen, W.F.G. van

    2013-01-01

    Highlights: • ERANOS libraries are produced based on four modern nuclear data sets. • The MOZART MZA/MZB benchmarks are analyzed with these libraries. • Results are generally acceptable in an academic context, but for highly accurate applications data adjustment is required. • Some discrepancies between the calculations and the benchmark results remain and cannot be readily explained. • Successful generation of ECCO libraries and covariance data for ERANOS. - Abstract: For fast reactor design and analysis, our laboratory uses, amongst others, the ERANOS code system. Unfortunately, the publicly available version of ERANOS does not have the most recent nuclear data. Therefore, it was decided to implement an integrated processing system to generate cross sections libraries for the ECCO cell code, as well as covariance data. Cross sections are generated from the original ENDF files. For our purposes, it is important to ascertain that the ECCO cross section libraries are of adequate quality to allow design and analysis of advanced fast reactors in an academic context. In this paper, we present an analysis of the MZA/MZB benchmarks with nuclear data from JENDL-4.0, JEFF-3.1.2 and ENDF/B-VII.1. Results are that reactivity is generally well predicted, with an uncertainty of about 1% due to covariances of the nuclear data. Reaction rate ratios are satisfactorily calculated, as well as the flux spectrum and reaction rate traverses. Some problems remain: the magnitude of the void effect is not satisfactorily calculated, and reaction rate traverses are not always satisfactorily calculated. On the whole, the ECCO libraries are sufficient for design and analysis tasks in an academic context. For high-precision calculations, such as required for licensing tasks and detailed design calculations, data adjustment is still necessary as the "native" covariance data in the ENDF files is not accurate enough.

  4. Synthetic graph generation for data-intensive HPC benchmarking: Scalability, analysis and real-world application

    Energy Technology Data Exchange (ETDEWEB)

    Powers, Sarah S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; Lothian, Joshua [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]

    2014-12-01

    The benchmarking effort within the Extreme Scale Systems Center at Oak Ridge National Laboratory seeks to provide High Performance Computing benchmarks and test suites of interest to the DoD sponsor. The work described in this report is part of the effort focusing on graph generation. A previously developed benchmark, SystemBurn, allows the emulation of a broad spectrum of application behavior profiles within a single framework. To complement this effort, similar capabilities are desired for graph-centric problems. This report describes an in-depth analysis of the generated synthetic graphs' properties at a variety of scales using different generator implementations and examines their applicability to replicating real-world datasets.
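As a rough illustration of synthetic graph generation (the report's own generator implementations are not reproduced in this abstract), the sketch below uses an R-MAT-style recursive quadrant scheme: each edge picks one of four adjacency-matrix quadrants per bit level with skewed probabilities, producing the heavy-tailed degree distributions typical of real-world graphs. The parameters and function name are hypothetical.

```python
# Minimal R-MAT-style synthetic graph generator (illustrative sketch).
import random
from collections import Counter

def rmat_edges(scale, n_edges, probs=(0.57, 0.19, 0.19, 0.05), seed=1):
    """Generate n_edges directed edges over 2**scale vertices."""
    rng = random.Random(seed)
    edges = []
    for _ in range(n_edges):
        src = dst = 0
        for _level in range(scale):
            r = rng.random()
            if r < probs[0]:                          # quadrant a: bits (0, 0)
                bits = (0, 0)
            elif r < probs[0] + probs[1]:             # quadrant b: bits (0, 1)
                bits = (0, 1)
            elif r < probs[0] + probs[1] + probs[2]:  # quadrant c: bits (1, 0)
                bits = (1, 0)
            else:                                     # quadrant d: bits (1, 1)
                bits = (1, 1)
            src = (src << 1) | bits[0]
            dst = (dst << 1) | bits[1]
        edges.append((src, dst))
    return edges

edges = rmat_edges(scale=10, n_edges=4096)
degree = Counter(v for e in edges for v in e)
# Skewed quadrant probabilities concentrate edges on low-numbered vertices,
# so the maximum degree far exceeds the mean degree.
```

Analyses like those in the report would then compare properties of `degree` (and clustering, diameter, etc.) against the corresponding statistics of real-world datasets.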

  5. The CEC benchmark INTERCLAY on rheological models for clays: results of the pilot phase (January-June 1989) on the Boom clay at Mol (B)

    International Nuclear Information System (INIS)

    Come, B.

    1990-01-01

    A pilot phase of a benchmark exercise on rheological models for Boom clay, called INTERCLAY, was launched by the CEC in January 1989. The purpose of the benchmark is to compare predictions of calculations made for well-defined rock-mechanical problems, similar to real cases at the Mol facilities, using existing data from laboratory tests on samples. Basically, two approaches were to be compared: one considering clay as an elasto-visco-plastic medium (the rock-mechanics approach), and one isolating the role of pore-pressure dissipation (the soil-mechanics approach).

  6. Analysis of the OECD/NRC BWR Turbine Trip Transient Benchmark with the Coupled Thermal-Hydraulics and Neutronics Code TRAC-M/PARCS

    International Nuclear Information System (INIS)

    Lee, Deokjung; Downar, Thomas J.; Ulses, Anthony; Akdeniz, Bedirhan; Ivanov, Kostadin N.

    2004-01-01

    An analysis of the Peach Bottom Unit 2 Turbine Trip 2 (TT2) experiment has been performed using the U.S. Nuclear Regulatory Commission coupled thermal-hydraulics and neutronics code TRAC-M/PARCS. The objective of the analysis was to assess the performance of TRAC-M/PARCS on a BWR transient with significance in two-phase flow and spatial variations of the neutron flux. TRAC-M/PARCS results are found to be in good agreement with measured plant data for both steady-state and transient phases of the benchmark. Additional analyses of four fictitious extreme scenarios are performed to provide a basis for code-to-code comparisons and comprehensive testing of the thermal-hydraulics/neutronics coupling. The obtained results of sensitivity studies on the effect of direct moderator heating on transient simulation indicate the importance of this modeling aspect

  7. Prismatic Core Coupled Transient Benchmark

    International Nuclear Information System (INIS)

    Ortensi, J.; Pope, M.A.; Strydom, G.; Sen, R.S.; DeHart, M.D.; Gougar, H.D.; Ellis, C.; Baxter, A.; Seker, V.; Downar, T.J.; Vierow, K.; Ivanov, K.

    2011-01-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal-hydraulics analysis with a specific focus on transient events. The benchmark working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  8. Benchmarking lattice physics data and methods for boiling water reactor analysis

    International Nuclear Information System (INIS)

    Cacciapouti, R.J.; Edenius, M.; Harris, D.R.; Hebert, M.J.; Kapitz, D.M.; Pilat, E.E.; VerPlanck, D.M.

    1983-01-01

    The objective of the work reported was to verify the adequacy of lattice physics modeling for the analysis of the Vermont Yankee BWR using a multigroup, two-dimensional transport theory code. The BWR lattice physics methods have been benchmarked against reactor physics experiments, higher order calculations, and actual operating data

  9. OECD/NEA burnup credit criticality benchmarks phase IIIA: Criticality calculations of BWR spent fuel assemblies in storage and transport

    Energy Technology Data Exchange (ETDEWEB)

    Okuno, Hiroshi; Naito, Yoshitaka [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment]; Ando, Yoshihira [Toshiba Corp., Kawasaki, Kanagawa (Japan)]

    2000-09-01

    The report describes the final results of the Phase IIIA benchmarks conducted by the Burnup Credit Criticality Calculation Working Group under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD/NEA). The benchmarks are intended to confirm the predictive capability of current computer code and data library combinations for the neutron multiplication factor (k_eff) of a layer of an irradiated BWR fuel assembly array model. In total, 22 benchmark problems are proposed for calculations of k_eff. The effects of the following parameters are investigated: cooling time, inclusion/exclusion of FP nuclides and axial burnup profile, and inclusion of an axial profile of void fraction or constant void fractions during burnup. Axial profiles of fractional fission rates are further requested for five cases out of the 22 problems. Twenty-one sets of results are presented, contributed by 17 institutes from 9 countries. The relative dispersion of the k_eff values calculated by the participants from the mean value is almost within the band of ±1% Δk/k. The deviations from the averaged calculated fission rate profiles are found to be within ±5% for most cases. (author)
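The dispersion check described above amounts to simple statistics over the participants' submissions: compute the mean multiplication factor for a case and the relative deviation of each result from it. The k_eff values below are illustrative, not the actual Phase IIIA submissions.

```python
# Sketch of the participant-dispersion check (illustrative k_eff values).
from statistics import mean

k_eff = [1.1320, 1.1285, 1.1351, 1.1248, 1.1302]  # one value per participant

k_mean = mean(k_eff)
# Relative deviation of each participant's result from the mean, in dk/k
rel_dev = [(k - k_mean) / k_mean for k in k_eff]

# True when every result lies within the +/-1% dk/k band around the mean
within_band = all(abs(d) <= 0.01 for d in rel_dev)
```

The same pattern, applied per axial node, would reproduce the ±5% comparison of fission rate profiles against the participant average.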

  10. Benchmarks for GADRAS performance validation

    International Nuclear Information System (INIS)

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L. Jr.

    2009-01-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  11. Benchmarking in Czech Higher Education

    Directory of Open Access Journals (Sweden)

    Plaček Michal

    2015-12-01

    The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expression of the need for it and the importance of benchmarking as a very suitable performance-management tool in less developed countries are the impetus for the second part of our article. Based on an analysis of the current situation and existing needs in the Czech Republic, as well as on a comparison with international experience, recommendations for public policy are made, which lie in the design of a model of collaborative benchmarking for Czech economics and management higher-education programs. Because the fully complex model cannot be implemented immediately – which is also confirmed by structured interviews with academics who have practical experience with benchmarking – the final model is designed as a multi-stage model. This approach helps eliminate major barriers to the implementation of benchmarking.

  12. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    Science.gov (United States)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.

  13. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

    Tourism development has an irreplaceable role in the regional policy of almost all countries, owing to its undeniable benefits for the local population in the economic, social and environmental spheres. Tourist destinations compete for visitors in the tourism market and consequently enter a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and of the final strategies is a key factor of competitiveness. Even though the tourism sector is not a typical field where benchmarking methods are widely used, such approaches can be successfully applied. The paper focuses on a key phase of the benchmarking process: the search for suitable referencing partners. The partners are selected to meet general requirements that ensure the quality of strategies; following from this, specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies of regions in the Czech Republic, Slovakia and Great Britain, thereby validating the selected criteria in an international setting. Hence, it makes it possible to find the strengths and weaknesses of the selected strategies and at the same time facilitates the discovery of suitable benchmarking partners.
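The evaluation-and-selection step can be sketched as a weighted scoring of candidate strategies against SMART-style criteria, with the highest weighted total identifying the most suitable benchmarking partner. The criteria weights, region names and expert scores below are hypothetical, not the paper's evaluation data.

```python
# Illustrative sketch of SMART-criteria partner scoring (assumed data).
criteria_weights = {
    "specific": 0.25, "measurable": 0.25, "achievable": 0.15,
    "relevant": 0.20, "time_bound": 0.15,
}

expert_scores = {   # mean expert score per criterion, scale 1-5
    "Region_CZ": {"specific": 4, "measurable": 3, "achievable": 4,
                  "relevant": 5, "time_bound": 3},
    "Region_SK": {"specific": 3, "measurable": 3, "achievable": 3,
                  "relevant": 3, "time_bound": 4},
    "Region_UK": {"specific": 5, "measurable": 4, "achievable": 4,
                  "relevant": 4, "time_bound": 5},
}

def weighted_total(scores, weights):
    """Weighted sum of a candidate's criterion scores."""
    return sum(weights[c] * s for c, s in scores.items())

# Rank candidate strategies from most to least suitable partner
ranking = sorted(expert_scores,
                 key=lambda r: weighted_total(expert_scores[r], criteria_weights),
                 reverse=True)
```

The top-ranked strategy becomes the referencing partner; ties or near-ties would typically be resolved by the structured expert discussion the paper describes.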

  14. Benchmarking the energy efficiency of commercial buildings

    International Nuclear Information System (INIS)

    Chung, William; Hui, Y.V.; Lam, Y. Miu

    2006-01-01

    Benchmarking energy efficiency is an important tool to promote the efficient use of energy in commercial buildings. Benchmarking models are mostly constructed as a simple benchmark table (percentile table) of energy use, normalized with floor area and temperature. This paper describes a benchmarking process for energy efficiency by means of multiple regression analysis, where the relationship between energy-use intensities (EUIs) and explanatory factors (e.g., operating hours) is developed. Using the resulting regression model, the EUIs are normalized by removing the effect of deviance in the significant explanatory factors. The empirical cumulative distribution of the normalized EUI gives a benchmark table (or percentile table of EUI) for benchmarking an observed EUI. The advantage of this approach is that the benchmark table represents a normalized distribution of EUI, taking into account all the significant explanatory factors that affect energy consumption. An application to supermarkets is presented to illustrate the development and use of the benchmarking method.
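The regression-and-normalization procedure described in this abstract can be sketched as follows. This is a minimal illustration under simplifying assumptions (ordinary least squares on all factors, no significance testing or model selection as the paper performs); all names and data are hypothetical.

```python
import numpy as np

def normalized_eui_percentiles(eui, factors):
    """Sketch of regression-based EUI benchmarking.

    Fit EUI = b0 + b.x by least squares, remove the effect of each
    building's deviation from the mean of the explanatory factors, and
    return the normalized EUIs plus their percentile ranks (the
    'benchmark table' of the abstract).
    """
    X = np.column_stack([np.ones(len(eui)), factors])
    beta, *_ = np.linalg.lstsq(X, eui, rcond=None)
    # remove deviance in the explanatory factors: subtract b.(x - x_mean)
    dev = (factors - factors.mean(axis=0)) @ beta[1:]
    eui_norm = eui - dev
    ranks = np.argsort(np.argsort(eui_norm))
    percentiles = 100.0 * ranks / (len(eui) - 1)
    return eui_norm, percentiles
```

An observed EUI can then be benchmarked by locating its normalized value in the percentile table.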

  15. Benchmarking in digital circuit design automation

    NARCIS (Netherlands)

    Jozwiak, L.; Gawlowski, D.M.; Slusarczyk, A.S.

    2008-01-01

    This paper focuses on benchmarking, which is the main experimental approach to the design method and EDA-tool analysis, characterization and evaluation. We discuss the importance and difficulties of benchmarking, as well as the recent research effort related to it. To resolve several serious

  16. Numisheet2005 Benchmark Analysis on Forming of an Automotive Deck Lid Inner Panel: Benchmark 1

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao Jian

    2005-01-01

    Numerical simulation of sheet metal forming processes has been a very challenging topic in industry. Many computer codes and modeling techniques exist today; however, many unknowns affect prediction accuracy. Systematic benchmark tests are needed to accelerate future implementations and to serve as a reference. This report presents an international cooperative benchmark effort for an automotive deck lid inner panel. Predictions from simulations are analyzed and discussed against the corresponding experimental results, and correlations between the accuracies of the parameters of interest are discussed in this report.

  17. Analysis of a multigroup stylized CANDU half-core benchmark

    International Nuclear Information System (INIS)

    Pounders, Justin M.; Rahnema, Farzad; Serghiuta, Dumitru

    2011-01-01

    Highlights: → This paper provides a benchmark that is a stylized model problem in more than two energy groups that is realistic with respect to the underlying physics. → An 8-group cross section library is provided to augment a previously published 2-group 3D stylized half-core CANDU benchmark problem. → Reference eigenvalues and selected pin and bundle fission rates are included. → 2-, 4- and 47-group Monte Carlo solutions are compared to analyze homogenization-free transport approximations that result from energy condensation. - Abstract: An 8-group cross section library is provided to augment a previously published 2-group 3D stylized half-core Canadian deuterium uranium (CANDU) reactor benchmark problem. Reference eigenvalues and selected pin and bundle fission rates are also included. This benchmark is intended to provide computational reactor physicists and methods developers with a stylized model problem in more than two energy groups that is realistic with respect to the underlying physics. In addition to transport theory code verification, the 8-group energy structure provides reactor physicists with an ideal problem for examining cross section homogenization and collapsing effects in a full-core environment. To this end, additional 2-, 4- and 47-group full-core Monte Carlo benchmark solutions are compared to analyze homogenization-free transport approximations incurred as a result of energy group condensation.
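The energy-group condensation studied in this benchmark amounts to flux-weighted collapsing of fine-group cross sections into coarse groups. A minimal sketch of that operation, with hypothetical data (the benchmark's actual 8-group library is not reproduced here):

```python
import numpy as np

def collapse_xs(sigma, phi, groups):
    """Flux-weighted cross-section condensation (illustrative).

    sigma  : fine-group cross sections
    phi    : fine-group scalar fluxes used as weights
    groups : index ranges defining the coarse groups, e.g.
             [range(0, 4), range(4, 8)] collapses 8 groups into 2.
    """
    return np.array([
        np.sum(sigma[list(g)] * phi[list(g)]) / np.sum(phi[list(g)])
        for g in groups
    ])
```

With a flat flux the collapsed value reduces to the plain average over each coarse group, which is a convenient sanity check.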

  18. Calculation of the Thermal Radiation Benchmark Problems for a CANDU Fuel Channel Analysis Using the CFX-10 Code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyoung Tae; Park, Joo Hwan; Rhee, Bo Wook

    2006-07-15

    To justify the use of a commercial Computational Fluid Dynamics (CFD) code for a CANDU fuel channel analysis, especially for radiation heat transfer dominant conditions, the CFX-10 code is tested against three benchmark problems which were used for the validation of radiation heat transfer in the CANDU analysis code CATHENA. These three benchmark problems are representative of CANDU fuel channel configurations, from a simple geometry to the whole fuel channel geometry. With the assumptions of a non-participating medium completely enclosed by diffuse, gray and opaque surfaces, the solutions of the benchmark problems are obtained by the concept of surface resistance to radiation, accounting for the view factors and the emissivities. The view factors are calculated by the program MATRIX version 1.0, avoiding the difficulty of hand calculation for the complex geometries. For the solutions of the benchmark problems, the temperature or the net radiation heat flux boundary conditions are prescribed for each radiating surface to determine the radiation heat transfer rate or the surface temperature, respectively, by using the network method. The Discrete Transfer Model (DTM) is used for the CFX-10 radiation model and its calculation results are compared with the solutions of the benchmark problems. The CFX-10 results for the three benchmark problems are in close agreement with these solutions, so it is concluded that CFX-10 with the DTM radiation model can be applied to CANDU fuel channel analysis where surface radiation heat transfer is a dominant mode of heat transfer.
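The network method referred to above combines surface resistances (1-ε)/(εA) with space resistances 1/(A·F) to get the net exchange in a gray, diffuse enclosure. A sketch of the simplest case, a two-surface enclosure, under the same assumptions as the benchmark problems (non-participating medium; the benchmark geometries themselves are not reproduced):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def two_surface_exchange(T1, T2, A1, A2, eps1, eps2, F12):
    """Net radiation exchange Q12 [W] between two gray, diffuse, opaque
    surfaces forming an enclosure with a non-participating medium,
    via the network method: two surface resistances plus one space
    resistance in series."""
    R = ((1 - eps1) / (eps1 * A1)
         + 1.0 / (A1 * F12)
         + (1 - eps2) / (eps2 * A2))
    return SIGMA * (T1**4 - T2**4) / R
```

In the blackbody limit (both emissivities 1, F12 = 1) the surface resistances vanish and the result reduces to σ(T1⁴ − T2⁴)·A1, which is a useful check.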

  19. Calculation of the Thermal Radiation Benchmark Problems for a CANDU Fuel Channel Analysis Using the CFX-10 Code

    International Nuclear Information System (INIS)

    Kim, Hyoung Tae; Park, Joo Hwan; Rhee, Bo Wook

    2006-07-01

    To justify the use of a commercial Computational Fluid Dynamics (CFD) code for a CANDU fuel channel analysis, especially for radiation heat transfer dominant conditions, the CFX-10 code is tested against three benchmark problems which were used for the validation of radiation heat transfer in the CANDU analysis code CATHENA. These three benchmark problems are representative of CANDU fuel channel configurations, from a simple geometry to the whole fuel channel geometry. With the assumptions of a non-participating medium completely enclosed by diffuse, gray and opaque surfaces, the solutions of the benchmark problems are obtained by the concept of surface resistance to radiation, accounting for the view factors and the emissivities. The view factors are calculated by the program MATRIX version 1.0, avoiding the difficulty of hand calculation for the complex geometries. For the solutions of the benchmark problems, the temperature or the net radiation heat flux boundary conditions are prescribed for each radiating surface to determine the radiation heat transfer rate or the surface temperature, respectively, by using the network method. The Discrete Transfer Model (DTM) is used for the CFX-10 radiation model and its calculation results are compared with the solutions of the benchmark problems. The CFX-10 results for the three benchmark problems are in close agreement with these solutions, so it is concluded that CFX-10 with the DTM radiation model can be applied to CANDU fuel channel analysis where surface radiation heat transfer is a dominant mode of heat transfer.

  20. H.B. Robinson-2 pressure vessel benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Remec, I.; Kam, F.B.K.

    1998-02-01

    The H. B. Robinson Unit 2 Pressure Vessel Benchmark (HBR-2 benchmark) is described and analyzed in this report. Analysis of the HBR-2 benchmark can be used as partial fulfillment of the requirements for the qualification of the methodology for calculating neutron fluence in pressure vessels, as required by the U.S. Nuclear Regulatory Commission Regulatory Guide DG-1053, Calculational and Dosimetry Methods for Determining Pressure Vessel Neutron Fluence. Section 1 of this report describes the HBR-2 benchmark and provides all the dimensions, material compositions, and neutron source data necessary for the analysis. The measured quantities, to be compared with the calculated values, are the specific activities at the end of fuel cycle 9. The characteristic feature of the HBR-2 benchmark is that it provides measurements on both sides of the pressure vessel: in the surveillance capsule attached to the thermal shield and in the reactor cavity. In section 2, the analysis of the HBR-2 benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed with three multigroup libraries based on ENDF/B-VI: BUGLE-93, SAILOR-95 and BUGLE-96. The average ratio of the calculated-to-measured specific activities (C/M) for the six dosimeters in the surveillance capsule was 0.90 ± 0.04 for all three libraries. The average C/Ms for the cavity dosimeters (without neptunium dosimeter) were 0.89 ± 0.10, 0.91 ± 0.10, and 0.90 ± 0.09 for the BUGLE-93, SAILOR-95 and BUGLE-96 libraries, respectively. It is expected that the agreement of the calculations with the measurements, similar to the agreement obtained in this research, should typically be observed when the discrete-ordinates method and ENDF/B-VI libraries are used for the HBR-2 benchmark analysis.
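The C/M figures quoted above are the sample mean and standard deviation of the calculated-to-measured activity ratios over a dosimeter set. A trivial helper makes the computation explicit (the values passed in any example are hypothetical placeholders, not the benchmark data):

```python
import numpy as np

def cm_statistics(calculated, measured):
    """Mean and sample standard deviation of the calculated-to-measured
    (C/M) ratios for a set of dosimeters, the quantities reported in
    pressure-vessel fluence benchmarks such as HBR-2."""
    ratios = np.asarray(calculated, dtype=float) / np.asarray(measured, dtype=float)
    return ratios.mean(), ratios.std(ddof=1)
```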

  1. Benchmarking of LOFT LRTS-COBRA-FRAP safety analysis model

    International Nuclear Information System (INIS)

    Hanson, G.H.; Atkinson, S.A.; Wadkins, R.P.

    1982-05-01

    The purpose of this work was to validate the LOFT LRTS/COBRA-IV/FRAP-T5 safety-analysis models against test data obtained during a LOFT operational transient in which there was a power and fuel-temperature rise. LOFT Experiment L6-3 was an excessive-load-increase anticipated transient test in which the main steam-flow-control valve was driven from its operational position to full-open in seven seconds. The resulting cooldown and reactivity-increase transients provide a good benchmark for the reactivity- and power-prediction capability of the LRTS calculations, and for the fuel-bundle and fuel-rod temperature-response analysis capability of the LOFT COBRA-IV and FRAP-T5 models.

  2. ZZ ECN-BUBEBO, ECN-Petten Burnup Benchmark Book, Inventories, Afterheat

    International Nuclear Information System (INIS)

    Kloosterman, Jan Leen

    1999-01-01

    Description of program or function: Contains experimental benchmarks which can be used for the validation of burnup code systems and accompanying data libraries. Although the benchmarks presented here are thoroughly described in the literature, it is in many cases not straightforward to retrieve unambiguously the correct input data and corresponding results from the benchmark descriptions. Furthermore, results which can easily be measured are sometimes difficult to calculate because of conversions to be made. Therefore, emphasis has been put on clarifying the input of the benchmarks and on presenting the benchmark results in such a way that they can easily be calculated and compared. For more thorough descriptions of the benchmarks themselves, the literature referred to here should be consulted. This benchmark book is divided into 11 chapters/files containing the following in text and tabular form: chapter 1: Introduction; chapter 2: Burnup Credit Criticality Benchmark Phase 1-B; chapter 3: Yankee-Rowe Core V Fuel Inventory Study; chapter 4: H.B. Robinson Unit 2 Fuel Inventory Study; chapter 5: Turkey Point Unit 3 Fuel Inventory Study; chapter 6: Turkey Point Unit 3 Afterheat Power Study; chapter 7: Dickens Benchmark on Fission Product Energy Release of U-235; chapter 8: Dickens Benchmark on Fission Product Energy Release of Pu-239; chapter 9: Yarnell Benchmark on Decay Heat Measurements of U-233; chapter 10: Yarnell Benchmark on Decay Heat Measurements of U-235; chapter 11: Yarnell Benchmark on Decay Heat Measurements of Pu-239

  3. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stays confidential; the banks as clients learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much debt. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping...
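The benchmarking score in this abstract comes from a linear-programming model evaluated under multiparty computation. Neither the LP model nor the SPDZ machinery is reproduced here; as a plain-text toy, the sketch below computes an efficiency-style score for the single-input, single-output special case (each unit's output/input ratio relative to the best unit), purely to illustrate what a "benchmarking score" means.

```python
def efficiency_scores(inputs, outputs):
    """Toy benchmarking score: each unit's output/input ratio relative
    to the best-performing unit. The actual system of the paper solves
    a linear-programming model under secure multiparty computation;
    this is only an illustrative plain-text special case."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]
```

A unit matching the best ratio scores 1.0; less efficient units score proportionally lower.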

  4. VENUS-2 Benchmark Problem Analysis with HELIOS-1.9

    International Nuclear Information System (INIS)

    Jeong, Hyeon-Jun; Choe, Jiwon; Lee, Deokjung

    2014-01-01

    Since reliable benchmark results are available in the OECD/NEA report on the VENUS-2 MOX benchmark problem, users can gauge the credibility of a code by comparing against them. In this paper, the solution of the VENUS-2 benchmark problem from HELIOS 1.9 using the ENDF/B-VI library (NJOY91.13) is compared with the result from HELIOS 1.7, with the MCNP-4B result taken as reference data. The comparison covers pin cell, assembly, and core calculations, and the resulting eigenvalues are assessed against those from other codes. In the case of the UOX and MOX assemblies, the differences from the MCNP-4B results are about 10 pcm. However, there is some inaccuracy in the baffle-reflector condition, and relatively large differences were found in the MOX-reflector assembly and core calculations. Although HELIOS 1.9 utilizes an inflow transport correction, it seems to have a limited effect on the error in the baffle-reflector condition.

  5. Performance evaluation of tile-based Fisher Ratio analysis using a benchmark yeast metabolome dataset.

    Science.gov (United States)

    Watson, Nathanial E; Parsons, Brendon A; Synovec, Robert E

    2016-08-12

    Performance of tile-based Fisher Ratio (F-ratio) data analysis, recently developed for discovery-based studies using comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC×GC-TOFMS), is evaluated with a metabolomics dataset that had been previously analyzed in great detail, but while taking a brute force approach. The previously analyzed data (referred to herein as the benchmark dataset) were intracellular extracts from Saccharomyces cerevisiae (yeast), either metabolizing glucose (repressed) or ethanol (derepressed), which define the two classes in the discovery-based analysis to find metabolites that are statistically different in concentration between the two classes. Beneficially, this previously analyzed dataset provides a concrete means to validate the tile-based F-ratio software. Herein, we demonstrate and validate the significant benefits of applying tile-based F-ratio analysis. The yeast metabolomics data are analyzed more rapidly in about one week versus one year for the prior studies with this dataset. Furthermore, a null distribution analysis is implemented to statistically determine an adequate F-ratio threshold, whereby the variables with F-ratio values below the threshold can be ignored as not class distinguishing, which provides the analyst with confidence when analyzing the hit table. Forty-six of the fifty-four benchmarked changing metabolites were discovered by the new methodology while consistently excluding all but one of the benchmarked nineteen false positive metabolites previously identified.
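The two ingredients of this abstract, a per-variable F-ratio and a permutation-based null distribution for thresholding, can be sketched generically. This is not the tile-based GC×GC-TOFMS software; it is a minimal illustration of the statistical idea, with all names hypothetical.

```python
import numpy as np

def f_ratio(x, labels):
    """One-way ANOVA F-ratio (between-class over within-class variance)
    for a single variable x with class labels."""
    classes = np.unique(labels)
    overall = x.mean()
    between = sum(
        x[labels == c].size * (x[labels == c].mean() - overall) ** 2
        for c in classes) / (len(classes) - 1)
    within = sum(
        ((x[labels == c] - x[labels == c].mean()) ** 2).sum()
        for c in classes) / (x.size - len(classes))
    return between / within

def null_threshold(data, labels, q=95, n_perm=200, seed=0):
    """Permutation null distribution of the maximum F-ratio across
    variables; variables scoring below the q-th percentile of this null
    can be ignored as not class-distinguishing (a sketch of the
    thresholding idea described in the abstract)."""
    rng = np.random.default_rng(seed)
    null = []
    for _ in range(n_perm):
        perm = rng.permutation(labels)
        null.append(max(f_ratio(col, perm) for col in data.T))
    return np.percentile(null, q)
```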

  6. OECD/DOE/CEA VVER-1000 Coolant Transient Benchmark. Summary Record of the Third Workshop (V1000-CT3)

    International Nuclear Information System (INIS)

    2005-01-01

    The overall objective of the VVER-1000 coolant transient (V1000CT) benchmark is to assess computer codes used in the safety analysis of VVER power plants, specifically for their use in analysis of reactivity transients in a VVER-1000. The V1000CT benchmark consists of two phases: V1000CT-1 is a simulation of the switching on of one main coolant pump (MCP) when the other three MCPs are in operation, and V1000CT-2 concerns calculation of coolant mixing tests and main steam line break (MSLB) scenarios. Each of the two phases contains three exercises. The reference problem chosen for simulation in Phase 1 is a MCP switching on when the other three main coolant pumps are in operation in a VVER-1000. This event is characterized by a rapid increase in the flow through the core resulting in a coolant temperature decrease, which is spatially dependent. This leads to insertion of spatially distributed positive reactivity due to the modelled feedback mechanisms and a non-symmetric power distribution. Simulation of the transient requires evaluation of the core response from a multi-dimensional perspective (coupled three-dimensional neutronics/core thermal-hydraulics) supplemented by a one-dimensional simulation of the remainder of the reactor coolant system. Three exercises are defined in the framework of Phase 1: a) Exercise 1 - Point kinetics plant simulation; b) Exercise 2 - Coupled 3-D neutronics/core thermal-hydraulics response evaluation; c) Exercise 3 - Best-estimate coupled 3-D core/plant system transient modelling. In addition to the measured (experiment) scenario, extreme calculation scenarios were defined in the frame of Exercise 3 for better testing of 3-D neutronics/thermal-hydraulics techniques. The proposals concerned rod ejection simulations with scram set points at two different power levels. The technical topics presented at this workshop were: review of the benchmark activities after the 2nd Workshop, and discussion of participants' feedback and introduced modifications.

  7. Analysis of CSNI benchmark test on containment using the code CONTRAN

    International Nuclear Information System (INIS)

    Haware, S.K.; Ghosh, A.K.; Raj, V.V.; Kakodkar, A.

    1994-01-01

    A programme of experimental as well as analytical studies on the behaviour of nuclear reactor containment is being actively pursued. A large number of experiments on pressure and temperature transients have been carried out on a one-tenth scale model vapour suppression pool containment experimental facility, simulating the 220 MWe Indian Pressurised Heavy Water Reactors. A programme of development of computer codes is underway to enable prediction of containment behaviour under accident conditions. This includes codes for pressure and temperature transients, hydrogen behaviour, aerosol behaviour, etc. As a part of this ongoing work, the code CONTRAN (CONtainment TRansient ANalysis) has been developed for predicting the thermal hydraulic transients in a multicompartment containment. For the assessment of the hydrogen behaviour, models for hydrogen transportation in a multicompartment configuration and hydrogen combustion have been incorporated in the code CONTRAN. The code also has models for the heat and mass transfer due to condensation and convection heat transfer. The structural heat transfer is modeled using the one-dimensional transient heat conduction equation. Extensive validation exercises have been carried out with the code CONTRAN. The code CONTRAN has been successfully used for the analysis of the benchmark test devised by the Committee on the Safety of Nuclear Installations (CSNI) of the Organisation for Economic Cooperation and Development (OECD) to test the numerical accuracy and convergence errors in the computation of mass and energy conservation for the fluid and in the computation of heat conduction in structural walls. The salient features of the code CONTRAN, a description of the CSNI benchmark test and a comparison of the CONTRAN predictions with the benchmark test results are presented and discussed in the paper. (author)
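The one-dimensional transient heat conduction model mentioned above, dT/dt = α·d²T/dx², is classically solved with an explicit finite-difference scheme. The sketch below is illustrative only (CONTRAN's actual discretization is not reproduced), with the usual explicit stability limit r = αΔt/Δx² ≤ 1/2 enforced:

```python
import numpy as np

def conduct_1d(T0, alpha, dx, dt, steps, T_left, T_right):
    """Explicit finite-difference solution of the 1-D transient heat
    conduction equation dT/dt = alpha * d2T/dx2 with fixed-temperature
    boundary conditions (an illustrative structural-wall model)."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme stability limit violated"
    T = np.array(T0, dtype=float)
    for _ in range(steps):
        T[0], T[-1] = T_left, T_right
        # interior nodes: T_i += r * (T_{i+1} - 2 T_i + T_{i-1})
        T[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[0], T[-1] = T_left, T_right
    return T
```

Run long enough, the profile relaxes to the linear steady state between the two wall temperatures, which provides a simple verification of the scheme.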

  8. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  9. Pool critical assembly pressure vessel facility benchmark

    International Nuclear Information System (INIS)

    Remec, I.; Kam, F.B.K.

    1997-07-01

    This pool critical assembly (PCA) pressure vessel wall facility benchmark (PCA benchmark) is described and analyzed in this report. Analysis of the PCA benchmark can be used for partial fulfillment of the requirements for the qualification of the methodology for pressure vessel neutron fluence calculations, as required by the US Nuclear Regulatory Commission regulatory guide DG-1053. Section 1 of this report describes the PCA benchmark and provides all data necessary for the benchmark analysis. The measured quantities, to be compared with the calculated values, are the equivalent fission fluxes. In Section 2 the analysis of the PCA benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed for three ENDF/B-VI-based multigroup libraries: BUGLE-93, SAILOR-95, and BUGLE-96. An excellent agreement of the calculated (C) and measured (M) equivalent fission fluxes was obtained. The arithmetic average C/M for all the dosimeters (total of 31) was 0.93 ± 0.03 and 0.92 ± 0.03 for the SAILOR-95 and BUGLE-96 libraries, respectively. The average C/M ratio, obtained with the BUGLE-93 library, for the 28 measurements was 0.93 ± 0.03 (the neptunium measurements in the water and air regions were overpredicted and excluded from the average). No systematic decrease in the C/M ratios with increasing distance from the core was observed for any of the libraries used.

  10. Investigations of the VVER-1000 coolant transient benchmark phase 1 with the coupled code system RELAP5/PARCS

    International Nuclear Information System (INIS)

    Sanchez-Espinoza, Victor Hugo

    2008-07-01

    As part of the reactor dynamics activities of FZK/IRS, the qualification of best-estimate coupled code systems for reactor safety evaluations is a key step toward improving their prediction capability and acceptability. The VVER-1000 Coolant Transient Benchmark Phase 1 represents an excellent opportunity to validate the simulation capability of the coupled code system RELAP5/PARCS regarding both the thermal hydraulic plant response (RELAP5), using measured data obtained during commissioning tests at the Kozloduy nuclear power plant unit 6, and the neutron kinetics models of PARCS for hexagonal geometries. Phase 1 is devoted to the analysis of the switching on of one main coolant pump while the other three pumps are in operation. It includes the following exercises: (a) investigation of the integral plant response using a best-estimate thermal hydraulic system code with a point kinetics model; (b) analysis of the core response for given initial and transient thermal hydraulic boundary conditions using a coupled code system with a 3D-neutron kinetics model; and (c) investigation of the integral plant response using a best-estimate coupled code system with 3D-neutron kinetics. Already before the test, complex flow conditions exist within the RPV, e.g. coolant mixing in the upper plenum caused by the reverse flow through loop-3 with the stopped pump. The test is initiated by switching on the main coolant pump of loop-3, which leads to a reversal of the flow through the respective piping. After about 13 s the mass flow rate through this loop reaches values comparable with those of the other loops. During this time period, the increased primary coolant flow causes a reduction of the core averaged coolant temperature and thus an increase of the core power. Later on, the power stabilizes at a level higher than the initial power. In this analysis, special attention is paid to the prediction of the spatial asymmetrical core cooling during the test and its effects on the

  11. Investigations of the VVER-1000 coolant transient benchmark phase 1 with the coupled code system RELAP5/PARCS

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Espinoza, Victor Hugo

    2008-07-15

    As part of the reactor dynamics activities of FZK/IRS, the qualification of best-estimate coupled code systems for reactor safety evaluations is a key step toward improving their prediction capability and acceptability. The VVER-1000 Coolant Transient Benchmark Phase 1 represents an excellent opportunity to validate the simulation capability of the coupled code system RELAP5/PARCS regarding both the thermal hydraulic plant response (RELAP5), using measured data obtained during commissioning tests at the Kozloduy nuclear power plant unit 6, and the neutron kinetics models of PARCS for hexagonal geometries. Phase 1 is devoted to the analysis of the switching on of one main coolant pump while the other three pumps are in operation. It includes the following exercises: (a) investigation of the integral plant response using a best-estimate thermal hydraulic system code with a point kinetics model; (b) analysis of the core response for given initial and transient thermal hydraulic boundary conditions using a coupled code system with a 3D-neutron kinetics model; and (c) investigation of the integral plant response using a best-estimate coupled code system with 3D-neutron kinetics. Already before the test, complex flow conditions exist within the RPV, e.g. coolant mixing in the upper plenum caused by the reverse flow through loop-3 with the stopped pump. The test is initiated by switching on the main coolant pump of loop-3, which leads to a reversal of the flow through the respective piping. After about 13 s the mass flow rate through this loop reaches values comparable with those of the other loops. During this time period, the increased primary coolant flow causes a reduction of the core averaged coolant temperature and thus an increase of the core power. Later on, the power stabilizes at a level higher than the initial power. In this analysis, special attention is paid to the prediction of the spatial asymmetrical core cooling during the test and its effects on the

  12. Specification of phase 3 benchmark (Hex-Z heterogeneous and burnup calculation)

    International Nuclear Information System (INIS)

    Kim, Y.I.

    2002-01-01

    During the second RCM of the IAEA Co-ordinated Research Project 'Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects', the following items were identified as important. Heterogeneity will affect absolute core reactivity. Rod worths could be considerably reduced by heterogeneity effects, depending on their detailed design. Heterogeneity effects will affect the resonance self-shielding in the treatment of the fuel Doppler, steel Doppler and sodium density effects; however, it was considered more important to concentrate on the sodium density effect in order to reduce the calculational effort required. It was also recognized that burnup effects will have an influence on fuel Doppler and sodium worths. A benchmark for the assessment of the heterogeneity effect for Phase 3 was defined. It is to be performed for the Hex-Z model of the reactor only; no calculations will be performed for the R-Z model. For comparison with heterogeneous evaluations, the control rod worth will be calculated at the beginning of the equilibrium cycle, based on the homogeneous model. The definitions of rod raised and rod inserted for SHR are given, using the composition numbers.
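The control rod worth compared between the homogeneous and heterogeneous evaluations is conventionally obtained from the two eigenvalues as Δρ = 1/k_in − 1/k_out. A one-line helper, with any example k-values understood as hypothetical rather than benchmark results:

```python
def rod_worth_pcm(k_out, k_in):
    """Control rod worth in pcm from the core eigenvalue with the rod
    raised (k_out) and inserted (k_in):
        delta-rho = rho_out - rho_in = 1/k_in - 1/k_out."""
    return (1.0 / k_in - 1.0 / k_out) * 1e5
```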

  13. Analysis of Three-Phase Rectifier Systems with Controlled DC-Link Current Under Unbalanced Grids

    DEFF Research Database (Denmark)

    Kumar, Dinesh; Davari, Pooya; Zare, Firuz

    2017-01-01

    Voltage unbalance is the most common disturbance in distribution networks and has undesirable effects on many grid-connected power electronics systems, including Adjustable Speed Drives (ASD). Severe voltage unbalance can force three-phase rectifiers into almost single-phase operation, which degrades the grid power quality and also imposes a significant negative impact on the ASD system. This major power quality issue affecting conventional rectifiers can be attenuated by controlling the DC-link current based on an Electronic Inductor (EI) technique. The purpose of this digest is to analyze and compare the performance of an EI with a conventional three-phase rectifier under unbalanced grid conditions. Experimental and simulation results validate the proposed mathematical modelling. Further analysis and benchmarking will be provided in the final paper.

  14. A New Performance Improvement Model: Adding Benchmarking to the Analysis of Performance Indicator Data.

    Science.gov (United States)

    Al-Kuwaiti, Ahmed; Homa, Karen; Maruthamuthu, Thennarasu

    2016-01-01

    A performance improvement model was developed that focuses on the analysis and interpretation of performance indicator (PI) data using statistical process control and benchmarking. PIs are suitable for comparison with benchmarks only if the data fall within the statistically accepted limits, that is, show only random variation. Specifically, if there is no significant special-cause variation over a period of time, then the data are ready to be benchmarked. The proposed Define, Measure, Control, Internal Threshold, and Benchmark model is adapted from the Define, Measure, Analyze, Improve, Control (DMAIC) model. The model consists of the following five steps: Step 1. Define the process; Step 2. Monitor and measure the variation over the period of time; Step 3. Check the variation of the process; if stable (no significant variation), go to Step 4; otherwise, control variation with the help of an action plan; Step 4. Develop an internal threshold and compare the process with it; Step 5.1. Compare the process with an internal benchmark; and Step 5.2. Compare the process with an external benchmark. The steps are illustrated through the use of health care-associated infection (HAI) data collected for 2013 and 2014 from the Infection Control Unit, King Fahd Hospital, University of Dammam, Saudi Arabia. Monitoring variation is an important strategy in understanding and learning about a process. In the example, HAI was monitored for variation in 2013, and the desire for a more predictable process prompted an action plan to control variation. The action plan was successful, as noted by the shift in the 2014 data compared to the historical average; in addition, the variation was reduced. The model is subject to limitations: for example, it cannot be used without benchmarks, which need to be calculated the same way with similar patient populations, and it focuses only on the "Analyze" part of the DMAIC model.
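The stability check in Steps 2-3 can be sketched with a simple 3-sigma individuals chart. This is an illustration under the assumption of such a chart (the abstract does not specify the exact chart type), checking whether a PI series shows only random variation and is therefore ready to be benchmarked:

```python
import numpy as np

def control_limits(x):
    """3-sigma control limits for an individuals chart (an assumed chart
    type; the model's exact statistical-process-control method is not
    specified in the abstract)."""
    mean, sd = np.mean(x), np.std(x, ddof=1)
    return mean - 3 * sd, mean + 3 * sd

def ready_to_benchmark(x):
    """A PI series is ready for comparison with a benchmark only if every
    point lies within the control limits, i.e. the process shows only
    random (common-cause) variation."""
    lo, hi = control_limits(x)
    return bool(np.all((x >= lo) & (x <= hi)))
```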

  15. A benchmark for statistical microarray data analysis that preserves actual biological and technical variance.

    Science.gov (United States)

    De Hertogh, Benoît; De Meulder, Bertrand; Berger, Fabrice; Pierre, Michael; Bareke, Eric; Gaigneaux, Anthoula; Depiereux, Eric

    2010-01-11

    Recent reanalysis of spike-in datasets underscored the need for new and more accurate benchmark datasets for statistical microarray analysis. We present here a new method using biologically relevant data to evaluate the performance of statistical methods. Our method ranks the probesets from a dataset composed of publicly available biological microarray data and extracts subset matrices with precise information/noise ratios. It can be used to determine the capability of different methods to better estimate variance for a given number of replicates. The mean-variance and mean-fold-change relationships of the matrices revealed a closer approximation of biological reality. Performance analysis refined the results from benchmarks published previously. We show that the Shrinkage t test (close to Limma) was the best of the methods tested, except when two replicates were examined, where the Regularized t test and the Window t test performed slightly better. The R scripts used for the analysis are available at http://urbm-cluster.urbm.fundp.ac.be/~bdemeulder/.

  16. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth.

  17. Benchmarking the Netherlands. Benchmarking for growth

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity

  18. Benchmark Specification for an HTR Fuelled with Reactor-grade Plutonium (or Reactor-grade Pu/Th and U/Th). Proposal version 2

    International Nuclear Information System (INIS)

    Hosking, J.G.; Newton, T.D.; Morris, P.

    2007-01-01

    This benchmark proposal builds upon that specified in the NEA/NSC/DOC(2003)22 report. In addition to the three phases described in that report, another two phases have now been defined. Additional items for calculation have also been added to the existing phases. It is intended that further items may be added to the benchmark after consultation with its participants. Although the benchmark is specifically designed to provide inter-comparisons for plutonium- and thorium-containing fuels, it is proposed that phases considering simple calculations for a uranium fuel cell and uranium core be included. The purpose of these is to identify any increased uncertainties, relative to uranium fuel, associated with the lesser-known fuels to be investigated in different phases of this benchmark. The first phase considers an infinite array of fuel pebbles fuelled with uranium fuel. Phase 2 considers a similar array of pebbles but for plutonium fuel. Phase 3 continues the plutonium fuel inter-comparisons within the context of whole core calculations. Calculations for Phase 4 are for a uranium-fuelled core. Phase 5 considers an infinite array of pebbles containing thorium. In setting the benchmark, the requirements in the definition of the LEUPRO-12 PROTEUS benchmark have been considered. Participants were invited to submit both deterministic results and, where appropriate, results from Monte Carlo calculations. Fundamental nuclear data, Avogadro's number, natural abundance data and atomic weights have been taken from the references indicated in the document

  19. Performance analysis of fusion nuclear-data benchmark experiments for light to heavy materials in MeV energy region with a neutron spectrum shifter

    International Nuclear Information System (INIS)

    Murata, Isao; Ohta, Masayuki; Miyamaru, Hiroyuki; Kondo, Keitaro; Yoshida, Shigeo; Iida, Toshiyuki; Ochiai, Kentaro; Konno, Chikara

    2011-01-01

    Nuclear data are indispensable for the development of fusion reactor candidate materials. However, benchmarking of the nuclear data in the MeV energy region is not yet adequate. In the present study, benchmark performance in the MeV energy region was investigated theoretically for experiments using a 14 MeV neutron source. We carried out a systematic analysis for light to heavy materials. As a result, the benchmark performance for the neutron spectrum was confirmed to be acceptable, while for gamma-rays it was not sufficiently accurate. Consequently, a spectrum shifter has to be applied. Beryllium had the best performance as a shifter. Moreover, a preliminary examination was made of whether it is really acceptable that only the spectrum before the last collision is considered in the benchmark performance analysis. It was pointed out that not only the last collision but also earlier collisions should be considered equally in the benchmark performance analysis.

  20. Criteria of benchmark selection for efficient flexible multibody system formalisms

    Directory of Open Access Journals (Sweden)

    Valášek M.

    2007-10-01

    The paper deals with the selection process of benchmarks for testing and comparing efficient flexible multibody formalisms. The existing benchmarks are briefly summarized. The purposes of benchmark selection are investigated. The result of this analysis is the formulation of criteria of benchmark selection for flexible multibody formalisms. Based on these criteria, an initial set of suitable benchmarks is described. In addition, the evaluation measures are revised and extended.

  1. Higgs pair production: choosing benchmarks with cluster analysis

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho, Alexandra; Dall’Osso, Martino; Dorigo, Tommaso [Dipartimento di Fisica e Astronomia and INFN, Sezione di Padova,Via Marzolo 8, I-35131 Padova (Italy); Goertz, Florian [CERN,1211 Geneva 23 (Switzerland); Gottardo, Carlo A. [Physikalisches Institut, Universität Bonn,Nussallee 12, 53115 Bonn (Germany); Tosi, Mia [CERN,1211 Geneva 23 (Switzerland)

    2016-04-20

    New physics theories often depend on a large number of free parameters. The phenomenology they predict for fundamental physics processes is in some cases drastically affected by the precise value of those free parameters, while in other cases it is left basically invariant at the level of detail experimentally accessible. When designing a strategy for the analysis of experimental data in the search for a signal predicted by a new physics model, it appears advantageous to categorize the parameter space describing the model according to the corresponding kinematical features of the final state. A multi-dimensional test statistic can be used to gauge the degree of similarity in the kinematics predicted by different models; a clustering algorithm using that metric may allow the division of the space into homogeneous regions, each of which can be successfully represented by a benchmark point. Searches targeting those benchmarks are then guaranteed to be sensitive to a large area of the parameter space. In this document we show a practical implementation of the above strategy for the study of non-resonant production of Higgs boson pairs in the context of extensions of the standard model with anomalous couplings of the Higgs bosons. A non-standard value of those couplings may significantly enhance the Higgs boson pair-production cross section, such that the process could be detectable with the data that the LHC will collect in Run 2.
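The clustering step described above can be sketched with a toy k-means over "kinematic feature" vectors: parameter-space points that predict similar final-state kinematics land in the same cluster, and the member nearest each centroid serves as the benchmark point. The feature vectors below are invented 2-D stand-ins (a real analysis would use e.g. binned kinematic distributions and the paper's test statistic, not Euclidean distance).

```python
# Toy sketch: cluster parameter-space points by predicted kinematics and
# pick one benchmark per cluster. All data here are illustrative.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=100):
    centroids = list(points[:k])  # simple deterministic initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda j: dist2(p, centroids[j]))].append(p)
        centroids = [
            tuple(sum(xs) / len(c) for xs in zip(*c)) if c else centroids[j]
            for j, c in enumerate(clusters)
        ]
    return centroids, clusters

def benchmark_points(clusters, centroids):
    """One representative per cluster: the member nearest its centroid."""
    return [min(c, key=lambda p: dist2(p, ctr))
            for c, ctr in zip(clusters, centroids) if c]

# Two well-separated groups of hypothetical kinematic feature vectors.
space = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centroids, clusters = kmeans(space, k=2)
print(benchmark_points(clusters, centroids))
```

A search optimized for each returned benchmark then covers the whole region of parameter space its cluster represents.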

  2. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.
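A whole-building metric of the kind the guide describes can be computed directly from utility data: site energy use intensity (EUI), annual energy per gross square foot, compared against a peer benchmark. The numbers and the peer median below are made up for illustration; they are not Labs21 database values.

```python
# Sketch of a whole-building benchmarking metric: site EUI in
# kBtu/sq ft/yr (1 kWh = 3.412 kBtu, 1 therm = 100 kBtu).
# All inputs and the peer benchmark are hypothetical.

def site_eui(annual_kwh_electric, annual_therms_gas, gross_sq_ft):
    kbtu = annual_kwh_electric * 3.412 + annual_therms_gas * 100.0
    return kbtu / gross_sq_ft

eui = site_eui(annual_kwh_electric=2_500_000,
               annual_therms_gas=60_000,
               gross_sq_ft=100_000)
peer_median = 300.0  # hypothetical peer-group median, kBtu/sq ft/yr
print(f"EUI = {eui:.1f} kBtu/sq ft/yr; "
      f"{'above' if eui > peer_median else 'at or below'} peer median")
```

Comparing the computed metric against a peer benchmark is what turns raw utility data into the "potential actions" step the guide outlines.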

  3. NODAL3 Sensitivity Analysis for NEACRP 3D LWR Core Transient Benchmark (PWR)

    Directory of Open Access Journals (Sweden)

    Surian Pinem

    2016-01-01

    This paper reports the results of a sensitivity analysis of the multidimensional, multigroup neutron diffusion NODAL3 code for the NEACRP 3D LWR core transient benchmarks (PWR). The code input parameters covered in the sensitivity analysis are the radial and axial node sizes (the number of radial nodes per fuel assembly and the number of axial layers), heat conduction node size in the fuel pellet and cladding, and the maximum time step. The output parameters considered in this analysis followed the above-mentioned core transient benchmarks, that is, power peak, time of power peak, power, averaged Doppler temperature, maximum fuel centerline temperature, and coolant outlet temperature at the end of simulation (5 s). The sensitivity analysis results showed that the radial node size and maximum time step have a significant effect on the transient parameters, especially the time of power peak, for the HZP and HFP conditions. The number of ring divisions for the fuel pellet and cladding has a negligible effect on the transient solutions. For production work on PWR transient analysis, based on the present sensitivity analysis results, we recommend NODAL3 users to use 2×2 radial nodes per assembly, 1×18 axial layers per assembly, a maximum time step of 10 ms, and 9 and 1 ring divisions for the fuel pellet and cladding, respectively.

  4. Integral benchmarks with reference to thorium fuel cycle

    International Nuclear Information System (INIS)

    Ganesan, S.

    2003-01-01

    This is a PowerPoint presentation about the Indian participation in the CRP 'Evaluated Data for the Thorium-Uranium Fuel Cycle'. The plans and scope of the Indian participation are to provide selected integral experimental benchmarks for nuclear data validation, including Indian thorium burn-up benchmarks, post-irradiation examination studies, comparison of basic evaluated data files, and analysis of selected benchmarks for the Th-U fuel cycle

  5. Summary Report of Consultants' Meeting on Accuracy of Experimental and Theoretical Nuclear Cross-Section Data for Ion Beam Analysis and Benchmarking

    International Nuclear Information System (INIS)

    Abriola, Daniel; Dimitriou, Paraskevi; Gurbich, Alexander F.

    2013-11-01

    A summary is given of a Consultants' Meeting assembled to assess the accuracy of experimental and theoretical nuclear cross-section data for Ion Beam Analysis and the role of benchmarking experiments. The participants discussed the different approaches to assigning uncertainties to evaluated data, and presented results of benchmark experiments performed in their laboratories. They concluded that priority should be given to the validation of cross-section data by benchmark experiments, and recommended that an experts meeting be held to prepare the guidelines, methodology and work program of a future coordinated project on benchmarking.

  6. SP2Bench: A SPARQL Performance Benchmark

    Science.gov (United States)

    Schmidt, Michael; Hornung, Thomas; Meier, Michael; Pinkel, Christoph; Lausen, Georg

    A meaningful analysis and comparison of both existing storage schemes for RDF data and evaluation approaches for SPARQL queries necessitates a comprehensive and universal benchmark platform. We present SP2Bench, a publicly available, language-specific performance benchmark for the SPARQL query language. SP2Bench is settled in the DBLP scenario and comprises a data generator for creating arbitrarily large DBLP-like documents and a set of carefully designed benchmark queries. The generated documents mirror vital key characteristics and social-world distributions encountered in the original DBLP data set, while the queries implement meaningful requests on top of this data, covering a variety of SPARQL operator constellations and RDF access patterns. In this chapter, we discuss requirements and desiderata for SPARQL benchmarks and present the SP2Bench framework, including its data generator, benchmark queries and performance metrics.

  7. Burn-up Credit Criticality Safety Benchmark Phase III-C. Nuclide Composition and Neutron Multiplication Factor of a Boiling Water Reactor Spent Fuel Assembly for Burn-up Credit and Criticality Control of Damaged Nuclear Fuel

    International Nuclear Information System (INIS)

    Suyama, K.; Uchida, Y.; Kashima, T.; Ito, T.; Miyaji, T.

    2016-01-01

    similar to those in the previous Phase III-B benchmark. A constant specific power of 25.3 MW/tHM is assumed for a final burn-up value of 50 GWd/tHM. Three cases of cooling time are requested after the burn-up: 0, 5 and 15 years. A constant void fraction of 0, 40 or 70% during the burn-up is assumed. The present benchmark is a compilation of 35 calculation results from 16 institutes in 9 countries covering different cross-section libraries. The total number of the calculation results is twice that of the previous Phase III-B benchmark. Concerning nuclide density, the 2-sigma (r) of U-235 is less than 6% and those of Pu-239, Pu-240 and Pu-241 are less than 7%. For minor actinides, 2-sigma (r) becomes larger than 10% because of a difference in the cross-section data adopted by each calculation code. For fission product isotopes, 2-sigma (r) is less than 7%, except for some nuclides. Generally, the mutual agreement of nuclide densities has improved from the previous benchmark. For the neutron multiplication factors, 2-sigma (r) is less than 1.1% for lower burn-up; it becomes about 1.6% at 10 GWd/t and gets smaller at 30 and 50 GWd/t. This might be a sufficient agreement considering that the adopted nuclides for the criticality calculation differ in the diverse methodologies used. Comparison of peak k-inf shows that it has approximately 2-sigma (r) of 1% and it becomes larger for higher void fraction cases. Comparison of the burn-up distribution results is not the main purpose of this benchmark, but was requested to confirm the credibility of the calculation. A general good agreement of the burn-up distribution is shown. However, how gadolinium depletion is handled may still pose an issue to solve and some uncertainty depending on the analysis code used still remains. Using this benchmark, progress of the burn-up calculation capability is confirmed.
Introduction of continuous-energy Monte Carlo codes has a clear advantage in treating multi-dimensional burn-up calculation problems, even though
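The "2-sigma (r)" figures quoted throughout this record are twice the sample standard deviation of the participants' submitted values, relative to their mean. A minimal sketch of that spread calculation, with invented (not benchmark) nuclide densities:

```python
# Sketch of the inter-participant spread metric: 2-sigma(r) = twice the
# sample standard deviation relative to the mean, as a percentage.
# The U-235 densities below are invented, not benchmark submissions.
import statistics

def two_sigma_rel(values):
    """2 * (sample standard deviation / mean), in percent."""
    return 200.0 * statistics.stdev(values) / statistics.mean(values)

u235_density = [1.02e-4, 1.05e-4, 0.99e-4, 1.03e-4, 1.01e-4]  # atoms/(b*cm)
print(f"2-sigma(r) = {two_sigma_rel(u235_density):.1f}%")
```

A small 2-sigma(r) across independently submitted results is what the benchmark reads as mutual agreement between codes and libraries.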

  8. Principles for Developing Benchmark Criteria for Staff Training in Responsible Gambling.

    Science.gov (United States)

    Oehler, Stefan; Banzer, Raphaela; Gruenerbl, Agnes; Malischnig, Doris; Griffiths, Mark D; Haring, Christian

    2017-03-01

    One approach to minimizing the negative consequences of excessive gambling is staff training to reduce the rate of development of new cases of harm or disorder among customers. The primary goal of the present study was to assess suitable benchmark criteria for the training of gambling employees at casinos and lottery retailers. The study utilised the Delphi Method, a survey with one qualitative and two quantitative phases. A total of 21 invited international experts in the responsible gambling field participated in all three phases. A total of 75 performance indicators were outlined and assigned to six categories: (1) criteria of content, (2) modelling, (3) qualification of trainer, (4) framework conditions, (5) sustainability and (6) statistical indicators. Nine of the 75 indicators were rated as very important by 90% or more of the experts. Unanimous support was given to indicators such as (1) comprehensibility and (2) concrete action guidance for dealing with problem gamblers. Additionally, the study examined the implementation of benchmarking, when it should be conducted, and who should be responsible. Results indicated that benchmarking should be conducted regularly, every 1-2 years, and that one institution should be clearly defined and primarily responsible for benchmarking. The results of the present study provide the basis for developing benchmarks for staff training in responsible gambling.

  9. IRIS-2012 OECD/NEA/CSNI benchmark: Numerical simulations of structural impact

    International Nuclear Information System (INIS)

    Orbovic, Nebojsa; Tarallo, Francois; Rambach, Jean-Mathieu; Sagals, Genadijs; Blahoianu, Andrei

    2015-01-01

    A benchmark of numerical simulations related to missile impact on reinforced concrete (RC) slabs was launched in the framework of the OECD/NEA/CSNI research program “Improving Robustness Assessment Methodologies for Structures Impacted by Missiles”, under the acronym IRIS. The goal of the research program is to simulate RC structural behavior, both flexural and punching, under deformable and rigid missile impact. The first phase, called IRIS-2010, was a blind prediction of the tests performed at the VTT facility in Espoo, Finland. Two sets of simulations were performed, corresponding to two series of tests: (1) two tests on the impact of a deformable missile exhibiting damage mainly by flexural (so-called “flexural tests”) or global response and (2) three tests on the impact of a rigid missile exhibiting damage mainly by punching response (so-called “punching tests”) or local response. The simulation results showed significant scatter (coefficient of variation up to 132%) for both flexural and punching cases. IRIS-2012 is the second, post-test phase of the benchmark, with the goal of improving simulations and reducing the scatter of the results. Based on the IRIS-2010 recommendations, and to better calibrate concrete constitutive models, a series of tri-axial tests as well as Brazilian tests were performed as a part of the IRIS-2012 benchmark. 25 teams from 11 countries took part in this exercise. The majority of participants had taken part in the IRIS-2010 benchmark. Participants showed significant improvement in reducing epistemic uncertainties in impact simulations. Several teams presented both finite element (FE) and simplified analysis as per recommendations of the IRIS-2010. The improvements were at the level of simulation results but also at the level of understanding of impact phenomena and its modeling. Due to the complexity of the physical phenomena and its simulation (high geometric and material non-linear behavior) and inherent epistemic and aleatory uncertainties, the

  10. IRIS-2012 OECD/NEA/CSNI benchmark: Numerical simulations of structural impact

    Energy Technology Data Exchange (ETDEWEB)

    Orbovic, Nebojsa, E-mail: nebojsa.orbovic@cnsc-ccsn.gc.ca [Canadian Nuclear Safety Commission, Ottawa, ON (Canada); Tarallo, Francois [IRSN, Fontenay aux Roses (France); Rambach, Jean-Mathieu [Géodynamique et Structures, Bagneux (France); Sagals, Genadijs; Blahoianu, Andrei [Canadian Nuclear Safety Commission, Ottawa, ON (Canada)

    2015-12-15

    A benchmark of numerical simulations related to missile impact on reinforced concrete (RC) slabs was launched in the framework of the OECD/NEA/CSNI research program “Improving Robustness Assessment Methodologies for Structures Impacted by Missiles”, under the acronym IRIS. The goal of the research program is to simulate RC structural behavior, both flexural and punching, under deformable and rigid missile impact. The first phase, called IRIS-2010, was a blind prediction of the tests performed at the VTT facility in Espoo, Finland. Two sets of simulations were performed, corresponding to two series of tests: (1) two tests on the impact of a deformable missile exhibiting damage mainly by flexural (so-called “flexural tests”) or global response and (2) three tests on the impact of a rigid missile exhibiting damage mainly by punching response (so-called “punching tests”) or local response. The simulation results showed significant scatter (coefficient of variation up to 132%) for both flexural and punching cases. IRIS-2012 is the second, post-test phase of the benchmark, with the goal of improving simulations and reducing the scatter of the results. Based on the IRIS-2010 recommendations, and to better calibrate concrete constitutive models, a series of tri-axial tests as well as Brazilian tests were performed as a part of the IRIS-2012 benchmark. 25 teams from 11 countries took part in this exercise. The majority of participants had taken part in the IRIS-2010 benchmark. Participants showed significant improvement in reducing epistemic uncertainties in impact simulations. Several teams presented both finite element (FE) and simplified analysis as per recommendations of the IRIS-2010. The improvements were at the level of simulation results but also at the level of understanding of impact phenomena and its modeling. Due to the complexity of the physical phenomena and its simulation (high geometric and material non-linear behavior) and inherent epistemic and aleatory uncertainties, the

  11. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  12. Quantitative Performance Analysis of the SPEC OMPM2001 Benchmarks

    Directory of Open Access Journals (Sweden)

    Vishal Aslot

    2003-01-01

    The state of modern computer systems has evolved to allow easy access to multiprocessor systems by supporting multiple processors on a single physical package. As the multiprocessor hardware evolves, new ways of programming it are also developed. Some inventions may merely be adopting and standardizing the older paradigms. One such evolving standard for programming shared-memory parallel computers is the OpenMP API. The Standard Performance Evaluation Corporation (SPEC) has created a suite of parallel programs called SPEC OMP to compare and evaluate modern shared-memory multiprocessor systems using the OpenMP standard. We have studied these benchmarks in detail to understand their performance on a modern architecture. In this paper, we present detailed measurements of the benchmarks. We organize, summarize, and display our measurements using a Quantitative Model. We present a detailed discussion and derivation of the model. Also, we discuss the important loops in the SPEC OMPM2001 benchmarks and the reasons for less than ideal speedup on our platform.
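The basic quantities such a quantitative performance model organizes are speedup and parallel efficiency derived from wall-clock times. A minimal sketch, with invented timings rather than actual SPEC OMPM2001 measurements:

```python
# Speedup and parallel efficiency from wall-clock times.
# Timings below are hypothetical, not SPEC OMPM2001 results.

def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_threads):
    return speedup(t_serial, t_parallel) / n_threads

times = {1: 1200.0, 4: 340.0, 8: 200.0}  # seconds at 1, 4, 8 threads
for n in (4, 8):
    s = speedup(times[1], times[n])
    e = efficiency(times[1], times[n], n)
    print(f"{n} threads: speedup {s:.2f}, efficiency {e:.0%}")
```

Efficiency falling as the thread count grows is exactly the "less than ideal speedup" behavior the paper traces back to individual loops.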

  13. Benchmark Analysis of Institutional University Autonomy Higher Education Sectors in Denmark, Lithuania, Romania, Scotland and Sweden

    DEFF Research Database (Denmark)

    Turcan, Romeo V.; Bugaian, Larisa; Gulieva, Valeria

    2015-01-01

    This chapter consolidates the process and the findings from the four benchmark reports. It presents (i) the methodology and methods employed for data collection and data analysis; (ii) the comparative analysis of HE sectors and respective education systems in these countries; (iii) the executive ...

  14. Statistical Analysis of Reactor Pressure Vessel Fluence Calculation Benchmark Data Using Multiple Regression Techniques

    International Nuclear Information System (INIS)

    Carew, John F.; Finch, Stephen J.; Lois, Lambros

    2003-01-01

    The calculated >1-MeV pressure vessel fluence is used to determine the fracture toughness and integrity of the reactor pressure vessel. It is therefore of the utmost importance to ensure that the fluence prediction is accurate and unbiased. In practice, this assurance is provided by comparing the predictions of the calculational methodology with an extensive set of accurate benchmarks. A benchmarking database is used to provide an estimate of the overall average measurement-to-calculation (M/C) bias in the calculations. This average is used as an ad hoc multiplicative adjustment to the calculations to correct for the observed calculational bias. However, this average only provides a well-defined and valid adjustment of the fluence if the M/C data are homogeneous; i.e., the data are statistically independent and there is no correlation between subsets of M/C data. Typically, the identification of correlations between the errors in the database M/C values is difficult because the correlation is of the same magnitude as the random errors in the M/C data and varies substantially over the database. In this paper, an evaluation of a reactor dosimetry benchmark database is performed to determine the statistical validity of the adjustment to the calculated pressure vessel fluence. Physical mechanisms that could potentially introduce a correlation between the subsets of M/C ratios are identified and included in a multiple regression analysis of the M/C data. Rigorous statistical criteria are used to evaluate the homogeneity of the M/C data and determine the validity of the adjustment. For the database evaluated, the M/C data are found to be strongly correlated with dosimeter response threshold energy and dosimeter location (e.g., cavity versus in-vessel).
    It is shown that because of the inhomogeneity in the M/C data, for this database, the benchmark data do not provide a valid basis for adjusting the pressure vessel fluence. The statistical criteria and methods employed in
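The kind of multiple regression described above can be sketched by regressing M/C ratios on dosimeter response threshold energy and a location dummy (cavity = 1, in-vessel = 0). The data below are synthetic with a built-in trend, not the benchmark database; a significant fitted coefficient on either predictor would indicate the inhomogeneity the paper discusses.

```python
# Sketch: multiple regression of M/C ratios on threshold energy and a
# cavity/in-vessel location dummy. Data are synthetic, not the database.
import numpy as np

rng = np.random.default_rng(0)
n = 40
threshold_mev = rng.uniform(0.5, 3.0, n)          # dosimeter thresholds
cavity = rng.integers(0, 2, n).astype(float)      # 1 = cavity, 0 = in-vessel
# Synthetic M/C with trends on both predictors plus random noise.
m_over_c = 0.95 + 0.02 * threshold_mev + 0.05 * cavity + rng.normal(0, 0.01, n)

# Design matrix: intercept, threshold energy, location dummy.
X = np.column_stack([np.ones(n), threshold_mev, cavity])
beta, *_ = np.linalg.lstsq(X, m_over_c, rcond=None)
print("intercept, E-slope, cavity offset:", np.round(beta, 3))
```

If the data were homogeneous, both fitted slopes would be statistically indistinguishable from zero and a single average M/C adjustment would be defensible.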

  15. Finite-element solutions of the AER-2 rod ejection benchmark by CRONOS

    International Nuclear Information System (INIS)

    Kolev, N.P.; Lenain, R.; Fedon-Magnaud, C.

    2001-01-01

    The finite-element option in CRONOS was used to analyse the AER-2 rod-ejection benchmark for WWER-440. The objective is to obtain spatially converged solutions by means of node subdivision and approximation refinement. This paper presents the first phase of the analysis, dealing with the initial and just-ejected states used for calculation of the initial reactivity. Fine-mesh and extrapolated-to-zero-mesh-size solutions were obtained and verified by comparison with MAG code solutions. The observed differences provide potential for large deviations in the transient results and deserve further attention in reactor safety analysis (Authors)

  16. Constructing Benchmark Databases and Protocols for Medical Image Analysis: Diabetic Retinopathy

    Directory of Open Access Journals (Sweden)

    Tomi Kauppi

    2013-01-01

    Full Text Available We address the performance evaluation practices for developing medical image analysis methods, in particular, how to establish and share databases of medical images with verified ground truth and solid evaluation protocols. Such databases support the development of better algorithms, execution of profound method comparisons, and, consequently, technology transfer from research laboratories to clinical practice. For this purpose, we propose a framework consisting of reusable methods and tools for the laborious task of constructing a benchmark database. We provide a software tool for medical image annotation that helps to collect class labels, spatial spans, and expert confidence on lesions, and a method to appropriately combine the manual segmentations from multiple experts. The tool and all necessary functionality for method evaluation are provided as public software packages. As a case study, we utilized the framework and tools to establish the DiaRetDB1 V2.1 database for benchmarking diabetic retinopathy detection algorithms. The database contains a set of retinal images, ground truth based on information from multiple experts, and a baseline algorithm for the detection of retinopathy lesions.

  17. Benchmarking Analysis between CONTEMPT and COPATTA Containment Codes

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Kwi Hyun; Song, Wan Jung [ENERGEO Inc. Sungnam, (Korea, Republic of); Song, Dong Soo; Byun, Choong Sup [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2006-07-01

    The containment requirement is that releases of radioactive materials subsequent to an accident do not result in doses in excess of the values specified in 10 CFR 100. The containment must withstand the pressure and temperature of the DBA (design basis accident), including margin, without exceeding the design leakage rate. COPATTA, Bechtel's vendor code, is used for the containment pressure and temperature prediction in the power uprating project for the Kori 3,4 and Yonggwang 1,2 nuclear power plants (NPPs). However, CONTEMPT-LT/028 is used for calculating the containment pressures and temperatures in the equipment qualification project for the same NPPs. During the benchmarking analysis between the two codes, it was found that the two codes have model differences. This paper shows the performance evaluation results arising from the main model differences.

  18. Benchmarking Analysis between CONTEMPT and COPATTA Containment Codes

    International Nuclear Information System (INIS)

    Seo, Kwi Hyun; Song, Wan Jung; Song, Dong Soo; Byun, Choong Sup

    2006-01-01

    The containment requirement is that releases of radioactive materials subsequent to an accident do not result in doses in excess of the values specified in 10 CFR 100. The containment must withstand the pressure and temperature of the DBA (design basis accident), including margin, without exceeding the design leakage rate. COPATTA, Bechtel's vendor code, is used for the containment pressure and temperature prediction in the power uprating project for the Kori 3,4 and Yonggwang 1,2 nuclear power plants (NPPs). However, CONTEMPT-LT/028 is used for calculating the containment pressures and temperatures in the equipment qualification project for the same NPPs. During the benchmarking analysis between the two codes, it was found that the two codes have model differences. This paper shows the performance evaluation results arising from the main model differences

  19. Boiling water reactor turbine trip (TT) benchmark. Volume II: Summary Results of Exercise 1

    International Nuclear Information System (INIS)

    Akdeniz, Bedirhan; Ivanov, Kostadin N.; Olson, Andy M.

    2005-06-01

    The OECD Nuclear Energy Agency (NEA) completed, under US Nuclear Regulatory Commission (NRC) sponsorship, a PWR main steam line break (MSLB) benchmark against coupled system three-dimensional (3-D) neutron kinetics and thermal-hydraulic codes. Another OECD/NRC coupled-code benchmark was recently completed for a BWR turbine trip (TT) transient and is the object of the present report. Turbine trip transients in a BWR are pressurisation events in which the coupling between core space-dependent neutronic phenomena and system dynamics plays an important role. The data made available from actual experiments carried out at the Peach Bottom 2 plant make the present benchmark particularly valuable. While defining and coordinating the BWR TT benchmark, a systematic approach and multi-level methodology not only allowed for a consistent and comprehensive validation process, but also contributed to the study of key parameters of pressurisation transients. The benchmark consists of three separate exercises, two initial states and five transient scenarios. The BWR TT Benchmark will be published in four volumes as NEA reports. CD-ROMs will also be prepared and will include the four reports and the transient boundary conditions, decay heat values as a function of time, cross-section libraries and supplementary tables and graphs not published in the paper version. BWR TT Benchmark - Volume I: Final Specifications was issued in 2001 [NEA/NSC/DOC(2001)]. The benchmark team [Pennsylvania State University (PSU) in co-operation with Exelon Nuclear and the NEA] has been responsible for coordinating benchmark activities, answering participant questions and assisting them in developing their models, as well as analysing submitted solutions and providing reports summarising the results for each phase. The benchmark team has also been involved in the technical aspects of the benchmark, including sensitivity studies for the different exercises. Volume II summarises the results for Exercise 1 of the

  20. The NEA benchmark study of the accident at the Fukushima Daiichi NPP

    International Nuclear Information System (INIS)

    Koganeya, Toshiyuki

    2015-01-01

    In November 2012, the NEA, under the aegis of the Committee on the Safety of Nuclear Installations (CSNI), initiated a joint research project called the Benchmark Study of the Accident at the Fukushima Daiichi Nuclear Power Station (BSAF). Objectives of this project include supporting Fukushima Daiichi decommissioning by analysing accident progression and the current status of the reactors, such as the fuel debris distribution in the reactor pressure vessels and primary containment vessels, in preparation for fuel debris removal. A second objective of the project is to improve severe accident (SA) codes through comparisons with data from the Fukushima reactors. To enhance communication between analysts and those involved in decommissioning activities, participants in the project have been discussing the remaining uncertainties in understanding the accident and the data needs from the viewpoint of the analysts. Since the accident sequences at the Fukushima Daiichi site include a wide range of phenomena, a phased approach is being applied in this benchmark exercise while awaiting more detailed information on debris examination and other factors. This article provides an overview of the project (scope, input data and boundary conditions, participants (from eight countries), analytical approach - common case and best estimate case) as well as an outline of the project's next phase (BSAF Phase 2), which begins in June 2015

  1. Benchmarking of refinery emissions performance : Executive summary

    International Nuclear Information System (INIS)

    2003-07-01

    This study was undertaken to collect emissions performance data for Canadian and comparable American refineries. The objective was to examine parameters that affect refinery air emissions performance and develop methods or correlations to normalize emissions performance. Another objective was to correlate and compare the performance of Canadian refineries to comparable American refineries. For the purpose of this study, benchmarking involved the determination of levels of emission performance that are being achieved for generic groups of facilities. A total of 20 facilities were included in the benchmarking analysis, and 74 American refinery emission correlations were developed. The recommended benchmarks, and the application of those correlations for comparison between Canadian and American refinery performance, were discussed. The benchmarks were: sulfur oxides, nitrogen oxides, carbon monoxide, particulate, volatile organic compounds, ammonia and benzene. For each refinery in Canada, benchmark emissions were developed. Several factors can explain differences in Canadian and American refinery emission performance. 4 tabs., 7 figs
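    The study above develops correlations to normalize emissions so that refineries of different sizes and configurations can be compared. A minimal sketch of one such normalization, emission intensity per unit throughput, with entirely hypothetical refinery names and figures (the study's actual 74 correlations are more elaborate):

```python
# Hypothetical refinery records: annual SOx emissions (tonnes) and
# crude throughput (thousand barrels per day).  Names and numbers are
# illustrative, not the study's data.
refineries = {
    "A": {"sox_t": 1200.0, "throughput_kbpd": 100.0},
    "B": {"sox_t": 800.0,  "throughput_kbpd": 80.0},
    "C": {"sox_t": 450.0,  "throughput_kbpd": 60.0},
}

# Normalize: tonnes of SOx per thousand barrels/day of throughput, so
# refineries of different sizes become directly comparable.
intensity = {name: r["sox_t"] / r["throughput_kbpd"]
             for name, r in refineries.items()}

# A simple group benchmark: the best (lowest) observed intensity.
benchmark = min(intensity.values())
gap = {name: v / benchmark for name, v in intensity.items()}
print(intensity)  # A: 12.0, B: 10.0, C: 7.5
print(gap)        # each refinery's ratio to the best performer
```

    The same pattern extends to the other benchmarked pollutants (NOx, CO, particulate, VOCs, ammonia, benzene) with pollutant-specific normalizing parameters.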

  2. International collaborative fire modeling project (ICFMP). Summary of benchmark

    International Nuclear Information System (INIS)

    Roewekamp, Marina; Klein-Hessling, Walter; Dreisbach, Jason; McGrattan, Kevin; Miles, Stewart; Plys, Martin; Riese, Olaf

    2008-09-01

    This document was developed in the frame of the 'International Collaborative Project to Evaluate Fire Models for Nuclear Power Plant Applications' (ICFMP). The objective of this collaborative project is to share the knowledge and resources of various organizations to evaluate and improve the state of the art of fire models for use in nuclear power plant fire safety, fire hazard analysis and fire risk assessment. The project is divided into two phases. The objective of the first phase is to evaluate the capabilities of current fire models for fire safety analysis in nuclear power plants. The second phase will extend the validation database of those models and implement beneficial improvements to the models that are identified in the first phase of ICFMP. In the first phase, more than 20 expert institutions from six countries were represented in the collaborative project. This Summary Report gives an overview of the results of the first phase of the international collaborative project. The main objective of the project was to evaluate the capability of fire models to analyze a variety of fire scenarios typical for nuclear power plants (NPP). The evaluation of the capability of fire models to analyze these scenarios was conducted through a series of five international Benchmark Exercises. Different types of models were used by the participating expert institutions from five countries. The technical information that will be useful for fire model users, developers and further experts is summarized in this document. More detailed information is provided in the corresponding technical reference documents for the ICFMP Benchmark Exercises No. 1 to 5. The objective of these exercises was not to compare the capabilities and strengths of specific models, to address issues specific to a model, or to recommend specific models over others. This document is not intended to provide guidance to users of fire models. Guidance on the use of fire models is currently being

  3. Benchmarking high performance computing architectures with CMS’ skeleton framework

    Science.gov (United States)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high-throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel’s Thread Building Block library, based on the measured memory and CPU overheads of the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures; machines such as Cori Phase 1&2, Theta, Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  4. Benchmark enclosure fire suppression experiments - phase 1 test report.

    Energy Technology Data Exchange (ETDEWEB)

    Figueroa, Victor G.; Nichols, Robert Thomas; Blanchat, Thomas K.

    2007-06-01

    A series of benchmark fire water-suppression tests was performed that may provide guidance for dispersal systems for the protection of high-value assets. The test results provide the boundary and temporal data necessary for water spray suppression model development and validation. A review of fire suppression is presented for both gaseous suppression and water mist fire suppression. The experimental setup and procedure for gathering water suppression performance data are shown. Characteristics of the nozzles used in the testing are presented. Results of the experiments are discussed.

  5. The Data Envelopment Analysis Method in Benchmarking of Technological Incubators

    Directory of Open Access Journals (Sweden)

    Bożena Kaczmarska

    2010-01-01

    Full Text Available This paper presents an original concept for the application of Data Envelopment Analysis (DEA) in benchmarking processes within innovation and entrepreneurship centers based on the example of technological incubators. Applying the DEA method, it is possible to order analyzed objects, on the basis of explicitly defined relative efficiency, by compiling a rating list and rating classes. Establishing standards and indicating “clearances” allows the studied objects - innovation and entrepreneurship centers - to select a way of developing effectively, as well as preserving their individuality and a unique way of acting with the account of local needs. (original abstract)
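    DEA, as used above, ranks decision-making units by a relative efficiency obtained from a linear program. Here is a minimal sketch of the standard input-oriented CCR envelopment model (a textbook DEA formulation, not necessarily the exact variant used in the paper), with made-up incubator data:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative incubator data (not from the paper): two inputs
# (staff, floor space) and one output (tenant firms created).
X = np.array([[8.0, 600.0],    # inputs, one row per incubator
              [6.0, 400.0],
              [10.0, 900.0],
              [4.0, 500.0]])
Y = np.array([[40.0],          # outputs, one row per incubator
              [30.0],
              [44.0],
              [24.0]])
n = X.shape[0]

def ccr_efficiency(o):
    """Input-oriented CCR efficiency of unit `o` (1.0 = on the frontier)."""
    c = np.zeros(n + 1)
    c[0] = 1.0                               # minimise theta
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):              # inputs: sum(lam*x) <= theta*x_o
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(Y.shape[1]):              # outputs: sum(lam*y) >= y_o
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

scores = [round(ccr_efficiency(o), 3) for o in range(n)]
print(scores)  # efficiency per incubator; 1.0 marks the frontier units
```

    Sorting the resulting scores gives exactly the rating list the abstract describes, and the gap to 1.0 quantifies the "clearance" an inefficient center must close.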

  6. VVER-1000 coolant transient benchmark. Phase 1 (V1000CT-1). Vol. 3: summary results of exercise 2 on coupled 3-D kinetics/core thermal-hydraulics

    International Nuclear Information System (INIS)

    2007-01-01

    In the field of coupled neutronics/thermal-hydraulics computation there is a need to enhance scientific knowledge in order to develop advanced modelling techniques for new nuclear technologies and concepts, as well as current applications. Recently developed best-estimate computer code systems for modelling 3-D coupled neutronics/thermal-hydraulics transients in nuclear cores and for the coupling of core phenomena and system dynamics need to be compared against each other and validated against results from experiments. International benchmark studies have been set up for this purpose. The present volume is a follow-up to the first two volumes. While the first described the specification of the benchmark, the second presented the results of the first exercise, which identified the key parameters and important issues concerning the thermal-hydraulic system modelling of the simulated transient caused by the switching on of a main coolant pump while the other three were in operation. Volume 3 summarises the results for Exercise 2 of the benchmark, which identifies the key parameters and important issues concerning the 3-D neutron kinetics modelling of the simulated transient. These studies are based on an experiment that was conducted by Bulgarian and Russian engineers during the plant-commissioning phase at the VVER-1000 Kozloduy Unit 6. The final volume will soon be published, completing Phase 1 of this study. (authors)

  7. Lesson learned from the SARNET wall condensation benchmarks

    International Nuclear Information System (INIS)

    Ambrosini, W.; Forgione, N.; Merli, F.; Oriolo, F.; Paci, S.; Kljenak, I.; Kostka, P.; Vyskocil, L.; Travis, J.R.; Lehmkuhl, J.; Kelm, S.; Chin, Y.-S.; Bucci, M.

    2014-01-01

    Highlights: • The results of the benchmarking activity on wall condensation are reported. • The work was performed in the frame of SARNET. • General modelling techniques for condensation are discussed. • Results of the University of Pisa and of other benchmark participants are discussed. • The lessons learned are drawn. - Abstract: The prediction of condensation in the presence of noncondensable gases has received continuing attention in the frame of the Severe Accident Research Network of Excellence, both in the first (2004–2008) and in the second (2009–2013) EC integrated projects. Among the different reasons for considering this basic phenomenon so relevant, addressed as it was by classical treatments dating from the first decades of the last century, there is the interest in developing updated CFD models for reactor containment analysis, which requires validating the available modelling techniques at a different level. In the frame of SARNET, benchmarking activities were undertaken taking advantage of the work performed at different institutions in setting up and developing models for steam condensation in conditions of interest for nuclear reactor containment. Four steps were performed in the activity, involving: (1) an idealized problem freely inspired by the actual conditions occurring in an experimental facility, CONAN, installed at the University of Pisa; (2) a first comparison with experimental data purposely collected with the CONAN facility; (3) a second comparison with data available from experimental campaigns performed in the same apparatus before the inclusion of the activities in SARNET; (4) a third exercise involving data obtained at lower mixture velocity than in previous campaigns, aimed at providing conditions closer to those addressed in reactor containment analyses. The last step of the benchmarking activity required changing the configuration of the experimental apparatus to achieve the lower flow rates involved in the new test specifications. The

  8. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    Full Text Available The term benchmarking is encountered in the implementation of total quality management (TQM), termed holistic quality management in Indonesian, because benchmarking is a tool for finding ideas and learning from other libraries. Benchmarking is a process of systematic and continuous measurement: measuring and comparing an organization's business processes against those of others to obtain information that can help the organization improve its performance.

  9. The level 1 and 2 specification for parallel benchmark and a benchmark test of scalar-parallel computer SP2 based on the specifications

    International Nuclear Information System (INIS)

    Orii, Shigeo

    1998-06-01

    A benchmark specification for performance evaluation of parallel computers for numerical analysis is proposed. The Level 1 benchmark, a conventional benchmark based on processing time, measures the performance of computers running a code. The Level 2 benchmark proposed in this report explains the reasons for that performance. As an example, the scalar-parallel computer SP2 is evaluated with this benchmark specification using a molecular dynamics code. As a result, the main causes suppressing the parallel performance are found to be the maximum bandwidth and the start-up time of communication between nodes. In particular, the start-up time is proportional not only to the number of processors but also to the number of particles. (author)
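    The two levels described above can be illustrated with a sketch: a Level-1 style wall-clock measurement of a kernel, and a Level-2 style latency-bandwidth model that attributes communication cost to start-up time versus bandwidth. The model parameters (20 µs start-up, 100 MB/s bandwidth) and the kernel are illustrative assumptions, not SP2 figures:

```python
import time

# Level-1 style measurement: wall-clock time of a compute kernel.
def level1_time(n):
    t0 = time.perf_counter()
    s = 0.0
    for i in range(n):           # stand-in for a numerical kernel
        s += i * 0.5
    return time.perf_counter() - t0, s

elapsed, _ = level1_time(200_000)

# Level-2 style model: attribute the parallel cost to its causes.
# Communication time per message is modelled as startup + size/bandwidth;
# the abstract notes that start-up time grows with both the number of
# processors and the number of particles.
def comm_time(messages, bytes_per_msg, startup_s=20e-6, bandwidth_Bps=100e6):
    return messages * (startup_s + bytes_per_msg / bandwidth_Bps)

# Many small messages: start-up-dominated (the bottleneck identified).
small = comm_time(messages=10_000, bytes_per_msg=64)
# The same data volume in few large messages: bandwidth-dominated.
large = comm_time(messages=10, bytes_per_msg=64_000)
print(f"kernel: {elapsed:.4f}s  small-msg comm: {small:.4f}s  "
      f"large-msg comm: {large:.6f}s")
```

    Under this simple model the same 640 kB costs roughly 30 times more when split into 64-byte messages, which is the kind of explanation a Level 2 benchmark is meant to provide.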

  10. Analysis and sensitivity studies with CORETRAN and RETRAN-3D of the NEACRP PWR rod ejection benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Ferroukhi, H.; Coddington, P. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    2001-07-01

    The OECD/NEA PWR rod ejection benchmark has been analysed using the 3-D nodal spatial-kinetic codes CORETRAN and RETRAN-3D. The following results were obtained. A) The agreement in 3-D solution between CORETRAN and RETRAN-3D was found to be very good both during steady-state and transient conditions. In particular at HZP (hot zero power), an excellent agreement in the initial steady-state 3-D power distribution and with regard to the core power excursion during the super-prompt critical phase of the transient (i.e. when the negative reactivity feedback is still very weak) was found. This illustrates the consistency in the neutronic solution between both codes. B) At both HZP and FP (full power) conditions, the CORETRAN and RETRAN-3D results lie well within the range of the previous benchmark solutions. In particular at HZP, both codes predict a power excursion and an increase in maximum pellet temperature that are among the closest results to those obtained with the benchmark reference solution. It must here be emphasised that these analyses are by no means a validation of the codes. However, the good agreement of both CORETRAN and RETRAN-3D with other 3-D solutions provides confidence in the ability of these codes to analyse LWR (light water reactor) core transients. In addition, it was found appropriate to perform, for this well-defined international benchmark problem, some sensitivity studies in order to assess the impact of modelling options on the CORETRAN and RETRAN-3D results. (authors)

  11. Analysis and sensitivity studies with CORETRAN and RETRAN-3D of the NEACRP PWR rod ejection benchmark

    International Nuclear Information System (INIS)

    Ferroukhi, H.; Coddington, P.

    2001-01-01

    The OECD/NEA PWR rod ejection benchmark has been analysed using the 3-D nodal spatial-kinetic codes CORETRAN and RETRAN-3D. The following results were obtained. A) The agreement in 3-D solution between CORETRAN and RETRAN-3D was found to be very good both during steady-state and transient conditions. In particular at HZP (hot zero power), an excellent agreement in the initial steady-state 3-D power distribution and with regard to the core power excursion during the super-prompt critical phase of the transient (i.e. when the negative reactivity feedback is still very weak) was found. This illustrates the consistency in the neutronic solution between both codes. B) At both HZP and FP (full power) conditions, the CORETRAN and RETRAN-3D results lie well within the range of the previous benchmark solutions. In particular at HZP, both codes predict a power excursion and an increase in maximum pellet temperature that are among the closest results to those obtained with the benchmark reference solution. It must here be emphasised that these analyses are by no means a validation of the codes. However, the good agreement of both CORETRAN and RETRAN-3D with other 3-D solutions provides confidence in the ability of these codes to analyse LWR (light water reactor) core transients. In addition, it was found appropriate to perform, for this well-defined international benchmark problem, some sensitivity studies in order to assess the impact of modelling options on the CORETRAN and RETRAN-3D results. (authors)

  12. Benchmarking and Performance Measurement.

    Science.gov (United States)

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  13. Results of neutronic benchmark analysis for a high temperature reactor of the GT-MHR type - HTR2008-58107

    International Nuclear Information System (INIS)

    Boyarinov, V. F.; Bryzgalov, V. I.; Davidenko, V. D.; Fomichenko, P. A.; Glushkov, E. S.; Gomin, E. A.; Gurevich, M. I.; Kodochigov, N. G.; Marova, E. V.; Mitenkova, E. F.; Novikov, N. V.; Osipov, S. L.; Sukharev, Y. P.; Tsibulsky, V. F.; Yudkevich, M. S.

    2008-01-01

    The paper presents a description of the benchmark cases, the achieved results, and an analysis of possible reasons for differences in the calculation results obtained by various neutronic codes. The comparative analysis presented shows the benchmark results obtained with reference and design codes by Russian specialists (WIMS-D, JAR-HTGR, UNK, MCU, MCNP5-MONTEBURNS1.0-ORIGEN2.0), by French specialists (APOLLO2, TRIPOLI4 codes), and by Korean specialists (HELIOS, MASTER, MCNP5 codes). The analysis of possible reasons for deviations was carried out, aimed at decreasing the uncertainties in the calculated characteristics. This additional investigation was conducted with the use of 2D models of a fuel assembly cell and a reactor plane section. (authors)

  14. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Full Text Available Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
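    The two metric families proposed above, acceptance thresholds and a scoring system combining data-model mismatches, can be sketched as follows; the variable names, weights, threshold, and the normalized-RMSE choice are illustrative assumptions, not the framework's definitions:

```python
import math

# Hypothetical benchmark: observed vs simulated values for two
# processes (names and numbers are illustrative, not from the paper).
benchmarks = {
    "gpp":         {"obs": [10.2, 11.0, 9.5],  "sim": [9.8, 11.4, 9.0]},
    "latent_heat": {"obs": [80.0, 95.0, 70.0], "sim": [92.0, 99.0, 75.0]},
}
weights = {"gpp": 0.6, "latent_heat": 0.4}   # assumed relative importance

def nrmse(obs, sim):
    """RMSE normalised by the mean observation, so processes with
    different units can be combined into one score."""
    mse = sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs)
    return math.sqrt(mse) / (sum(obs) / len(obs))

# Metric family (1): per-process threshold of acceptable performance.
per_var = {k: nrmse(v["obs"], v["sim"]) for k, v in benchmarks.items()}
acceptable = {k: err < 0.10 for k, err in per_var.items()}

# Metric family (2): one weighted score across processes (0 = perfect).
score = sum(weights[k] * err for k, err in per_var.items())
print(per_var, acceptable, round(score, 4))
```

    A real framework would extend this across many variables, sites, and time scales, but the structure (per-process mismatch, threshold test, weighted aggregation) is the same.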

  15. BWR stability analysis: methodology of the stability analysis and results of PSI for the NEA/NCR benchmark task

    International Nuclear Information System (INIS)

    Hennig, D.; Nechvatal, L.

    1996-09-01

    The report describes the PSI stability analysis methodology and the validation of this methodology based on the international OECD/NEA BWR stability benchmark task. In the frame of this work, the stability properties of some operation points of the NPP Ringhals 1 have been analysed and compared with the experimental results. (author) figs., tabs., 45 refs

  16. Comparison of typical inelastic analysis predictions with benchmark problem experimental results

    International Nuclear Information System (INIS)

    Clinard, J.A.; Corum, J.M.; Sartory, W.K.

    1975-01-01

    The results of exemplary inelastic analyses are presented for a series of experimental benchmark problems. Consistent analytical procedures and constitutive relations were used in each of the analyses, and published material behavior data were used in all cases. Two finite-element inelastic computer programs were employed. These programs implement the analysis procedures and constitutive equations for Type 304 stainless steel that are currently used in many analyses of elevated-temperature nuclear reactor system components. The analysis procedures and constitutive relations are briefly discussed, and representative analytical results are presented and compared to the test data. The results that are presented demonstrate the feasibility of performing inelastic analyses, and they are indicative of the general level of agreement that the analyst might expect when using conventional inelastic analysis procedures. (U.S.)

  17. MCNP analysis of the nine-cell LWR gadolinium benchmark

    International Nuclear Information System (INIS)

    Arkuszewski, J.J.

    1988-01-01

    The Monte Carlo results for a 9-cell fragment of the light water reactor square lattice with a central gadolinium-loaded pin are presented. The calculations are performed with the code MCNP-3A and the ENDF-B/5 library and compared with the results obtained from the BOXER code system and the JEF-1 library. The objective of this exercise is to study the feasibility of BOXER for the analysis of a Gd-loaded LWR lattice in the broader framework of GAP International Benchmark Analysis. A comparison of results indicates that, apart from unavoidable discrepancies originating from different data evaluations, the BOXER code overestimates the multiplication factor by 1.4 % and underestimates the power release in a Gd cell by 4.66 %. It is hoped that further similar studies with use of the JEF-1 library for both BOXER and MCNP will help to isolate and explain these discrepancies in a cleaner way. (author) 4 refs., 9 figs., 10 tabs

  18. Benchmarking in the Netherlands

    International Nuclear Information System (INIS)

    1999-01-01

    In two articles an overview is given of the activities in the Dutch industry and energy sector with respect to benchmarking. In benchmarking, the operational processes of competing businesses are compared in order to improve one's own performance. Benchmark covenants for energy efficiency between the Dutch government and industrial sectors are contributing to a growing number of benchmark surveys in the energy-intensive industry in the Netherlands. However, some doubt the effectiveness of the benchmark studies

  19. Evaluation of PWR and BWR pin cell benchmark results

    International Nuclear Information System (INIS)

    Pijlgroms, B.J.; Gruppelaar, H.; Janssen, A.J.; Hoogenboom, J.E.; Leege, P.F.A. de; Voet, J. van der; Verhagen, F.C.M.

    1991-12-01

    Benchmark results of the Dutch PINK working group on PWR and BWR pin cell calculational benchmark as defined by EPRI are presented and evaluated. The observed discrepancies are problem dependent: a part of the results is satisfactory, some other results require further analysis. A brief overview is given of the different code packages used in this analysis. (author). 14 refs., 9 figs., 30 tabs

  20. Evaluation of PWR and BWR pin cell benchmark results

    Energy Technology Data Exchange (ETDEWEB)

    Pijlgroms, B.J.; Gruppelaar, H.; Janssen, A.J. (Netherlands Energy Research Foundation (ECN), Petten (Netherlands)); Hoogenboom, J.E.; Leege, P.F.A. de (Interuniversitair Reactor Inst., Delft (Netherlands)); Voet, J. van der (Gemeenschappelijke Kernenergiecentrale Nederland NV, Dodewaard (Netherlands)); Verhagen, F.C.M. (Keuring van Electrotechnische Materialen NV, Arnhem (Netherlands))

    1991-12-01

    Benchmark results of the Dutch PINK working group on PWR and BWR pin cell calculational benchmark as defined by EPRI are presented and evaluated. The observed discrepancies are problem dependent: a part of the results is satisfactory, some other results require further analysis. A brief overview is given of the different code packages used in this analysis. (author). 14 refs., 9 figs., 30 tabs.

  2. Strategic behaviour under regulatory benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Jamasb, T. [Cambridge Univ. (United Kingdom). Dept. of Applied Economics; Nillesen, P. [NUON NV (Netherlands); Pollitt, M. [Cambridge Univ. (United Kingdom). Judge Inst. of Management

    2004-09-01

    In order to improve the efficiency of electricity distribution networks, some regulators have adopted incentive regulation schemes that rely on performance benchmarking. Although regulatory benchmarking can influence the 'regulation game', the subject has received limited attention. This paper discusses how strategic behaviour by firms can result in inefficient outcomes. We then use the Data Envelopment Analysis (DEA) method with US utility data to examine the implications of illustrative cases of strategic behaviour reported by regulators. The results show that gaming can have significant effects on the measured performance and profitability of firms. (author)

  3. Performance Based Clustering for Benchmarking of Container Ports: an Application of Dea and Cluster Analysis Technique

    Directory of Open Access Journals (Sweden)

    Jie Wu

    2010-12-01

    The operational performance of container ports has received increasing attention in both academic and practitioner circles, and the performance evaluation and process improvement of container ports have been the focus of several studies. In this paper, Data Envelopment Analysis (DEA), an effective tool for relative efficiency assessment, is utilized for measuring the performance and benchmarking of 77 world container ports in 2007. The approach used in the current study considers four inputs (capacity of cargo handling machines, number of berths, terminal area and storage capacity) and a single output (container throughput). The results for the efficiency scores are analyzed, and a unique ordering of the ports based on average cross-efficiency is provided; cluster analysis is then used to select more appropriate targets for poorly performing ports to use as benchmarks.
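    As an illustration of the envelopment-form DEA used in such port studies, the sketch below computes input-oriented CCR efficiency scores with `scipy.optimize.linprog`. The port data are hypothetical stand-ins, not the 77-port data set of the study.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency (envelopment form) for each DMU.

    X: (n_dmu, n_inputs) input matrix, Y: (n_dmu, n_outputs) output matrix.
    Returns radial efficiency scores in (0, 1].
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for k in range(n):
        # Decision variables: [theta, lambda_1..lambda_n]; minimize theta.
        c = np.r_[1.0, np.zeros(n)]
        # Inputs:  sum_j lambda_j * x_ij - theta * x_ik <= 0
        A_in = np.hstack([-X[k].reshape(m, 1), X.T])
        # Outputs: -sum_j lambda_j * y_rj <= -y_rk
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[k]],
                      bounds=[(0, None)] * (n + 1),
                      method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# Hypothetical port data: inputs = (berths, cranes), output = throughput (kTEU)
ports_X = np.array([[10.0, 5.0],
                    [20.0, 10.0],
                    [12.0, 9.0]])
ports_Y = np.array([[300.0], [300.0], [260.0]])
eff = dea_ccr_input(ports_X, ports_Y)  # port 1 uses twice port 0's inputs -> 0.5
```

An efficiency of 1 marks a port on the efficient frontier; poorly performing ports can then be clustered against their frontier peers, as the abstract describes.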

  4. CCF-RBE common cause failure reliability benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.; Amendola, A.; Cacciabue, P.C.

    1987-01-01

    This report summarizes the results obtained by the participants in the Reliability Benchmark Exercise on Common Cause Failures (CCF-RBE). The reference power plant of the CCF-RBE was the NPP at Grohnde (KWG), a 1300 MW PWR plant of KWU design operated by the utility Preussen Elektra. The systems studied were the Start-up and Shut-down system (RR/RL) and the Emergency Feedwater System (RS), both of which can feed water into the steam generators in the emergency power mode. The CCF-RBE was organized in two phases: 1. the first phase, during which all participants performed an analysis of the complete system as defined by the assumed boundaries, i.e. the Start-up and Shut-down system (RR/RL) and the Emergency Feedwater System (RS); 2. the second phase, in which the scope was limited to the RS system. This limitation in scope was agreed upon in the discussion of the results of the first phase, which showed that, within the boundaries of the exercise, the RR/RL and RS systems could be considered independent of each other. This report gives an overview of the work carried out, the results obtained, and the conclusions and lessons that could be drawn from the CCF-RBE

  5. Benchmarking electricity distribution

    Energy Technology Data Exchange (ETDEWEB)

    Watts, K. [Department of Justice and Attorney-General, QLD (Australia)

    1995-12-31

    Benchmarking has been described as a method of continuous improvement that involves an ongoing and systematic evaluation and incorporation of external products, services and processes recognised as representing best practice. It is a management tool similar to total quality management (TQM) and business process re-engineering (BPR), and is best used as part of a total package. This paper discusses benchmarking models and approaches and suggests a few key performance indicators that could be applied to benchmarking electricity distribution utilities. Some recent benchmarking studies are used as examples and briefly discussed. It is concluded that benchmarking is a strong tool to be added to the range of techniques that can be used by electricity distribution utilities and other organizations in search of continuous improvement, and that there is now a high level of interest in Australia. Benchmarking represents an opportunity for organizations to approach learning from others in a disciplined and highly productive way, which will complement the other micro-economic reforms being implemented in Australia. (author). 26 refs.

  6. An analysis of the CSNI/GREST core concrete interaction chemical thermodynamic benchmark exercise using the MPEC2 computer code

    International Nuclear Information System (INIS)

    Muramatsu, Ken; Kondo, Yasuhiko; Uchida, Masaaki; Soda, Kunihisa

    1989-01-01

    Fission product (FP) release during a core concrete interaction (CCI) is an important contributor to the uncertainty associated with source term estimation for an LWR severe accident. An analysis was made of the CCI Chemical Thermodynamic Benchmark Exercise organized by the OECD/NEA/CSNI Group of Experts on Source Terms (GREST) for investigating the uncertainty in thermodynamic modeling of CCI. The benchmark exercise was to calculate the equilibrium FP vapor pressure for a given system temperature, pressure, and debris composition. The benchmark consisted of two parts, A and B. Part A was a simplified problem intended to test the numerical techniques. In Part B, the participants were requested to use their own best-estimate thermodynamic data base to examine the variability of the results due to differences in the thermodynamic data base. JAERI participated in this benchmark exercise using the MPEC2 code. The chemical thermodynamic data base needed for the analysis of Part B was taken from the VENESA code. This report describes the computer code used, the inputs to the code, and the results of the calculation by JAERI. The present calculation indicates that the FP vapor pressure depends strongly on temperature and oxygen potential in the core debris, and that the pattern of dependency may differ for different FP elements. (author)

  7. Detection of Weak Spots in Benchmarks Memory Space by using PCA and CA

    Directory of Open Access Journals (Sweden)

    Abdul Kareem PARCHUR

    2010-12-01

    This paper identifies the weak spots in the SPEC CPU INT 2006 benchmark memory space using Principal Component Analysis (PCA) and Cluster Analysis (CA). We used recently published SPEC CPU INT 2006 benchmark scores of AMD Opteron 2000+ and AMD Opteron 8000+ series processors. The four most significant PCs are retained: PC1 accounts for 72.6% of the variance, while PC2, PC3, and PC4 cover 26.5%, 0.91%, and 0.019%, respectively. The dendrogram is useful for identifying the similarities and dissimilarities between the benchmarks in the workload space. These results and analyses can be used by performance engineers, scientists and developers to better understand benchmark behavior in the workload space and to design a benchmark suite that covers the complete workload space.
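    A minimal sketch of the PCA step behind this kind of workload-space analysis, assuming only a matrix of benchmark scores (the layout and data below are illustrative, not the SPEC results of the paper): eigen-decomposition of the covariance matrix yields the explained-variance ratios discussed above.

```python
import numpy as np

def pca_explained(scores):
    """Principal component analysis via eigen-decomposition of the covariance
    matrix. `scores` is an (observations x variables) matrix, e.g. benchmark
    scores across processor configurations. Returns the explained-variance
    ratios (descending) and the corresponding principal components."""
    X = scores - scores.mean(axis=0)         # centre each variable
    cov = np.cov(X, rowvar=False)            # variables in columns
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]        # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    return eigvals / eigvals.sum(), eigvecs

# Hypothetical scores: 6 benchmarks measured on 3 machine configurations
rng = np.random.default_rng(0)
base = rng.normal(size=(6, 1))
scores = np.hstack([base,
                    2.0 * base + 0.01 * rng.normal(size=(6, 1)),
                    0.5 * base])
ratio, components = pca_explained(scores)    # ratio[0] dominates here
```

Benchmarks with similar loadings on the leading PCs would then be grouped by hierarchical clustering to build the dendrogram mentioned in the abstract.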

  8. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  9. Interior beam searchlight semi-analytical benchmark

    International Nuclear Information System (INIS)

    Ganapol, Barry D.; Kornreich, Drew E.

    2008-01-01

    Multidimensional semi-analytical benchmarks that provide highly accurate standards for assessing routine numerical particle transport algorithms are few and far between. Because of the well-established 1D theory for the analytical solution of the transport equation, it is sometimes possible to 'bootstrap' a 1D solution to generate a more comprehensive solution representation. Here, we consider the searchlight problem (SLP) as a multidimensional benchmark. A variation of the usual SLP is the interior beam SLP (IBSLP), where a beam source lies beneath the surface of a half space and emits directly towards the free surface. We consider the establishment of a new semi-analytical benchmark based on a new FN formulation. This problem is important in radiative transfer experimental analysis for determining cloud absorption and scattering properties. (authors)

  10. Benchmarking multi-dimensional large strain consolidation analyses

    International Nuclear Information System (INIS)

    Priestley, D.; Fredlund, M.D.; Van Zyl, D.

    2010-01-01

    Analyzing the consolidation of tailings slurries and dredged fills requires a more extensive formulation than is used for common (small strain) consolidation problems. Large strain consolidation theories have traditionally been limited to 1-D formulations. SoilVision Systems has developed the capacity to analyze large strain consolidation problems in 2-D and 3-D. The benchmarking of such formulations is not a trivial task. This paper presents several examples of modeling large strain consolidation in the beta versions of the new software. These examples were taken from the literature and were used to benchmark the large strain formulation used by the new software. The benchmarks reported here are: a comparison to the consolidation software application CONDES0, Townsend's Scenario B, and a multi-dimensional analysis of long-term column tests performed on oil sands tailings. All three of these benchmarks were attained using the SVOffice suite. (author)

  11. RUNE benchmarks

    DEFF Research Database (Denmark)

    Peña, Alfredo

    This report contains the description of a number of benchmarks with the purpose of evaluating flow models for near-shore wind resource estimation. The benchmarks are designed based on the comprehensive database of observations that the RUNE coastal experiment established from onshore lidar...

  12. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.

    1991-01-01

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems

  13. Hospital benchmarking: are U.S. eye hospitals ready?

    Science.gov (United States)

    de Korne, Dirk F; van Wijngaarden, Jeroen D H; Sol, Kees J C A; Betz, Robert; Thomas, Richard C; Schein, Oliver D; Klazinga, Niek S

    2012-01-01

    Benchmarking is increasingly considered a useful management instrument to improve quality in health care, but little is known about its applicability in hospital settings. The aims of this study were to assess the applicability of a benchmarking project in U.S. eye hospitals and compare the results with an international initiative. We evaluated multiple cases by applying an evaluation frame abstracted from the literature to five U.S. eye hospitals that used a set of 10 indicators for efficiency benchmarking. Qualitative analysis entailed 46 semistructured face-to-face interviews with stakeholders, document analyses, and questionnaires. The case studies only partially met the conditions of the evaluation frame. Although learning and quality improvement were stated as overall purposes, the benchmarking initiative was at first focused on efficiency only. No ophthalmic outcomes were included, and clinicians were skeptical about their reporting relevance and disclosure. However, in contrast with earlier findings in international eye hospitals, all U.S. hospitals worked with internal indicators that were integrated in their performance management systems and supported benchmarking. Benchmarking can support performance management in individual hospitals. Having a certain number of comparable institutes provide similar services in a noncompetitive milieu seems to lay fertile ground for benchmarking. International benchmarking is useful only when these conditions are not met nationally. Although the literature focuses on static conditions for effective benchmarking, our case studies show that it is a highly iterative and learning process. The journey of benchmarking seems to be more important than the destination. Improving patient value (health outcomes per unit of cost) requires, however, an integrative perspective where clinicians and administrators closely cooperate on both quality and efficiency issues. If these worlds do not share such a relationship, the added

  14. Determining the sensitivity of Data Envelopment Analysis method used in airport benchmarking

    Directory of Open Access Journals (Sweden)

    Mircea BOSCOIANU

    2013-03-01

    In the last decade there were some important changes in the airport industry, caused by the liberalization of the air transportation market. Until recently, airports were considered infrastructure elements and were evaluated only by traffic values or their maximum capacity. A gradual orientation towards commercial operation led to the need to find other, more efficiency-oriented ways of evaluation. The existing efficiency assessment methods used for other production units were not suitable for airports due to the specific features and high complexity of airport operations. In recent years, several papers have proposed Data Envelopment Analysis as a method for assessing operational efficiency in order to conduct benchmarking. This method offers the possibility of dealing with a large number of variables of different types, which represents its main advantage and also recommends it as a good benchmarking tool for airport management. The goal of this paper is to determine the sensitivity of this method in relation to its inputs and outputs. A Data Envelopment Analysis is conducted for 128 airports worldwide, in both input- and output-oriented measures, and the results are analysed against variations of some inputs and outputs. Possible weaknesses of using DEA for assessing airport performance are revealed and analysed against the method's advantages.

  15. Comparison of typical inelastic analysis predictions with benchmark problem experimental results

    International Nuclear Information System (INIS)

    Clinard, J.A.; Corum, J.M.; Sartory, W.K.

    1975-01-01

    The results of exemplary inelastic analyses for experimental benchmark problems on reactor components are presented. Consistent analytical procedures and constitutive relations were used in each of the analyses, and the material behavior data presented in the Appendix were used in all cases. Two finite-element inelastic computer programs were employed. These programs implement the analysis procedures and constitutive equations for type 304 stainless steel that are currently used in many analyses of elevated-temperature nuclear reactor system components. The analysis procedures and constitutive relations are briefly discussed, and representative analytical results are presented and compared to the test data. The results that are presented demonstrate the feasibility of performing inelastic analyses for the types of problems discussed, and they are indicative of the general level of agreement that the analyst might expect when using conventional inelastic analysis procedures. (U.S.)

  16. Comparison between RELAP5 versions for a two-phase natural circulation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Braz Filho, Francisco A.; Ribeiro, Guilherme B.; Sabundjian, Gaianê; Caldeira, Alexandre D., E-mail: fbraz@ieav.cta.br, E-mail: gbribeiro@ieav.cta.br, E-mail: alexdc@ieav.cta.br, E-mail: gdjian@ipen.br [Instituto de Estudos Avançados (IEAv), São José dos Campos, SP (Brazil). Div. de Energia Nuclear; Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2017-11-01

    RELAP5 is one of the most widely used numerical tools for predicting thermal-hydraulic and neutronic phenomena in nuclear reactors. RELAP5-3D is the latest version of this software family, but RELAP5-mod3 is still widely used in Brazilian research institutes and also serves as a benchmark for several nuclear applications. Among these applications, the use of passive heat transfer mechanisms, such as natural circulation, has drawn the attention of several studies, especially after the Fukushima-Daiichi accident. In this context, this study proposes a comparison of the RELAP5-3D and RELAP5-mod3 versions, focusing on a two-phase natural circulation loop. For comparison purposes, an experimental data set is part of the analysis. Results showed that during the single-phase regime, the temperature difference between versions is negligible. However, when the two-phase flow regime takes place, different wavelengths and amplitudes of flow instabilities were obtained for each version. When compared to the experimental data set, the RELAP5-3D version provided the best prediction results. (author)

  17. Results of LWR core transient benchmarks

    International Nuclear Information System (INIS)

    Finnemann, H.; Bauer, H.; Galati, A.; Martinelli, R.

    1993-10-01

    LWR core transient (LWRCT) benchmarks, based on well-defined problems with a complete set of input data, are used to assess the discrepancies between three-dimensional space-time kinetics codes in transient calculations. The PWR problem chosen is the ejection of a control assembly from an initially critical core at hot zero power or at full power, each for three different geometrical configurations. The set of problems offers a variety of reactivity excursions which efficiently test the coupled neutronic/thermal-hydraulic models of the codes. The 63 sets of submitted solutions are analyzed by comparison with a nodal reference solution defined by using a finer spatial and temporal resolution than in standard calculations. The BWR problems considered are reactivity excursions caused by cold water injection and pressurization events. In the present paper, only the cold water injection event is discussed and evaluated in some detail. Lacking a reference solution, the evaluation of the 8 sets of BWR contributions relies on a synthetic comparative discussion. The results of this first phase of LWRCT benchmark calculations are quite satisfactory, though some unresolved issues remain. It is therefore concluded that even more challenging problems can be successfully tackled in a suggested second test phase. (authors). 46 figs., 21 tabs., 3 refs

  18. Benchmarking to improve the quality of cystic fibrosis care.

    Science.gov (United States)

    Schechter, Michael S

    2012-11-01

    Benchmarking involves the ascertainment of healthcare programs with most favorable outcomes as a means to identify and spread effective strategies for delivery of care. The recent interest in the development of patient registries for patients with cystic fibrosis (CF) has been fueled in part by an interest in using them to facilitate benchmarking. This review summarizes reports of how benchmarking has been operationalized in attempts to improve CF care. Although certain goals of benchmarking can be accomplished with an exclusive focus on registry data analysis, benchmarking programs in Germany and the United States have supplemented these data analyses with exploratory interactions and discussions to better understand successful approaches to care and encourage their spread throughout the care network. Benchmarking allows the discovery and facilitates the spread of effective approaches to care. It provides a pragmatic alternative to traditional research methods such as randomized controlled trials, providing insights into methods that optimize delivery of care and allowing judgments about the relative effectiveness of different therapeutic approaches.

  19. Definition and Analysis of Heavy Water Reactor Benchmarks for Testing New Wims-D Libraries

    International Nuclear Information System (INIS)

    Leszczynski, Francisco

    2000-01-01

    This work is part of the IAEA WIMS Library Update Project (WLUP). A group of heavy water reactor benchmarks has been selected for testing new WIMS-D libraries, including calculations with the WIMSD5B program and the analysis of results. These benchmarks cover a wide variety of reactors and conditions, from fresh fuels to high burnup, and from natural to enriched uranium. Besides, each benchmark includes variations in lattice pitch and in coolants (normally heavy water and void). Multiplication factors with critical experimental bucklings and other parameters are calculated and compared with experimental reference values. The WIMS libraries used for the calculations were generated with basic data from JEF-2.2 Rev.3 (JEF) and ENDF/B-VI Release 5 (E6). Results obtained with the WIMS-86 (W86) library included with the WIMSD5B package, from Winfrith, UK, with adjusted data, are also included, to show the improvements obtained with the new, non-adjusted libraries. The calculations with WIMSD5B were made with two methods (input program options): PIJ (two-dimensional collision probability method) and DSN (one-dimensional Sn method, with homogenization of materials by ring). The general conclusions are: the library based on JEF data and the DSN method give the best results, which on average are acceptable

  20. Using the fuzzy linear regression method to benchmark the energy efficiency of commercial buildings

    International Nuclear Information System (INIS)

    Chung, William

    2012-01-01

    Highlights: ► Fuzzy linear regression method is used for developing benchmarking systems. ► The systems can be used to benchmark energy efficiency of commercial buildings. ► The resulting benchmarking model can be used by public users. ► The resulting benchmarking model can capture the fuzzy nature of input–output data. -- Abstract: Benchmarking systems built from a sample of reference buildings are needed to conduct benchmarking processes for the energy efficiency of commercial buildings. However, not all benchmarking systems can be adopted by public users (i.e., other non-reference building owners) because of the different methods used in developing such systems. An approach for benchmarking the energy efficiency of commercial buildings using statistical regression analysis to normalize other factors, such as management performance, was developed in a previous work. However, the field data given by experts can be regarded as a possibility distribution. Thus, the previous work may not be adequate to handle such fuzzy input–output data, and a number of fuzzy structures cannot be fully captured by statistical regression analysis. This paper therefore proposes the use of fuzzy linear regression analysis to develop a benchmarking process whose resulting model can be used by public users. An illustrative example is given as well.
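    A Tanaka-style possibilistic (fuzzy) linear regression of the general kind referred to above can be posed as a linear program: choose coefficient centres and non-negative spreads so that every observation lies inside the predicted fuzzy interval at level h, while total spread is minimized. The sketch below is a generic textbook formulation under those assumptions, not the paper's actual model or building data.

```python
import numpy as np
from scipy.optimize import linprog

def fuzzy_linreg(X, y, h=0.0):
    """Possibilistic regression with symmetric triangular coefficients
    (centre a_j, spread c_j >= 0). Every y_i must lie within
    a.x_i +/- (1-h) * c.|x_i|; total spread is minimized via an LP."""
    X = np.column_stack([np.ones(len(y)), X])   # add intercept column
    n, p = X.shape
    absX = np.abs(X)
    k = 1.0 - h
    # Variables z = [a_1..a_p, c_1..c_p]; minimize total spread sum_i c.|x_i|
    cost = np.r_[np.zeros(p), absX.sum(axis=0)]
    # -a.x_i - k c.|x_i| <= -y_i   (upper bound of interval covers y_i)
    A1 = np.hstack([-X, -k * absX])
    #  a.x_i - k c.|x_i| <=  y_i   (lower bound of interval covers y_i)
    A2 = np.hstack([X, -k * absX])
    res = linprog(cost, A_ub=np.vstack([A1, A2]), b_ub=np.r_[-y, y],
                  bounds=[(None, None)] * p + [(0, None)] * p,
                  method="highs")
    return res.x[:p], res.x[p:]                 # centres, spreads

# Hypothetical data: energy use vs. one explanatory variable
X_obs = np.array([[1.0], [2.0], [3.0], [4.0]])
y_obs = np.array([2.1, 3.9, 6.2, 8.0])
a, c = fuzzy_linreg(X_obs, y_obs)
```

A benchmarked building would then be compared against the fuzzy interval predicted for its characteristics rather than against a single crisp regression line.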

  1. Application of the random vibration approach in the seismic analysis of LMFBR structures - Benchmark calculations

    International Nuclear Information System (INIS)

    Preumont, A.; Shilab, S.; Cornaggia, L.; Reale, M.; Labbe, P.; Noe, H.

    1992-01-01

    This benchmark exercise is the continuation of the state-of-the-art review (EUR 11369 EN), which concluded that the random vibration approach could be an effective tool in the seismic analysis of nuclear power plants, with potential advantages over time history and response spectrum techniques. As compared to the latter, the random vibration method provides an accurate treatment of multisupport excitations and non-classical damping, as well as the combination of high-frequency modal components. With respect to the former, the random vibration method offers direct information on statistical variability (probability distribution) and cheaper computations. The disadvantages of the random vibration method are that it is based on stationary results and requires a power spectral density input instead of a response spectrum. A benchmark exercise to compare the three methods on one or several simple structures, from the various aspects mentioned above, has been carried out. The following aspects have been covered with the simplest possible models: (i) statistical variability, (ii) multisupport excitation, (iii) non-classical damping. The random vibration method is therefore concluded to be a reliable method of analysis. Its use is recommended, particularly for preliminary design, owing to its computational advantage over multiple time history analyses
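    As a toy illustration of the PSD-based stationary analysis the abstract refers to (hypothetical oscillator parameters, not the benchmark structures): the stationary displacement variance of a single-degree-of-freedom oscillator under white-noise excitation follows from integrating the input PSD against the squared transfer-function magnitude, and can be checked against the closed-form result sigma^2 = pi*S0 / (2*zeta*wn^3) for a two-sided constant PSD S0.

```python
import numpy as np

def sdof_displacement_variance(S0, wn, zeta, w_max=2000.0, n=400001):
    """Stationary displacement variance of a unit-mass SDOF oscillator
    (natural frequency wn, damping ratio zeta) driven by white noise with
    two-sided constant PSD S0, by quadrature of S0 * |H(w)|^2."""
    w = np.linspace(-w_max, w_max, n)
    H2 = 1.0 / ((wn**2 - w**2) ** 2 + (2.0 * zeta * wn * w) ** 2)
    return np.sum(S0 * H2) * (w[1] - w[0])   # simple rectangle-rule quadrature

def sdof_variance_exact(S0, wn, zeta):
    """Closed-form stationary result for the same oscillator."""
    return np.pi * S0 / (2.0 * zeta * wn**3)

num = sdof_displacement_variance(1.0, 10.0, 0.05)
ref = sdof_variance_exact(1.0, 10.0, 0.05)
```

This is the computational cheapness the abstract alludes to: one frequency-domain integral replaces an ensemble of time history analyses, at the cost of assuming stationarity.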

  2. New Multi-group Transport Neutronics (PHISICS) Capabilities for RELAP5-3D and its Application to Phase I of the OECD/NEA MHTGR-350 MW Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom; Cristian Rabiti; Andrea Alfonsi

    2012-10-01

    PHISICS is a neutronics code system currently under development at the Idaho National Laboratory (INL). Its goal is to provide state-of-the-art simulation capability to reactor designers. The PHISICS modules currently under development are a nodal and semi-structured transport core solver (INSTANT), a depletion module (MRTAU) and a cross section interpolation module (MIXER). The INSTANT module is the most developed of the three: its basic functionality is ready to use, but the code is still under continuous development to extend its capabilities. This paper reports on the effort of coupling the nodal kinetics code package PHISICS (INSTANT/MRTAU/MIXER) to the thermal hydraulics system code RELAP5-3D, to enable full core and system modeling. This makes it possible to model coupled (thermal-hydraulics and neutronics) problems with more options for 3D neutron kinetics than the existing diffusion theory neutron kinetics module in RELAP5-3D (NESTLE) offers. In the second part of the paper, an overview of the OECD/NEA MHTGR-350 MW benchmark is given. This benchmark has been approved by the OECD and is based on the General Atomics 350 MW Modular High Temperature Gas Reactor (MHTGR) design. The benchmark includes coupled neutronics/thermal hydraulics exercises that require more capabilities than RELAP5-3D with NESTLE offers. Therefore, the MHTGR benchmark makes extensive use of the new PHISICS/RELAP5-3D coupling capabilities. The paper presents the preliminary results of the three steady state exercises specified in Phase I of the benchmark using PHISICS/RELAP5-3D.

  3. ZZ-PBMR-400, OECD/NEA PBMR Coupled Neutronics/Thermal Hydraulics Transient Benchmark - The PBMR-400 Core Design

    International Nuclear Information System (INIS)

    Reitsma, Frederik

    2007-01-01

    Description of benchmark: This international benchmark, concerns Pebble-Bed Modular Reactor (PBMR) coupled neutronics/thermal hydraulics transients based on the PBMR-400 MW design. The deterministic neutronics, thermal-hydraulics and transient analysis tools and methods available to design and analyse PBMRs lag, in many cases, behind the state of the art compared to other reactor technologies. This has motivated the testing of existing methods for HTGRs but also the development of more accurate and efficient tools to analyse the neutronics and thermal-hydraulic behaviour for the design and safety evaluations of the PBMR. In addition to the development of new methods, this includes defining appropriate benchmarks to verify and validate the new methods in computer codes. The scope of the benchmark is to establish well-defined problems, based on a common given set of cross sections, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events through a set of multi-dimensional computational test problems. The benchmark exercise has the following objectives: - Establish a standard benchmark for coupled codes (neutronics/thermal-hydraulics) for PBMR design; - Code-to-code comparison using a common cross section library ; - Obtain a detailed understanding of the events and the processes; - Benefit from different approaches, understanding limitations and approximations. Major Design and Operating Characteristics of the PBMR (PBMR Characteristic and Value): Installed thermal capacity: 400 MW(t); Installed electric capacity: 165 MW(e); Load following capability: 100-40-100%; Availability: ≥ 95%; Core configuration: Vertical with fixed centre graphite reflector; Fuel: TRISO ceramic coated U-235 in graphite spheres; Primary coolant: Helium; Primary coolant pressure: 9 MPa; Moderator: Graphite; Core outlet temperature: 900 C.; Core inlet temperature: 500 C.; Cycle type: Direct; Number of circuits: 1; Cycle

  4. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Peiyuan [Univ. of Colorado, Boulder, CO (United States); Brown, Timothy [Univ. of Colorado, Boulder, CO (United States); Fullmer, William D. [Univ. of Colorado, Boulder, CO (United States); Hauser, Thomas [Univ. of Colorado, Boulder, CO (United States); Hrenya, Christine [Univ. of Colorado, Boulder, CO (United States); Grout, Ray [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sitaraman, Hariswaran [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-01-29

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations and a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approx. 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
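    Weak-scaling efficiency of the kind reported above reduces, at its simplest, to dividing the baseline runtime by the runtime at each core count while the per-core workload is held fixed. A sketch with hypothetical timings, not the actual MFiX measurements:

```python
def weak_scaling_efficiency(core_counts, runtimes):
    """Weak-scaling efficiency per core count: t_baseline / t_N.
    With the per-core workload fixed, ideal weak scaling keeps the runtime
    constant, giving an efficiency of 1.0 at every core count."""
    t0 = runtimes[0]
    return {nc: t0 / t for nc, t in zip(core_counts, runtimes)}

# Hypothetical wall-clock times (s) for a fixed per-core particle load
cores = [1, 8, 64, 512, 1024]
times = [100.0, 104.0, 112.0, 131.0, 180.0]
eff = weak_scaling_efficiency(cores, times)
```

A drop in efficiency at large core counts, as in the abstract's approx. 10^3-core limit, typically points at growing communication or load-imbalance overhead rather than at the per-core compute kernels.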

  5. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added in order to obtain a unique selection...

  6. Benchmark of AC and DC active power decoupling circuits for second-order harmonic mitigation in kW-scale single-phase inverters

    DEFF Research Database (Denmark)

    Qin, Zian; Tang, Yi; Loh, Poh Chiang

    2015-01-01

    This paper presents a benchmark study of ac and dc active power decoupling circuits for second-order harmonic mitigation in kW-scale single-phase inverters. First of all, the best solutions of active power decoupling to achieve high efficiency and power density are identified and comprehensively studied, where the commercially available film capacitors, circuit topologies, and control strategies for active power decoupling are all taken into account. Then, an adaptive decoupling voltage control method is proposed to further improve the performance of dc decoupling in terms of efficiency and reliability. The feasibility and superiority of the identified solution for active power decoupling, together with the proposed adaptive decoupling voltage control method, are finally verified by experimental results obtained on a 2 kW single-phase inverter.

  7. Report on the on-going EUREDATA Benchmark Exercise on data analysis

    International Nuclear Information System (INIS)

    Besi, A.; Colombo, A.G.

    1989-01-01

    In April 1987 the JRC was charged by the Assembly of the EuReDatA members with the organization and the coordination of a Benchmark Exercise (BE) on data analysis. The main aim of the BE is a comparison of the methods used by the various organizations to estimate reliability parameters and functions from field data. The reference data set was to be constituted by raw data taken from the Component Event Data Bank (CEDB). The CEDB is a centralized bank, which collects data describing the operational behaviour of components of nuclear power plants operating in various European Countries. (orig./HSCH)

  8. Benchmarking school nursing practice: the North West Regional Benchmarking Group

    OpenAIRE

    Littler, Nadine; Mullen, Margaret; Beckett, Helen; Freshney, Alice; Pinder, Lynn

    2016-01-01

    It is essential that the quality of care is reviewed regularly through robust processes such as benchmarking to ensure all outcomes and resources are evidence-based so that children and young people’s needs are met effectively. This article provides an example of the use of benchmarking in school nursing practice. Benchmarking has been defined as a process for finding, adapting and applying best practices (Camp, 1994). This concept was first adopted in the 1970s ‘from industry where it was us...

  9. OECD/NEA burnup credit criticality benchmarks phase IIIB: Burnup calculations of BWR fuel assemblies for storage and transport

    International Nuclear Information System (INIS)

    Okuno, Hiroshi; Naito, Yoshitaka; Suyama, Kenya

    2002-02-01

    The report describes the final results of the Phase IIIB Benchmark conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD). The Benchmark was intended to compare the predictability of current computer code and data library combinations for the atomic number densities of an irradiated BWR fuel assembly model. The fuel assembly was irradiated at a specific power of 25.6 MW/tHM up to 40 GWd/tHM and cooled for five years. The void fraction was assumed to be uniform throughout the channel box and constant, at 0, 40 and 70%, during burnup. In total, 16 results were submitted from 13 institutes of 7 countries. The calculated atomic number densities of 12 actinides and 20 fission product nuclides were found to be for the most part within a range of ±10% relative to the average, although some results, especially for ¹⁵⁵Eu and the gadolinium isotopes, exceeded the band, which will require further investigation. Pin-wise burnup results agreed well among the participants. The results for the infinite neutron multiplication factor k∞ also accorded well with each other for void fractions of 0 and 40%; however, some results deviated noticeably from the averaged value for the void fraction of 70%. (author)

  10. OECD/NEA burnup credit criticality benchmarks phase IIIB. Burnup calculations of BWR fuel assemblies for storage and transport

    Energy Technology Data Exchange (ETDEWEB)

    Okuno, Hiroshi; Naito, Yoshitaka; Suyama, Kenya [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2002-02-01

    The report describes the final results of the Phase IIIB Benchmark conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD). The Benchmark was intended to compare the predictability of current computer code and data library combinations for the atomic number densities of an irradiated BWR fuel assembly model. The fuel assembly was irradiated at a specific power of 25.6 MW/tHM up to 40 GWd/tHM and cooled for five years. The void fraction was assumed to be uniform throughout the channel box and constant, at 0, 40 and 70%, during burnup. In total, 16 results were submitted from 13 institutes of 7 countries. The calculated atomic number densities of 12 actinides and 20 fission product nuclides were found to be for the most part within a range of ±10% relative to the average, although some results, especially for ¹⁵⁵Eu and the gadolinium isotopes, exceeded the band, which will require further investigation. Pin-wise burnup results agreed well among the participants. The results for the infinite neutron multiplication factor k∞ also accorded well with each other for void fractions of 0 and 40%; however, some results deviated noticeably from the averaged value for the void fraction of 70%. (author)
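
    The ±10% agreement band used in this benchmark can be checked with a simple sketch: each participant's calculated number density for a nuclide is compared against the participant average. The densities below are illustrative placeholders, not actual benchmark submissions.

    ```python
    def relative_deviations(results):
        """Per-participant relative deviation from the participant average,
        as used to judge a +/-10% agreement band for number densities.

        results: dict mapping participant id -> number density (atoms/barn-cm).
        """
        avg = sum(results.values()) / len(results)
        return {p: (v - avg) / avg for p, v in results.items()}

    # Hypothetical densities for a single nuclide (illustrative values only).
    densities = {"A": 1.00e-8, "B": 1.10e-8, "C": 0.95e-8}
    devs = relative_deviations(densities)
    outliers = [p for p, d in devs.items() if abs(d) > 0.10]
    ```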

  11. OECD/DOE/CEA VVER-1000 Coolant Transient Benchmark. Summary Record of the Fourth Workshop (V100-CT4)

    International Nuclear Information System (INIS)

    2006-01-01

    The overall objective of the VVER-1000 coolant transient (V1000CT) benchmark is to assess computer codes used in the safety analysis of VVER power plants, specifically for their use in analysis of reactivity transients in a VVER-1000. The V1000CT benchmark consists of two phases: V1000CT-1 is a simulation of the switching on of one main coolant pump (MCP) when the other three MCPs are in operation, and V1000CT-2 concerns calculation of coolant mixing tests and main steam line break (MSLB) scenarios. Each of the two phases contains three exercises. The reference problem chosen for simulation in Phase 1 is a MCP switching on when the other three main coolant pumps are in operation in a VVER-1000. This event is characterized by rapid increase in the flow through the core resulting in a coolant temperature decrease, which is spatially dependent. This leads to insertion of spatially distributed positive reactivity due to the modelled feedback mechanisms and non-symmetric power distribution. Simulation of the transient requires evaluation of core response from a multi-dimensional perspective (coupled three-dimensional neutronics/core thermal-hydraulics) supplemented by a one-dimensional simulation of the remainder of the reactor coolant system. Three exercises are defined in the framework of Phase 1: a) Exercise 1 - Point kinetics plant simulation; b) Exercise 2 - Coupled 3-D neutronics/core thermal-hydraulics response evaluation; c) Exercise 3 - Best-estimate coupled 3-D core/plant system transient modelling. In addition to the measured (experiment) scenario, extreme calculation scenarios were defined in the frame of Exercise 3 for better testing 3-D neutronics/thermal-hydraulics techniques. The proposals concerned: rod ejection simulations with scram set points at two different power levels. 
Since the previous coupled code benchmarks indicated that further development of the mixing computation models in the integrated codes is necessary, a coolant mixing experiment and

  12. Benchmarking Foot Trajectory Estimation Methods for Mobile Gait Analysis

    Directory of Open Access Journals (Sweden)

    Julius Hannink

    2017-08-01

    Mobile gait analysis systems based on inertial sensing on the shoe are applied in a wide range of applications. Especially for medical applications, they can give new insights into motor impairment in, e.g., neurodegenerative disease and help objectify patient assessment. One key component in these systems is the reconstruction of the foot trajectories from inertial data. In the literature, various methods for this task have been proposed. However, performance is evaluated on a variety of datasets due to the lack of large, generally accepted benchmark datasets; this hinders a fair comparison of methods. In this work, we implement three orientation estimation and three double integration schemes for use in a foot trajectory estimation pipeline. All methods are drawn from the literature and evaluated against a marker-based motion capture reference. We provide a fair comparison on the same dataset consisting of 735 strides from 16 healthy subjects. As a result, the implemented methods are ranked and we identify the most suitable processing pipeline for foot trajectory estimation in the context of mobile gait analysis.
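
    A minimal sketch of a double integration scheme of the kind evaluated in such pipelines, assuming one-dimensional, gravity-free acceleration and a zero-velocity condition at the stride boundaries; the linear drift subtraction is a common zero-velocity-update correction, not necessarily the exact method used in the paper.

    ```python
    import numpy as np

    def integrate_stride(acc, dt):
        """Double integration of free acceleration over one stride.

        Assumes gravity has been removed and the foot is at rest at both
        stride boundaries, so linear velocity drift can be subtracted
        before the second integration (zero-velocity-update correction).
        """
        v = np.cumsum(acc) * dt                    # first integration: velocity
        drift = np.linspace(0.0, v[-1], len(v))    # enforce v = 0 at stride end
        v_corrected = v - drift
        p = np.cumsum(v_corrected) * dt            # second integration: position
        return p

    # Synthetic example: a constant sensor bias of 0.1 m/s^2 over 1 s, which
    # the drift correction largely removes from the final displacement.
    acc = np.full(100, 0.1)
    p = integrate_stride(acc, dt=0.01)
    ```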

  13. Benchmarking of nuclear economics tools

    International Nuclear Information System (INIS)

    Moore, Megan; Korinny, Andriy; Shropshire, David; Sadhankar, Ramesh

    2017-01-01

    Highlights: • INPRO and GIF economic tools exhibited good alignment in total capital cost estimation. • Subtle discrepancies in the cost result from differences in financing and the fuel cycle assumptions. • A common set of assumptions was found to reduce the discrepancies to 1% or less. • Opportunities for harmonisation of economic tools exist. - Abstract: Benchmarking of the economics methodologies developed by the Generation IV International Forum (GIF) and the International Atomic Energy Agency’s International Project on Innovative Nuclear Reactors and Fuel Cycles (INPRO) was performed for three Generation IV nuclear energy systems. The Economic Modeling Working Group of GIF developed an Excel-based spreadsheet package, G4ECONS (Generation 4 Excel-based Calculation Of Nuclear Systems), to calculate the total capital investment cost (TCIC) and the levelised unit energy cost (LUEC). G4ECONS is sufficiently generic in the sense that it can accept the types of projected input, performance and cost data that are expected to become available for Generation IV systems through various development phases, and that it can model both open and closed fuel cycles. The Nuclear Energy System Assessment (NESA) Economic Support Tool (NEST) was developed to enable an economic analysis using the INPRO methodology to easily calculate outputs including the TCIC, LUEC and other financial figures of merit including internal rate of return, return on investment and net present value. NEST is also Excel based and can be used to evaluate nuclear reactor systems using the open fuel cycle, MOX (mixed oxide) fuel recycling and closed cycles. A Super Critical Water-cooled Reactor system with an open fuel cycle and two Fast Reactor systems, one with a break-even fuel cycle and another with a burner fuel cycle, were selected for the benchmarking exercise. Published data on capital and operating costs were used for economics analyses using G4ECONS and NEST tools. Both G4ECONS and
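
    The levelised unit energy cost (LUEC) computed by tools like G4ECONS and NEST can be sketched generically as discounted lifetime costs divided by discounted lifetime generation. The formula and figures below are a simplified illustration with invented inputs, not the actual G4ECONS or NEST implementation.

    ```python
    def luec(capital, annual_om, annual_fuel, annual_mwh, rate, years):
        """Levelised unit energy cost in $/MWh.

        Simplification: overnight capital paid at year 0; constant O&M,
        fuel cost and electricity output in every operating year.
        """
        disc = [(1.0 + rate) ** -t for t in range(1, years + 1)]
        costs = capital + sum((annual_om + annual_fuel) * d for d in disc)
        energy = sum(annual_mwh * d for d in disc)
        return costs / energy

    # Illustrative figures only, not data from the benchmarking exercise.
    cost = luec(capital=4.0e9, annual_om=1.0e8, annual_fuel=5.0e7,
                annual_mwh=8.0e6, rate=0.05, years=60)
    ```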

  14. Uncertainty and sensitivity analysis of control strategies using the benchmark simulation model No1 (BSM1)

    DEFF Research Database (Denmark)

    Flores-Alsina, Xavier; Rodriguez-Roda, Ignasi; Sin, Gürkan

    2009-01-01

    The objective of this paper is to perform an uncertainty and sensitivity analysis of the predictions of the Benchmark Simulation Model (BSM) No. 1 when comparing four activated sludge control strategies. The Monte Carlo simulation technique is used to evaluate the uncertainty in the BSM1 predictions...
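
    The Monte Carlo technique referred to here can be sketched generically: sample the uncertain inputs from assumed ranges, evaluate the model for each sample, and summarise the output distribution. The input range and toy model below are illustrative stand-ins, not BSM1 parameters.

    ```python
    import random

    def monte_carlo_uncertainty(model, n_samples=1000, seed=1):
        """Propagate input uncertainty through a model by Monte Carlo
        sampling; returns the output mean and an empirical 95% interval.
        """
        random.seed(seed)
        outputs = []
        for _ in range(n_samples):
            # Sample an uncertain input from an assumed range (illustrative).
            growth_rate = random.uniform(3.0, 5.0)
            outputs.append(model(growth_rate))
        outputs.sort()
        lo = outputs[int(0.025 * n_samples)]
        hi = outputs[int(0.975 * n_samples)]
        return sum(outputs) / n_samples, (lo, hi)

    # Toy stand-in for a plant performance metric depending on one parameter.
    mean, (lo, hi) = monte_carlo_uncertainty(lambda mu: 10.0 / mu)
    ```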

  15. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency.

  16. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  17. Deflection-based method for seismic response analysis of concrete walls: Benchmarking of CAMUS experiment

    International Nuclear Information System (INIS)

    Basu, Prabir C.; Roshan, A.D.

    2007-01-01

    A number of shake table tests were conducted on a scaled-down model of a concrete wall as part of the CAMUS experiment. The experiments were conducted between 1996 and 1998 in the CEA facilities in Saclay, France. Benchmarking of the CAMUS experiments was undertaken as part of the coordinated research program on 'Safety Significance of Near-Field Earthquakes' organised by the International Atomic Energy Agency (IAEA). The technique of the deflection-based method was adopted for the benchmarking exercise. The non-linear static procedure of the deflection-based method has two basic steps: pushover analysis, and determination of the target displacement or performance point. Pushover analysis is an analytical procedure to assess the capacity a structural system can offer to withstand seismic loading, considering redundancies and inelastic deformation. The outcome of a pushover analysis is the force-displacement (base shear versus top/roof displacement) curve of the structure. This is obtained by step-by-step non-linear static analysis of the structure with increasing load. The second step is to determine the target displacement, also known as the performance point. The target displacement is the likely maximum displacement of the structure due to a specified seismic input motion. Established procedures, FEMA-273 and ATC-40, are available to determine this maximum deflection. The responses of the CAMUS test specimen were determined by the deflection-based method, and the analytically calculated values compare well with the test results.

  18. TRACE/PARCS analysis of the OECD/NEA Oskarshamn-2 BWR stability benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Kozlowski, T. [Univ. of Illinois, Urbana-Champaign, IL (United States); Downar, T.; Xu, Y.; Wysocki, A. [Univ. of Michigan, Ann Arbor, MI (United States); Ivanov, K.; Magedanz, J.; Hardgrove, M. [Pennsylvania State Univ., Univ. Park, PA (United States); March-Leuba, J. [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Hudson, N.; Woodyatt, D. [Nuclear Regulatory Commission, Rockville, MD (United States)

    2012-07-01

    On February 25, 1999, the Oskarshamn-2 NPP experienced a stability event which culminated in diverging power oscillations with a decay ratio of about 1.4. The event was successfully modeled by the TRACE/PARCS coupled code system, and further analysis of the event is described in this paper. The results show very good agreement with the plant data, capturing the entire behavior of the transient including the onset of instability, growth of the oscillations (decay ratio) and oscillation frequency. This provides confidence in the prediction of other parameters which are not available from the plant records. The event provides coupled code validation for a challenging BWR stability event, which involves the accurate simulation of neutron kinetics (NK), thermal-hydraulics (TH), and TH/NK coupling. The success of this work has demonstrated the ability of the 3-D coupled systems code TRACE/PARCS to capture the complex behavior of BWR stability events. The problem was released as an international OECD/NEA benchmark, and it is the first benchmark based on measured plant data for a stability event with a DR greater than one. Interested participants are invited to contact the authors for more information. (authors)
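
    The decay ratio quoted for the event (about 1.4) is conventionally the ratio of successive oscillation peak amplitudes; a value above one means the oscillation is diverging. A minimal sketch with synthetic peak data, not plant measurements:

    ```python
    def decay_ratio(peaks):
        """Decay ratio of a power oscillation: the average ratio of
        successive peak amplitudes. DR > 1 indicates a diverging
        (unstable) oscillation.
        """
        ratios = [b / a for a, b in zip(peaks, peaks[1:])]
        return sum(ratios) / len(ratios)

    # Synthetic peak amplitudes growing by 40% per cycle, mimicking an
    # unstable event with DR ~ 1.4 (illustrative values only).
    peaks = [1.0, 1.4, 1.96, 2.744]
    dr = decay_ratio(peaks)
    ```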

  19. Benchmarking criticality analysis of TRIGA fuel storage racks.

    Science.gov (United States)

    Robinson, Matthew Loren; DeBey, Timothy M; Higginbotham, Jack F

    2017-01-01

    A criticality analysis was benchmarked against sub-criticality measurements of the hexagonal fuel storage racks at the United States Geological Survey TRIGA MARK I reactor in Denver. These racks, which hold up to 19 fuel elements each, are arranged at 0.61 m (2 ft) spacings around the outer edge of the reactor. A 3-dimensional model of the racks was created using MCNP5, and the model was verified experimentally by comparison to measured subcritical multiplication data collected during an approach-to-critical loading of two of the racks. The validated model was then used to show that in the extreme condition, where the entire circumference of the pool is lined with racks loaded with used fuel, the storage array is subcritical with a k value of about 0.71, well below the regulatory limit of 0.8. A model was also constructed of the rectangular 2×10 fuel storage array used in many other TRIGA reactors to validate the technique against the original TRIGA licensing sub-critical analysis performed in 1966. The fuel used in this study was standard 20% enriched (LEU) aluminum- or stainless-steel-clad TRIGA fuel. Copyright © 2016. Published by Elsevier Ltd.
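
    Approach-to-critical measurements of the kind referenced above are conventionally analysed with the inverse multiplication (1/M) method; a minimal sketch under a simple point-source model (the count rates below are synthetic, not USGS data):

    ```python
    def inverse_multiplication(counts, count_source_only):
        """1/M values for an approach-to-critical loading.

        M = C_loaded / C_source is the subcritical multiplication; 1/M
        extrapolating toward zero signals the approach to criticality.
        """
        return [count_source_only / c for c in counts]

    def k_from_m(m):
        """Effective multiplication factor from subcritical multiplication,
        using M = 1 / (1 - k) in a simple point-source model."""
        return 1.0 - 1.0 / m

    # Synthetic detector count rates as fuel elements are added.
    inv_m = inverse_multiplication([200.0, 250.0, 400.0], count_source_only=100.0)
    k = k_from_m(400.0 / 100.0)
    ```

    In this toy data the final loading multiplies the source counts fourfold, corresponding to k = 0.75 under the stated model.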

  20. The institutionalization of benchmarking in the Danish construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard; Gottlieb, Stefan Christoffer

    The chapter accounts for the data collection methods used to conduct the empirical data collection and the appertaining choices that are made, based on the account for analyzing institutionalization processes. The analysis unfolds over seven chapters, starting with an exposition of the political foundation...... and disseminated to the construction industry. The fourth chapter demonstrates how benchmarking was concretized into a benchmarking system and articulated to address several political focus areas for the construction industry. BEC accordingly became a political arena where many local perspectives and strategic...... emerged as actors expressed diverse political interests in the institutionalization of benchmarking. The political struggles accounted for in chapter five constituted a powerful political pressure and called for transformations of the institutionalization in order for benchmarking to attain institutional...

  1. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  2. Simulation benchmark based on THAI-experiment on dissolution of a steam stratification by natural convection

    Energy Technology Data Exchange (ETDEWEB)

    Freitag, M., E-mail: freitag@becker-technologies.com; Schmidt, E.; Gupta, S.; Poss, G.

    2016-04-01

    Highlights: • We studied the generation and dissolution of steam stratification in natural convection. • We performed a computer code benchmark including blind and open phases. • The dissolution of stratification was predicted only qualitatively by LP and CFD models during the blind simulation phase. - Abstract: Locally enriched hydrogen, as in stratification, may contribute to early containment failure in the course of severe nuclear reactor accidents. During accident sequences steam might also accumulate in stratifications which can directly influence the distribution and ignitability of hydrogen mixtures in containments. An international code benchmark including Computational Fluid Dynamics (CFD) and Lumped Parameter (LP) codes was conducted in the frame of the German THAI program. The basis for the benchmark was experiment TH24.3, which investigates the dissolution of a steam layer subject to natural convection in the steam-air atmosphere of the THAI vessel. The test provides validation data for the development of CFD and LP models to simulate the atmosphere in the containment of a nuclear reactor installation. In test TH24.3 saturated steam is injected into the upper third of the vessel, forming a stratification layer which is then mixed by a superposed thermal convection. In this paper the simulation benchmark is evaluated in addition to the general discussion of the experimental transient of test TH24.3. Concerning the steam stratification build-up and dilution of the stratification, the numerical programs showed very different results during the blind evaluation phase, but improved noticeably during the open simulation phase.

  3. Status on benchmark testing of CENDL-3

    CERN Document Server

    Liu Ping

    2002-01-01

    CENDL-3, the newest version of the China Evaluated Nuclear Data Library, has been finished and recently distributed for benchmark analysis. The processing was carried out using the NJOY nuclear data processing code system. The calculations and analysis of benchmarks were done with the Monte Carlo code MCNP and the reactor lattice code WIMSD5A. The calculated results were compared with the experimental results based on ENDF/B6. In most thermal and fast uranium criticality benchmarks, the calculated k_eff values with CENDL-3 were in good agreement with experimental results. In the plutonium fast cores, the k_eff values were improved significantly with CENDL-3. This is due to the reevaluation of the fission spectrum and elastic angular distributions of ²³⁹Pu and ²⁴⁰Pu. CENDL-3 underestimated the k_eff values compared with other evaluated data libraries for most spherical or cylindrical assemblies of plutonium or uranium with beryllium

  4. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  5. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views...... are put to the test. The first is a reformist benchmarking cycle where organisations defer to experts to create a benchmark that conforms with the broader system of politico-economic norms. The second is a revolutionary benchmarking cycle driven by expert-activists that seek to contest strong vested...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  6. ES-RBE Event sequence reliability Benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.E.J.

    1991-01-01

    The Event Sequence Reliability Benchmark Exercise (ES-RBE) can be considered a logical extension of the other three Reliability Benchmark Exercises: the RBE on Systems Analysis, the RBE on Common Cause Failures and the RBE on Human Factors. The latter, constituting Activity No. 1, was concluded by the end of 1987. The ES-RBE covered the techniques that are currently used for analysing and quantifying sequences of events starting from an initiating event to various plant damage states, including analysis of various system failures and/or successes, human intervention failure and/or success, and dependencies between systems. In this way, one of the aims of the ES-RBE was to integrate the experiences gained in the previous exercises

  7. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Benchmarking is considered one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan’s Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how the project's systematic implementation led to success.

  8. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  9. The CMSSW benchmarking suite: Using HEP code to measure CPU performance

    International Nuclear Information System (INIS)

    Benelli, G

    2010-01-01

    The demanding computing needs of the CMS experiment require thoughtful planning and management of its computing infrastructure. A key factor in this process is the use of realistic benchmarks when assessing the computing power of the different architectures available. In recent years a discrepancy has been observed between the CPU performance estimates given by the reference benchmark for HEP computing (SPECint) and the actual performance of HEP code. Making use of the CPU performance tools from the CMSSW performance suite, comparative CPU performance studies have been carried out on several architectures. A benchmarking suite has been developed and integrated in the CMSSW framework, to allow computing centers and interested third parties to benchmark architectures directly with CMSSW. The CMSSW benchmarking suite can be used out of the box to test and compare several machines in terms of CPU performance, reporting the different benchmarking scores (e.g. by processing step) and results at the desired level of detail. In this talk we describe briefly the CMSSW software performance suite, and in detail the CMSSW benchmarking suite client/server design, the performance data analysis and the available CMSSW benchmark scores. The experience in the use of HEP code for benchmarking will be discussed and CMSSW benchmark results presented.
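
    A per-processing-step benchmark report of the kind described can be sketched as a simple timing harness; the step names and workloads below are generic stand-ins, not CMSSW code or its actual API.

    ```python
    import time

    def benchmark_steps(steps, n_events):
        """Time each processing step over n_events and report a per-step
        score in events per second, mimicking a per-step benchmark report.
        """
        report = {}
        for name, fn in steps.items():
            start = time.perf_counter()
            for i in range(n_events):
                fn(i)
            elapsed = time.perf_counter() - start
            report[name] = n_events / elapsed if elapsed > 0 else float("inf")
        return report

    # Stand-in workloads; a real suite would run actual reconstruction code.
    report = benchmark_steps({"generate": lambda i: i * i,
                              "reconstruct": lambda i: sum(range(50))},
                             n_events=10000)
    ```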

  10. Framework for benchmarking online retailing performance using fuzzy AHP and TOPSIS method

    Directory of Open Access Journals (Sweden)

    M. Ahsan Akhtar Hasin

    2012-08-01

    Due to the increasing penetration of internet connectivity, on-line retail is growing from the pioneer phase to increasing integration within people's lives and companies' normal business practices. In this increasingly competitive environment, on-line retail service providers require a systematic and structured approach to gain a cutting edge over rivals. Thus, the use of benchmarking has become indispensable for on-line retail service providers to accomplish superior performance. This paper uses the fuzzy analytic hierarchy process (FAHP) approach to support a generic on-line retail benchmarking process. Critical success factors for on-line retail service have been identified from a structured questionnaire and the literature and prioritized using fuzzy AHP. Using these critical success factors, the performance levels of ORENET, an on-line retail service provider, are benchmarked along with four other on-line service providers using the TOPSIS method. Based on the benchmark, their relative ranking is also illustrated.
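
    The TOPSIS ranking step can be sketched as follows: vector-normalise the decision matrix, weight it (e.g. with fuzzy-AHP-derived weights), and score each alternative by its relative closeness to the ideal solution. The retailer scores and weights below are hypothetical, not data from the paper.

    ```python
    import math

    def topsis(matrix, weights, benefit):
        """Rank alternatives by relative closeness to the ideal solution.

        matrix: rows = alternatives, columns = criteria scores.
        weights: criterion weights (e.g. from fuzzy AHP), summing to 1.
        benefit: per-criterion flag, True for benefit, False for cost criteria.
        """
        ncrit = len(weights)
        # Vector-normalise each column, then apply the weights.
        norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncrit)]
        v = [[weights[j] * row[j] / norms[j] for j in range(ncrit)] for row in matrix]
        ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
        worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
        scores = []
        for row in v:
            d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
            d_neg = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
            scores.append(d_neg / (d_pos + d_neg))
        return scores

    # Three hypothetical retailers scored on two benefit criteria.
    scores = topsis([[7, 9], [8, 7], [6, 6]], weights=[0.6, 0.4],
                    benefit=[True, True])
    best = max(range(len(scores)), key=scores.__getitem__)
    ```

    A score of 1 would mean an alternative coincides with the ideal solution; the alternative matching the anti-ideal on every criterion scores 0.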

  11. Joint European contribution to phases 1 and 2 of the BN600 hybrid reactor benchmark core analysis

    International Nuclear Information System (INIS)

    Rimpault, Gerald; Newton, Tim; Smith, Peter

    2000-01-01

    This paper describes the ERANOS code developed within the European cooperation on fast reactors. The reference scheme and ERANOS code validation are included. The method for BN-600 reactor core analysis and the results of phases 1 and 2 are presented. They include effective multiplication factors; fuel Doppler constants; steel Doppler constants; sodium density coefficients; steel density coefficients; fuel density coefficients; absorber density coefficients; axial and radial expansion coefficients; dynamic parameters; power distribution; beta and neutron lifetime; and reaction rate distribution

  12. Benchmarking for Higher Education.

    Science.gov (United States)

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  13. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  14. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  15. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking applications. The present study analyses voluntary benchmarking in a public setting that is oriented towards learning. The study contributes by showing how benchmarking can be mobilised for learning and offers evidence of the effects of such benchmarking for performance outcomes. It concludes that benchmarking can enable learning in public settings but that this requires actors to invest in ensuring that benchmark data are directed towards improvement.

  16. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  17. International Benchmark based on Pressurised Water Reactor Sub-channel and Bundle Tests. Volume III: Departure from Nucleate Boiling

    International Nuclear Information System (INIS)

    Rubin, Adam; Avramova, Maria; Velazquez-Lozada, Alexander

    2016-03-01

    This report summarises the second phase of the Nuclear Energy Agency (NEA) and Nuclear Regulatory Commission (NRC) Benchmark Based on NUPEC PWR Sub-channel and Bundle Tests (PSBT), which was intended to provide data for the verification of Departure from Nucleate Boiling (DNB) prediction in existing thermal-hydraulics codes and to provide direction for the development of future methods. This phase was composed of three exercises: Exercise 1, a fluid temperature benchmark; Exercise 2, a steady-state rod bundle benchmark; and Exercise 3, a transient rod bundle benchmark. The experimental data provided to the participants of this benchmark come from a series of void measurement tests using full-size mock-ups for both BWRs and PWRs. These tests were performed from 1987 to 1995 by the Nuclear Power Engineering Corporation (NUPEC) in Japan and made available by the Japan Nuclear Energy Safety Organisation (JNES) for the purposes of this benchmark, which was organised by Pennsylvania State University. Nine institutions from seven countries participated in this benchmark. Nine different computer codes were used across Exercises 1, 2 and 3, among them porous-media, sub-channel and system thermal-hydraulic codes. The improvement of FLICA-OVAP (sub-channel) over FLICA (sub-channel) was noticeable; the main difference between the two is that FLICA-OVAP implicitly assigned the flow regime based on drift flux, while FLICA assumed single-phase flow. In Exercises 2 and 3, the codes were generally able to predict the DNB power as well as the axial location of the onset of DNB (for the steady-state cases) and the time of DNB (for the transient cases). It was noted that the codes that used the Electric Power Research Institute (EPRI) critical heat flux (CHF) correlation had the lowest mean error in Exercise 2 for the predicted DNB power

  18. The benchmark testing of 9Be of CENDL-3

    International Nuclear Information System (INIS)

    Liu Ping

    2002-01-01

    CENDL-3, the latest version of the China Evaluated Nuclear Data Library, has been completed. The data for ⁹Be were updated and recently distributed for benchmark analysis. The calculated results are presented and compared with the experimental data and with results based on other evaluated nuclear data libraries. The results show that CENDL-3 performs better than the others for most benchmarks.

  19. Benchmarking reference services: an introduction.

    Science.gov (United States)

    Marshall, J G; Buchanan, H S

    1995-01-01

    Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.

  20. Benchmarking analysis of three multimedia models: RESRAD, MMSOILS, and MEPAS

    International Nuclear Information System (INIS)

    Cheng, J.J.; Faillace, E.R.; Gnanapragasam, E.K.

    1995-11-01

    Multimedia modelers from the United States Environmental Protection Agency (EPA) and the United States Department of Energy (DOE) collaborated to conduct a comprehensive and quantitative benchmarking analysis of three multimedia models. The three models-RESRAD (DOE), MMSOILS (EPA), and MEPAS (DOE)-represent analytically based tools that are used by the respective agencies for performing human exposure and health risk assessments. The study is performed by individuals who participate directly in the ongoing design, development, and application of the models. A list of physical/chemical/biological processes related to multimedia-based exposure and risk assessment is first presented as a basis for comparing the overall capabilities of RESRAD, MMSOILS, and MEPAS. Model design, formulation, and function are then examined by applying the models to a series of hypothetical problems. Major components of the models (e.g., atmospheric, surface water, groundwater) are evaluated separately and then studied as part of an integrated system for the assessment of a multimedia release scenario to determine effects due to linking components of the models. Seven modeling scenarios are used in the conduct of this benchmarking study: (1) direct biosphere exposure, (2) direct release to the air, (3) direct release to the vadose zone, (4) direct release to the saturated zone, (5) direct release to surface water, (6) surface water hydrology, and (7) multimedia release. Study results show that the models differ with respect to (1) environmental processes included (i.e., model features) and (2) the mathematical formulation and assumptions related to the implementation of solutions (i.e., parameterization)

  1. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  2. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
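The first-tier screening logic described above is simple to express in code. The sketch below uses hypothetical contaminant names, concentrations, and benchmark values purely for illustration; it is not drawn from the report's actual benchmark tables:

```python
def screen_contaminants(measured, benchmarks):
    """First-tier screen: retain a contaminant as a contaminant of potential
    concern (COPC) if its measured concentration exceeds its NOAEL-based
    benchmark, or if no benchmark is available for it."""
    copcs = []
    for chem, conc in measured.items():
        benchmark = benchmarks.get(chem)
        if benchmark is None or conc > benchmark:
            # Benchmark exceeded (or missing): investigate further in tier 2.
            copcs.append(chem)
    return copcs

# Hypothetical water concentrations and benchmarks (mg/L).
measured = {"cadmium": 0.004, "zinc": 0.3, "lead": 0.001}
benchmarks = {"cadmium": 0.002, "zinc": 0.5, "lead": 0.01}
retained = screen_contaminants(measured, benchmarks)
```

With these illustrative numbers, only cadmium exceeds its benchmark and would be retained for the second-tier baseline risk assessment.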

  3. Communication: energy benchmarking with quantum Monte Carlo for water nano-droplets and bulk liquid water.

    Science.gov (United States)

    Alfè, D; Bartók, A P; Csányi, G; Gillan, M J

    2013-06-14

    We show the feasibility of using quantum Monte Carlo (QMC) to compute benchmark energies for configuration samples of thermal-equilibrium water clusters and the bulk liquid containing up to 64 molecules. Evidence that the accuracy of these benchmarks approaches that of basis-set converged coupled-cluster calculations is noted. We illustrate the usefulness of the benchmarks by using them to analyze the errors of the popular BLYP approximation of density functional theory (DFT). The results indicate the possibility of using QMC as a routine tool for analyzing DFT errors for non-covalent bonding in many types of condensed-phase molecular system.

  4. Benchmarking in academic pharmacy departments.

    Science.gov (United States)

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    Benchmarking in academic pharmacy and recommendations for its potential uses in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather these data have had limited success. We believe this information is potentially important, urge that efforts to gather it be continued, and offer suggestions to achieve full participation.

  5. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is as a structured continuous collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  6. Proton-deuteron phase-shift analysis above the deuteron breakup threshold

    Energy Technology Data Exchange (ETDEWEB)

    Tornow, W. [Duke Univ., Durham, NC (United States), Dept. of Physics; Triangle Universities Nuclear Laboratory, Box 90308, Durham, NC (United States)]; Witala, H. [Institute of Physics, Jagellonian University, Reymonta 4, 30059 Cracow (Poland)]

    1998-03-02

    We have performed single-energy phase-shift analyses of proton-deuteron elastic scattering data in the proton energy range from 3.5 to 10 MeV. The resulting values for the ²S₁/₂ and ⁴P₁/₂, ⁴P₃/₂, and ⁴P₅/₂ phase shifts are important benchmark values for three-nucleon calculations based on nucleon-nucleon potential models (with and without three-nucleon forces) aimed at describing the triton binding energy and at resolving the nucleon-deuteron A_y(θ) and iT₁₁(θ) puzzles, respectively. (orig.) 7 refs.
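For orientation, phase shifts parameterize the elastic scattering amplitude through a partial-wave expansion. In the simplest spin-zero illustration (the actual proton-deuteron analysis involves spin-coupled channels such as those listed above, plus Coulomb effects), the amplitude and differential cross section take the standard form:

```latex
f(\theta) = \frac{1}{k} \sum_{\ell=0}^{\infty} (2\ell+1)\, e^{i\delta_\ell} \sin\delta_\ell \, P_\ell(\cos\theta),
\qquad
\frac{d\sigma}{d\Omega} = \lvert f(\theta) \rvert^2
```

Here k is the centre-of-mass wave number, δ_ℓ the phase shift in partial wave ℓ, and P_ℓ the Legendre polynomials; a phase-shift analysis fits the δ_ℓ to the measured observables.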

  7. Finite element analysis of a 1:4 scale PCCV model - Korea Atomic Energy Research Institute, Phase 2

    International Nuclear Information System (INIS)

    Lee, Hong-pyo; Choun, Young-sun

    2005-01-01

    This report covers phase 2 of the International Standard Problem 48 (ISP48) benchmark on containment integrity. It describes the finite element (FE) analysis results for a 1:4 scale model of a pre-stressed concrete containment vessel (PCCV). The objective of the present FE analysis is to evaluate the ultimate internal pressure capacity of the PCCV as well as its failure mechanism when the PCCV model is subjected to a monotonic internal pressure beyond its design pressure. The FE analysis used two concrete failure criteria with the commercial code ABAQUS: an axisymmetric model with a modified Drucker-Prager failure criterion, and a 3-dimensional model with a damaged plasticity model. The FE analysis results for the ultimate pressure and failure modes show good agreement with the experimental data.

  8. HPC Benchmark Suite NMx, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — In the phase II effort, Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for...

  9. SMORN-III benchmark test on reactor noise analysis methods

    International Nuclear Information System (INIS)

    Shinohara, Yoshikuni; Hirota, Jitsuya

    1984-02-01

    A computational benchmark test was performed in conjunction with the Third Specialists Meeting on Reactor Noise (SMORN-III), which was held in Tokyo, Japan, in October 1981. This report summarizes the results of the test as well as the work done in preparation for the test. (author)

  10. COSA II Further benchmark exercises to compare geomechanical computer codes for salt

    International Nuclear Information System (INIS)

    Lowe, M.J.S.; Knowles, N.C.

    1989-01-01

    Project COSA (COmputer COdes COmparison for SAlt) was a benchmarking exercise involving the numerical modelling of the geomechanical behaviour of heated rock salt. Its main objective was to assess the current European capability to predict the geomechanical behaviour of salt, in the context of the disposal of heat-producing radioactive waste in salt formations. Twelve organisations participated in the exercise, in which their solutions to a number of benchmark problems were compared. The project was organised in two distinct phases: the first, from 1984 to 1986, concentrated on the verification of the computer codes; the second, from 1986 to 1988, progressed to validation, using three in-situ experiments at the Asse research facility in West Germany as a basis for comparison. This document reports the activities of the second phase of the project and presents the results, assessments and conclusions.

  11. Melcor benchmarking against integral severe fuel damage tests

    Energy Technology Data Exchange (ETDEWEB)

    Madni, I.K. [Brookhaven National Lab., Upton, NY (United States)]

    1995-09-01

    MELCOR is a fully integrated computer code that models all phases of the progression of severe accidents in light water reactor nuclear power plants, and is being developed for the U.S. Nuclear Regulatory Commission (NRC) by Sandia National Laboratories (SNL). Brookhaven National Laboratory (BNL) has a program with the NRC to provide independent assessment of MELCOR, and a very important part of this program is to benchmark MELCOR against experimental data from integral severe fuel damage tests and against predictions of those data from more mechanistic codes such as SCDAP or SCDAP/RELAP5. Benchmarking analyses with MELCOR have been carried out at BNL for five integral severe fuel damage tests, including PBF SFD 1-1 and SFD 1-4, and NRU FLHT-2. This paper summarizes these analyses and their role in identifying areas of modeling strengths and weaknesses in MELCOR.

  12. EGS4 benchmark program

    International Nuclear Information System (INIS)

    Yasu, Y.; Hirayama, H.; Namito, Y.; Yashiro, S.

    1995-01-01

    This paper proposes the EGS4 Benchmark Suite, which consists of three programs called UCSAMPL4, UCSAMPL4I and XYZDOS. It also evaluates optimization methods of recent RISC/UNIX systems, such as IBM, HP, DEC, Hitachi and Fujitsu, for the benchmark suite. When particular compiler options and math libraries were included in the evaluation process, systems performed significantly better. The observed performance of some of the RISC/UNIX systems was beyond that of so-called mainframes from IBM, Hitachi or Fujitsu. The computer performance of the EGS4 Code System on an HP9000/735 (99 MHz) was defined to be the unit of performance, the EGS4 Unit. The EGS4 Benchmark Suite was also run on various PCs, such as Pentium, i486 and DEC Alpha machines; the performance of recent fast PCs reaches that of recent RISC/UNIX systems. The benchmark programs have also been correlated with industry benchmark programs, namely SPECmark. (author)
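Normalizing measured runtimes to a reference machine, as in the EGS4 Unit defined above, is a one-line computation. The runtimes below are hypothetical placeholders, not measurements from the paper:

```python
# Hypothetical wall-clock times (seconds) for one benchmark program.
runtimes = {
    "HP9000/735 (99MHz)": 100.0,  # reference machine: 1.0 EGS4 Unit by definition
    "IBM RS/6000": 85.0,
    "DEC Alpha": 70.0,
    "Pentium PC": 120.0,
}

reference = runtimes["HP9000/735 (99MHz)"]
# A higher EGS4 Unit value means the machine ran the benchmark faster
# than the reference HP9000/735.
egs4_units = {name: reference / t for name, t in runtimes.items()}
```

The same ratio-to-reference scheme underlies most relative performance metrics, including the SPECmark figures the paper correlates against.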

  13. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the roles of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and including "external" gardening demands is investigated. This includes the impacts (in isolation and in combination) of the following: occupancy rates (1 to 4); roof size (12.5 m² to 100 m²); garden size (25 m² to 100 m²); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are presented throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn on the robustness of the proposed system.
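The band-rating idea behind such a benchmarking system can be sketched as a simple threshold lookup. The band thresholds below are illustrative assumptions, not the values used in the paper:

```python
def water_band(litres_per_person_per_day, thresholds=(80, 100, 120, 140, 160)):
    """Map daily per-capita water consumption to a band rating from
    A (best) to F (worst). Threshold values (litres/person/day) are
    illustrative, not those of the paper's benchmarking system."""
    for band, limit in zip("ABCDE", thresholds):
        if litres_per_person_per_day <= limit:
            return band
    return "F"
```

A household could then improve its band either through behavioural change or through technology such as RWH or GW reuse, since the rating depends only on the resulting consumption.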

  14. Single pin BWR benchmark problem for coupled Monte Carlo - Thermal hydraulics analysis

    International Nuclear Information System (INIS)

    Ivanov, A.; Sanchez, V.; Hoogenboom, J. E.

    2012-01-01

    As part of the European NURISP research project, a single pin BWR benchmark problem was defined. The aim of this initiative is to test the coupling strategies between Monte Carlo and subchannel codes developed by different project participants. In this paper the results obtained by the Delft Univ. of Technology and Karlsruhe Inst. of Technology will be presented. The benchmark problem was simulated with the following coupled codes: TRIPOLI-SUBCHANFLOW, MCNP-FLICA, MCNP-SUBCHANFLOW, and KENO-SUBCHANFLOW. (authors)

  15. Single pin BWR benchmark problem for coupled Monte Carlo - Thermal hydraulics analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, A.; Sanchez, V. [Karlsruhe Inst. of Technology, Inst. for Neutron Physics and Reactor Technology, Herman-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Hoogenboom, J. E. [Delft Univ. of Technology, Faculty of Applied Sciences, Mekelweg 15, 2629 JB Delft (Netherlands)

    2012-07-01

    As part of the European NURISP research project, a single pin BWR benchmark problem was defined. The aim of this initiative is to test the coupling strategies between Monte Carlo and subchannel codes developed by different project participants. In this paper the results obtained by the Delft Univ. of Technology and Karlsruhe Inst. of Technology will be presented. The benchmark problem was simulated with the following coupled codes: TRIPOLI-SUBCHANFLOW, MCNP-FLICA, MCNP-SUBCHANFLOW, and KENO-SUBCHANFLOW. (authors)

  16. Concrete benchmark experiment: ex-vessel LWR surveillance dosimetry

    International Nuclear Information System (INIS)

    Ait Abderrahim, H.; D'Hondt, P.; Oeyen, J.; Risch, P.; Bioux, P.

    1993-09-01

    The analysis of DOEL-1 in-vessel and ex-vessel neutron dosimetry, using the DOT 3.5 Sn code coupled with the VITAMIN-C cross-section library, showed the same C/E values for different detectors at the surveillance capsule and ex-vessel cavity positions. These results seem to contradict those obtained in several benchmark experiments (PCA, PSF, VENUS...) using the same computational tools, where a strong decreasing radial trend of the C/E was observed, partly explained by the overestimation of iron inelastic scattering. The flat trend seen in DOEL-1 could be explained by compensating errors in the calculation, such as the backscattering due to the concrete walls outside the cavity. The 'Concrete Benchmark' experiment was designed to judge the ability of these calculational methods to treat backscattering. This paper describes the 'Concrete Benchmark' experiment, the measured and computed neutron dosimetry results, and their comparison. This preliminary analysis seems to indicate an overestimation of the backscattering effect in the calculations. (authors). 5 figs., 1 tab., 7 refs

  17. Benchmarking ENDF/B-VII.0

    International Nuclear Information System (INIS)

    Marck, Steven C. van der

    2006-01-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal-spectrum ones (LEU-COMP-THERM) to mixed uranium-plutonium, metallic fuel, fast-spectrum ones (MIX-MET-FAST). For fusion shielding, many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), the Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for ⁶Li, ⁷Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D₂O, H₂O, concrete, polyethylene and teflon). For testing delayed neutron data, more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA, Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium and compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257

  18. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the
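As a rough illustration of the comparison such tools perform, the sketch below ranks a hypothetical building's energy use intensity (EUI) against a peer group; the peer values and units are invented for illustration, not taken from the Cal-Arch database.

```python
# Percentile-based benchmarking of a building's energy use intensity (EUI)
# against a peer group -- the basic operation behind tools like Cal-Arch.
# The peer EUI values below are hypothetical, for illustration only.

def eui_percentile(building_eui, peer_euis):
    """Return the share of peers (in %) whose EUI is below the given building's."""
    below = sum(1 for e in peer_euis if e < building_eui)
    return 100.0 * below / len(peer_euis)

peers = [42.0, 55.3, 61.8, 48.9, 70.2, 58.4, 66.1, 51.7]  # kBtu/sqft/yr
rank = eui_percentile(60.0, peers)
print(f"Building uses more energy than {rank:.1f}% of peers")
```

A real tool would additionally stratify the peer group by building type, climate zone and end use, as the abstract notes.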

  19. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    The objective of this study was to identify usage of foodservice performance measures, important activities in foodservice benchmarking, and benchmarking attitudes, beliefs, and practices by foodservice directors...

  20. Integral Full Core Multi-Physics PWR Benchmark with Measured Data

    Energy Technology Data Exchange (ETDEWEB)

    Forget, Benoit; Smith, Kord; Kumar, Shikhar; Rathbun, Miriam; Liang, Jingang

    2018-04-11

    In recent years, the importance of modeling and simulation has been highlighted extensively in the DOE research portfolio, with concrete examples in nuclear engineering in the CASL and NEAMS programs. These research efforts and similar efforts worldwide aim at the development of high-fidelity multi-physics analysis tools for the simulation of current and next-generation nuclear power reactors. Like all analysis tools, verification and validation are essential to guarantee proper functioning of the software and methods employed. The current approach relies mainly on the validation of single-physics phenomena (e.g. critical experiments, flow loops, etc.), and there is a lack of relevant multiphysics benchmark measurements that are necessary to validate the high-fidelity methods being developed today. This work introduces a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading and re-loading patterns. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from 58 instrumented assemblies. The benchmark description is now available online and has been used by many groups. However, much work remains to be done on the quantification of uncertainties and modeling sensitivities. This work aims to address these deficiencies and make this benchmark a true non-proprietary international benchmark for the validation of high-fidelity tools. This report details the BEAVRS uncertainty quantification for the first two cycles of operation and serves as the final report of the project.

  1. Benchmarking as a Global Strategy for Improving Instruction in Higher Education.

    Science.gov (United States)

    Clark, Karen L.

    This paper explores the concept of benchmarking in institutional research, a comparative analysis methodology designed to help colleges and universities increase their educational quality and delivery systems. The primary purpose of benchmarking is to compare an institution to its competitors in order to improve the product (in this case…

  2. Benchmarking routine psychological services: a discussion of challenges and methods.

    Science.gov (United States)

    Delgadillo, Jaime; McMillan, Dean; Leach, Chris; Lucock, Mike; Gilbody, Simon; Wood, Nick

    2014-01-01

    Policy developments in recent years have led to important changes in the level of access to evidence-based psychological treatments. Several methods have been used to investigate the effectiveness of these treatments in routine care, with different approaches to outcome definition and data analysis. This report presents a review of challenges and methods for the evaluation of evidence-based treatments delivered in routine mental healthcare. This is followed by a case example of a benchmarking method applied in primary care. High, average and poor performance benchmarks were calculated through a meta-analysis of published data from services working under the Improving Access to Psychological Therapies (IAPT) Programme in England. Pre-post treatment effect sizes (ES) and confidence intervals were estimated to illustrate a benchmarking method enabling services to evaluate routine clinical outcomes. High, average and poor performance ES for routine IAPT services were estimated to be 0.91, 0.73 and 0.46 for depression (using PHQ-9) and 1.02, 0.78 and 0.52 for anxiety (using GAD-7). Data from one specific IAPT service exemplify how to evaluate and contextualize routine clinical performance against these benchmarks. The main contribution of this report is to summarize key recommendations for the selection of an adequate set of psychometric measures, the operational definition of outcomes, and the statistical evaluation of clinical performance. A benchmarking method is also presented, which may enable a robust evaluation of clinical performance against national benchmarks. Some limitations concerned significant heterogeneity among data sources, and wide variations in ES and data completeness.
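The benchmarking comparison described above can be sketched as follows. The PHQ-9 scores are invented; the uncontrolled pre-post effect size is computed here as the mean change divided by the pre-treatment standard deviation (one common convention, not necessarily the paper's exact estimator), and the published 0.91/0.46 depression cutoffs are used loosely to band the result.

```python
import statistics

# Pre-post effect size for a routine service, compared against the
# high/average/poor PHQ-9 benchmarks quoted above (0.91 / 0.73 / 0.46).
# Scores are hypothetical; ES = mean change / pre-treatment SD is one
# common convention for uncontrolled pre-post designs.

def prepost_effect_size(pre, post):
    return (statistics.mean(pre) - statistics.mean(post)) / statistics.stdev(pre)

pre  = [18, 15, 21, 12, 17, 19, 14, 16]   # PHQ-9 at intake
post = [12, 13, 16, 10, 14, 15, 11, 13]   # PHQ-9 at end of treatment

es = prepost_effect_size(pre, post)
band = "high" if es >= 0.91 else "average" if es >= 0.46 else "poor"
print(f"ES = {es:.2f} -> {band} relative to the depression benchmarks")
```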

  3. The lead cooled fast reactor benchmark Brest-300: analysis with sensitivity method

    International Nuclear Information System (INIS)

    Smirnov, V.; Orlov, V.; Mourogov, A.; Lecarpentier, D.; Ivanova, T.

    2005-01-01

    The lead-cooled fast neutron reactor is one of the most interesting candidates for the development of atomic energy. BREST-300 is a 300 MWe lead-cooled fast reactor developed by NIKIET (Russia) with a deterministic safety approach which aims to exclude reactivity margins greater than the delayed neutron fraction. The development of innovative reactors (lead coolant, nitride fuel...) and fuel cycles with new constraints such as cycle closure or actinide burning requires new technologies and new nuclear data. In this connection, the tools and neutron data used for the calculational analysis of reactor characteristics require thorough validation. NIKIET developed a reactor benchmark suited to design-type calculational tools (including neutron data). In the frame of technical exchanges between NIKIET and EDF (France), results of this benchmark calculation concerning the principal parameters of fuel evolution and safety have been inter-compared, in order to estimate the uncertainties and validate the codes for calculations of this new kind of reactor. Different codes and cross-section data have been used, and sensitivity studies have been performed to understand and quantify the uncertainty sources. The comparison of results shows that the difference in the k-eff value between the ERANOS code with the ERALIB1 library and the reference is of the same order of magnitude as the delayed neutron fraction. On the other hand, the discrepancy is more than twice as big if the JEF2.2 library is used with ERANOS. Analysis of discrepancies in the calculation results reveals that the main effect comes from differences in nuclear data, namely the U-238 and Pu-239 fission and capture cross sections and the lead inelastic cross sections.

  4. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, taking as a starting point four different applications of benchmarking. The regulation of utility companies is then treated, after which...

  5. Regional Competitive Intelligence: Benchmarking and Policymaking

    OpenAIRE

    Huggins , Robert

    2010-01-01

    Benchmarking exercises have become increasingly popular within the sphere of regional policymaking in recent years. The aim of this paper is to analyse the concept of regional benchmarking and its links with regional policymaking processes. It develops a typology of regional benchmarking exercises and regional benchmarkers, and critically reviews the literature, both academic and policy oriented. It is argued that critics who suggest regional benchmarking is a flawed concept and technique fai...

  6. Statistical process control as a tool for controlling operating room performance: retrospective analysis and benchmarking.

    Science.gov (United States)

    Chen, Tsung-Tai; Chang, Yun-Jau; Ku, Shei-Ling; Chung, Kuo-Piao

    2010-10-01

    There is much research using statistical process control (SPC) to monitor surgical performance, including comparisons among groups to detect small process shifts, but few of these studies have included a stabilization process. This study aimed to analyse the performance of surgeons in the operating room (OR) and to set a benchmark by SPC after a stabilized process. The OR profiles of 499 patients who underwent laparoscopic cholecystectomy performed by 16 surgeons at a tertiary hospital in Taiwan during 2005 and 2006 were recorded. SPC was applied to analyse operative and non-operative times using the following five steps: first, the times were divided into two segments; second, they were normalized; third, they were evaluated as individual processes; fourth, the ARL(0) was calculated; and fifth, the different groups (surgeons) were compared. Outliers were excluded to ensure stability for each group and to facilitate inter-group comparison. The results showed that in the stabilized process, only one surgeon exhibited a significantly shorter total process time (including operative time and non-operative time). In this study, we use five steps to demonstrate how to control surgical and non-surgical time in phase I. There are some measures that can be taken to prevent skew and instability in the process. Also, using SPC, one surgeon can be shown to be a real benchmark. © 2010 Blackwell Publishing Ltd.
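A minimal sketch of the phase-I stabilization idea, assuming an individuals (I) control chart with moving-range-based limits; the paper's exact normalization and ARL(0) calculations are not reproduced, and the times below are hypothetical minutes.

```python
# Individuals (I) control chart for operative times: points outside the
# moving-range-based 3-sigma limits are flagged and excluded before
# groups of surgeons are compared.  Times are hypothetical minutes.

def i_chart_limits(xs):
    """Control limits for an individuals chart using the average moving range."""
    mr = [abs(b - a) for a, b in zip(xs, xs[1:])]
    mr_bar = sum(mr) / len(mr)
    center = sum(xs) / len(xs)
    half_width = 2.66 * mr_bar      # 3 sigma, with sigma estimated as MRbar/1.128
    return center - half_width, center, center + half_width

times = [62, 58, 65, 61, 59, 110, 60, 63, 57, 64]
lcl, cl, ucl = i_chart_limits(times)
outliers = [t for t in times if not lcl <= t <= ucl]
print(f"CL={cl:.1f}, limits=({lcl:.1f}, {ucl:.1f}), outliers={outliers}")
```

In a phase-I analysis this exclusion-and-recompute step would be iterated until the process is stable, and only then would the groups be compared.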

  7. Preliminary evaluation of factors associated with premature trial closure and feasibility of accrual benchmarks in phase III oncology trials.

    Science.gov (United States)

    Schroen, Anneke T; Petroni, Gina R; Wang, Hongkun; Gray, Robert; Wang, Xiaofei F; Cronin, Walter; Sargent, Daniel J; Benedetti, Jacqueline; Wickerham, Donald L; Djulbegovic, Benjamin; Slingluff, Craig L

    2010-08-01

    A major challenge for randomized phase III oncology trials is the frequent low rates of patient enrollment, resulting in high rates of premature closure due to insufficient accrual. We conducted a pilot study to determine the extent of trial closure due to poor accrual, feasibility of identifying trial factors associated with sufficient accrual, impact of redesign strategies on trial accrual, and accrual benchmarks designating high failure risk in the clinical trials cooperative group (CTCG) setting. A subset of phase III trials opened by five CTCGs between August 1991 and March 2004 was evaluated. Design elements, experimental agents, redesign strategies, and pretrial accrual assessment supporting accrual predictions were abstracted from CTCG documents. Percent actual/predicted accrual rate averaged per month was calculated. Trials were categorized as having sufficient or insufficient accrual based on reason for trial termination. Analyses included univariate and bivariate summaries to identify potential trial factors associated with accrual sufficiency. Among 40 trials from one CTCG, 21 (52.5%) trials closed due to insufficient accrual. In 82 trials from five CTCGs, therapeutic trials accrued sufficiently more often than nontherapeutic trials (59% vs 27%, p = 0.05). Trials including pretrial accrual assessment more often achieved sufficient accrual than those without (67% vs 47%, p = 0.08). Fewer exclusion criteria, shorter consent forms, other CTCG participation, and trial design simplicity were not associated with achieving sufficient accrual. Trials accruing at a rate much lower than predicted (accrual rate) were consistently closed due to insufficient accrual. This trial subset under-represents certain experimental modalities. Data sources do not allow accounting for all factors potentially related to accrual success. Trial closure due to insufficient accrual is common. Certain trial design factors appear associated with attaining sufficient accrual. 
Defining

  8. Evaluation of piping fracture analysis method by benchmark study, 1

    International Nuclear Information System (INIS)

    Takahashi, Yukio; Kashima, Koichi; Kuwabara, Kazuo

    1987-01-01

    The importance of strength evaluation methods for cracked piping is growing with the progress of the rationalization of nuclear piping systems based on the leak-before-break concept. As an analytical tool, the finite element method is principally used. To obtain reliable solutions from finite element programs, it is important to grasp the influences of various factors on the solutions. In this study, a benchmark analysis is carried out for a stainless steel pipe with a circumferential through-wall crack subjected to four-point bending loading. Eight solutions obtained using five finite element programs are compared with each other. Good agreement is obtained between the solutions on the deformation characteristics as well as the fracture mechanics parameters. It is found through this study that the influence of the difference in solution technique is generally small. (author)

  9. Benchmarking energy performance of residential buildings using two-stage multifactor data envelopment analysis with degree-day based simple-normalization approach

    International Nuclear Information System (INIS)

    Wang, Endong; Shen, Zhigang; Alp, Neslihan; Barry, Nate

    2015-01-01

    Highlights:
    • Two-stage DEA model is developed to benchmark building energy efficiency.
    • Degree-day based simple normalization is used to neutralize the climatic noise.
    • Results of a real case study validated the benefits of this new model.
    Abstract: Being able to identify detailed meta factors of energy performance is essential for creating effective residential energy-retrofitting strategies. Compared to other benchmarking methods, nonparametric multifactor DEA (data envelopment analysis) is capable of discriminating scale factors from management factors to reveal more details to better guide retrofitting practices. A two-stage DEA energy benchmarking method is proposed in this paper. This method includes (1) a first-stage meta DEA which integrates the common degree-day metrics for neutralizing noise energy effects of exogenous climatic variables; and (2) a second-stage Tobit regression for further detailed efficiency analysis. A case study involving 3-year longitudinal panel data of 189 residential buildings indicated the proposed method has advantages over existing methods in terms of its efficiency in data processing and results interpretation. The results of the case study also demonstrated high consistency with existing linear regression based DEA.
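The degree-day based simple normalization named in the highlights can be sketched as a plain scaling of weather-sensitive consumption; the function name and figures below are hypothetical, not the study's data.

```python
# Degree-day based simple normalization: scale heating energy by the
# ratio of a long-term-average degree-day total to the observed one, so
# that a building measured in a colder-than-average year is not unfairly
# penalized in the first DEA stage.  Figures are hypothetical.

def normalize_by_degree_days(heating_kwh, observed_hdd, average_hdd):
    """Scale heating energy to what it would have been in an average year."""
    return heating_kwh * (average_hdd / observed_hdd)

annual_heating_kwh = 12000.0
observed_hdd, average_hdd = 3300.0, 3000.0   # colder-than-average year
print(normalize_by_degree_days(annual_heating_kwh, observed_hdd, average_hdd))
```

The normalized values would then feed the DEA stage, with the Tobit regression applied afterwards to the resulting efficiency scores.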

  10. Benchmarking Using Basic DBMS Operations

    Science.gov (United States)

    Crolotte, Alain; Ghazal, Ahmad

    The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used this benchmark to show the superiority and competitive edge of their products. However, over time, TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and compare two systems. Finally, we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
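A toy version of the idea, not the actual XMarq suite: time a scan, an aggregation and a join against a small synthetic SQLite table. The schema and row counts are placeholders, not the TPC-H data model.

```python
import sqlite3
import time

# Time basic operations (scan, aggregation, join) on a synthetic dataset,
# in the spirit of a basic-operations benchmark.  Schema is hypothetical.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT)")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, cust INTEGER, total REAL)")
con.executemany("INSERT INTO customers VALUES (?, ?)",
                [(i, "EMEA" if i % 2 else "APAC") for i in range(1000)])
con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(i, i % 1000, float(i % 97)) for i in range(50000)])

queries = {
    "scan":        "SELECT count(*) FROM orders WHERE total > 50",
    "aggregation": "SELECT cust, sum(total) FROM orders GROUP BY cust",
    "join":        "SELECT c.region, sum(o.total) FROM orders o "
                   "JOIN customers c ON o.cust = c.id GROUP BY c.region",
}
for name, sql in queries.items():
    t0 = time.perf_counter()
    rows = con.execute(sql).fetchall()
    print(f"{name:12s} {len(rows):5d} rows  {time.perf_counter() - t0:.4f}s")
```

A real framework would run each query several times, discard warm-up runs, and report normalized metrics so two systems can be compared.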

  11. Energy use pattern and benchmarking of selected greenhouses in Iran using data envelopment analysis

    International Nuclear Information System (INIS)

    Omid, M.; Ghojabeige, F.; Delshad, M.; Ahmadi, H.

    2011-01-01

    This paper studies the degree of technical efficiency (TE) and scale efficiency (SE) of selected greenhouses in Iran and describes the process of benchmarking energy inputs and cucumber yield. Inquiries on 18 greenhouses were conducted in face-to-face interviews during the September-December 2008 period. A non-parametric data envelopment analysis (DEA) technique was applied to investigate the degree of TE and SE of producers, and to evaluate and rank the productivity performance of cucumber producers based on eight energy inputs: human labour, diesel, machinery, fertilizers, chemicals, water for irrigation, seeds and electricity, and the output yield of cucumber. DEA optimizes the performance measure of each greenhouse or decision making unit (DMU). Specifically, DEA was used to compare the performance of each DMU in regions of increasing, constant or decreasing returns to scale in multiple-input situations. The CRS model helped us to decompose the overall TE into pure TE and SE components, thereby allowing investigation of the scale effects. The results of the analysis showed that DEA is an effective tool for analyzing and benchmarking the productive efficiency of greenhouses. The VRS analysis showed that only 12 out of the 18 DMUs were efficient. The TE of the inefficient DMUs, on average, was calculated as 91.5%. This implies that the same level of output could be produced with 91.5% of the resources if these units were performing on the frontier. Another interpretation of this result is that 8.5% of overall resources could be saved by raising the performance of these DMUs to the highest level.
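A minimal sketch of the input-oriented CCR (constant returns to scale) envelopment model that underlies such a DEA study. The two inputs and one output per DMU below are hypothetical, not the study's eight-input greenhouse data.

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA: for DMU o, minimize theta subject to
#   X @ lam <= theta * x_o   (a peer combination uses no more input)
#   Y @ lam >= y_o           (while producing at least as much output)
#   lam >= 0
# theta = 1 means the DMU is on the efficient frontier.

def ccr_efficiency(X, Y, o):
    m, n = X.shape                              # m inputs, n DMUs
    s = Y.shape[0]                              # s outputs
    c = np.r_[1.0, np.zeros(n)]                 # minimize theta
    A_in  = np.c_[-X[:, [o]], X]                # X lam - theta x_o <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y]         # -Y lam <= -y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

X = np.array([[20., 30., 40., 20., 10.],        # input 1, e.g. labour
              [30., 10., 40., 50., 20.]])       # input 2, e.g. diesel
Y = np.array([[1., 1., 1., 1., 1.]])            # normalized yield

for j in range(X.shape[1]):
    print(f"DMU {j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")
```

Repeating the computation with a convexity constraint on the lambdas gives the VRS (pure TE) score, and SE is the ratio of the CRS score to the VRS score.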

  13. High Energy Physics (HEP) benchmark program

    International Nuclear Information System (INIS)

    Yasu, Yoshiji; Ichii, Shingo; Yashiro, Shigeo; Hirayama, Hideo; Kokufuda, Akihiro; Suzuki, Eishin.

    1993-01-01

    High Energy Physics (HEP) benchmark programs are indispensable tools for selecting a suitable computer for an HEP application system. Industry-standard benchmark programs cannot be used for this kind of particular selection. The CERN and SSC benchmark suites are well-known HEP benchmark programs for this purpose. The CERN suite includes event reconstruction and event generator programs, while the SSC one includes event generators. In this paper, we found that the results from these two suites are not consistent. Moreover, the result from the industry benchmark does not agree with either of these two. Besides, we describe a comparison of benchmark results using the EGS4 Monte Carlo simulation program with ones from the two HEP benchmark suites. We found that the result from EGS4 is not consistent with the other two. The industry-standard SPECmark values on various computer systems are not consistent with the EGS4 results either. Because of these inconsistencies, we point out the necessity of a standardization of HEP benchmark suites. Also, an EGS4 benchmark suite should be developed for users of applications in areas such as medical science, nuclear power plants, nuclear physics and high energy physics. (author)

  14. Benchmarking Tool Kit.

    Science.gov (United States)

    Canadian Health Libraries Association.

    Nine Canadian health libraries participated in a pilot test of the Benchmarking Tool Kit between January and April, 1998. Although the Tool Kit was designed specifically for health libraries, the content and approach are useful to other types of libraries as well. Used to its full potential, benchmarking can provide a common measuring stick to…

  15. Benchmark tests and spin adaptation for the particle-particle random phase approximation

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yang; Steinmann, Stephan N.; Peng, Degao [Department of Chemistry, Duke University, Durham, North Carolina 27708 (United States); Aggelen, Helen van, E-mail: Helen.VanAggelen@UGent.be [Department of Chemistry, Duke University, Durham, North Carolina 27708 (United States); Department of Inorganic and Physical Chemistry, Ghent University, 9000 Ghent (Belgium); Yang, Weitao, E-mail: Weitao.Yang@duke.edu [Department of Chemistry and Department of Physics, Duke University, Durham, North Carolina 27708 (United States)

    2013-11-07

    The particle-particle random phase approximation (pp-RPA) provides an approximation to the correlation energy in density functional theory via the adiabatic connection [H. van Aggelen, Y. Yang, and W. Yang, Phys. Rev. A 88, 030501 (2013)]. It has virtually neither delocalization error nor static correlation error for single-bond systems. However, with its formal O(N^6) scaling, the pp-RPA is computationally expensive. In this paper, we implement a spin-separated and spin-adapted pp-RPA algorithm, which reduces the computational cost by a substantial factor. We then perform benchmark tests on the G2/97 enthalpies of formation database, the DBH24 reaction barrier database, and four test sets for non-bonded interactions (HB6/04, CT7/04, DI6/04, and WI9/04). For the G2/97 database, the pp-RPA gives a significantly smaller mean absolute error (8.3 kcal/mol) than the direct particle-hole RPA (ph-RPA) (22.7 kcal/mol). Furthermore, the error in the pp-RPA is nearly constant with the number of atoms in a molecule, while the error in the ph-RPA increases. For chemical reactions involving typical organic closed-shell molecules, pp- and ph-RPA both give accurate reaction energies. Similarly, both RPAs perform well for reaction barriers and nonbonded interactions. These results suggest that the pp-RPA gives reliable energies in chemical applications. The adiabatic connection formalism based on pairing matrix fluctuation is therefore expected to lead to widely applicable and accurate density functionals.

  16. A GFR benchmark comparison of transient analysis codes based on the ETDR concept

    International Nuclear Information System (INIS)

    Bubelis, E.; Coddington, P.; Castelliti, D.; Dor, I.; Fouillet, C.; Geus, E. de; Marshall, T.D.; Van Rooijen, W.; Schikorr, M.; Stainsby, R.

    2007-01-01

    A GFR (Gas-cooled Fast Reactor) transient benchmark study was performed to investigate the ability of different code systems to calculate the transition in the core heat removal from the main circuit forced flow to natural circulation cooling using the Decay Heat Removal (DHR) system. This benchmark is based on a main blower failure in the Experimental Technology Demonstration Reactor (ETDR) with reactor scram. The codes taking part in the benchmark are: RELAP5, TRAC/AAA, CATHARE, SIM-ADS, MANTA and SPECTRA. For comparison purposes the benchmark was divided into several stages: the initial steady-state solution, the main blower flow run-down, the opening of the DHR loop and the transition to natural circulation, and finally the 'quasi' steady heat removal from the core by the DHR system. The results submitted by the participants showed that all the codes gave consistent results for all four stages of the benchmark. In the steady state the calculations revealed some differences in the clad and fuel temperatures, the core and main loop pressure drops, and the total helium mass inventory. Also, some disagreements were observed in the helium and water flow rates in the DHR loop during the final natural circulation stage. Good agreement was observed for the total main blower flow rate and the helium temperature rise in the core, as well as for the helium inlet temperature into the core. In order to understand the reason for the differences in the initial 'blind' calculations, a second round of calculations was performed using a more precise set of boundary conditions.

  17. OECD/NEA International Benchmark exercises: Validation of CFD codes applied to the nuclear industry

    Energy Technology Data Exchange (ETDEWEB)

    Pena-Monferrer, C.; Miquel veyrat, A.; Munoz-Cobo, J. L.; Chiva Vicent, S.

    2016-08-01

    In recent years, due among other factors to the slowing down of the nuclear industry, investment in the development and validation of CFD codes applied specifically to the problems of the nuclear industry has been seriously hampered. Thus the International Benchmark Exercises (IBE) sponsored by the OECD/NEA have been fundamental for analyzing the use of CFD codes in the nuclear industry, because although these codes are mature in many fields, doubts still exist about them in critical aspects of thermohydraulic calculations, even in single-phase scenarios. The Polytechnic University of Valencia (UPV) and the Universitat Jaume I (UJI), sponsored by the Nuclear Safety Council (CSN), have actively participated in all benchmarks proposed by the NEA, as well as in the expert meetings. In this paper, a summary of the participation in the various IBEs is given, describing each benchmark itself, the CFD model created for it, and the main conclusions. (Author)

  18. Results of the reliability benchmark exercise and the future CEC-JRC program

    International Nuclear Information System (INIS)

    Amendola, A.

    1985-01-01

    As a contribution towards identifying problem areas and for assessing probabilistic safety assessment (PSA) methods and procedures of analysis, JRC has organized a wide-ranging Benchmark Exercise on systems reliability. This has been executed by ten different teams involving seventeen organizations from nine European countries. The exercise has been based on a real case (the Auxiliary Feedwater System of the EDF Paluel PWR 1300 MWe Unit), starting from analysis of technical specifications, logical and topological layout, and operational procedures. The terms of reference included both qualitative and quantitative analyses. The subdivision of the exercise into different phases and the rules adopted allowed assessment of the different components of the spread of the overall results. It appeared that modelling uncertainties may overwhelm data uncertainties, and major efforts must be spent in order to improve the consistency and completeness of the qualitative analysis. After successful completion of the first exercise, the CEC-JRC program has planned separate exercises on the analysis of dependent failures and human factors before approaching the evaluation of a complete accident sequence.

  19. Benchmark calculations of power distribution within assemblies

    International Nuclear Information System (INIS)

    Cavarec, C.; Perron, J.F.; Verwaerde, D.; West, J.P.

    1994-09-01

    The main objective of this Benchmark is to compare different techniques for fine flux prediction based upon coarse mesh diffusion or transport calculations. We proposed five 'core' configurations including different assembly types (17 x 17 pins; 'uranium', 'absorber' or 'MOX' assemblies), with different boundary conditions. The specification required results in terms of reactivity, pin-by-pin fluxes and production rate distributions. The proposal for these Benchmark calculations was made by J.C. LEFEBVRE, J. MONDOT and J.P. WEST, and the specification (with nuclear data, assembly types, core configurations for 2D geometry and results presentation) was distributed to correspondents of the OECD Nuclear Energy Agency. 11 countries and 19 companies answered the exercise proposed by this Benchmark. Heterogeneous calculations and homogeneous calculations were made. Various methods were used to produce the results: diffusion (finite differences, nodal...) and transport (Pij, Sn, Monte Carlo). This report presents an analysis and intercomparison of all the results received.

  20. A Benchmark Study of a Seismic Analysis Program for a Single Column of a HTGR Core

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Ji Ho [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    A seismic analysis program, SAPCOR (Seismic Analysis of Prismatic HTGR Core), was developed at the Korea Atomic Energy Research Institute. The program is used for the evaluation of the deformed shapes of, and forces on, the graphite blocks, which are modelled as point-mass rigid bodies with Kelvin-Voigt impact models. In previous studies, the program was verified using theoretical solutions and benchmark problems. To validate the program for more complicated problems, a free-vibration analysis of a single column of a HTGR core was selected, and the calculation results of SAPCOR and a commercial FEM code, Abaqus, were compared in this study.

  1. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 4C. Paks NPP: Analysis and testing. Working material

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-07-01

    The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated following a request from representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States on issues related to seismic safety. The Consultants' Meeting which followed resulted in producing a working document for the CRP. It was decided that a benchmark study would be the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested on a full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for WWER-1000 and Paks for WWER-440/213 nuclear power plants. This volume of the Working material involves a comparative analysis of the seismic analysis results for the reactor building for soft soil conditions, the derivation of design response spectra for components and systems, and upper-range design response spectra for soft soil site conditions at Paks NPP.

  2. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 4C. Paks NPP: Analysis and testing. Working material

    International Nuclear Information System (INIS)

    1996-01-01

    The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated following a request from representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States on issues related to seismic safety. The Consultants' Meeting which followed resulted in producing a working document for the CRP. It was decided that a benchmark study would be the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested on a full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for WWER-1000 and Paks for WWER-440/213 nuclear power plants. This volume of the Working material involves a comparative analysis of the seismic analysis results for the reactor building for soft soil conditions, the derivation of design response spectra for components and systems, and upper-range design response spectra for soft soil site conditions at Paks NPP.

  3. Criticality Benchmark Analysis of Water-Reflected Uranium Oxyfluoride Slabs

    International Nuclear Information System (INIS)

    Marshall, Margaret A.; Bess, John D.

    2009-01-01

    A series of twelve experiments was conducted in the mid-1950s at the Oak Ridge National Laboratory Critical Experiments Facility to determine the critical conditions of a semi-infinite water-reflected slab of aqueous uranium oxyfluoride (UO2F2). A different slab thickness was used for each experiment. Results from the twelve experiments recorded in the laboratory notebook were published in Reference 1. Seven of the twelve experiments were determined to be acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. This evaluation will be available to handbook users not only for the validation of computer codes and integral cross-section data, but also for the reevaluation of experimental data used in the ANSI/ANS-8.1 standard. The evaluation is important as part of the technical basis of the subcritical slab limits in ANSI/ANS-8.1. The original publication of the experimental results was used for the determination of the bias and bias uncertainties for subcritical slab limits, as documented in Hugh Clark's paper 'Subcritical Limits for Uranium-235 Systems'.

  4. Analysis of neutronics benchmarks for the utilization of mixed oxide fuel in light water reactor using DRAGON code

    International Nuclear Information System (INIS)

    Nithyadevi, Rajan; Thilagam, L.; Karthikeyan, R.; Pal, Usha

    2016-01-01

    Highlights: • Use of the advanced computational code DRAGON-5 with the advanced self-shielding model USS. • Testing the capability of the DRAGON-5 code for the analysis of light water reactor systems. • A wide variety of fuels (LEU, MOX and spent fuel) have been analyzed. • Parameters such as k∞, one-, few- and multi-group macroscopic cross-sections and fluxes were calculated. • Suitability of the deterministic methodology employed in the DRAGON-5 code is demonstrated for LWRs. - Abstract: Advances in reactor physics have led to the development of new computational technologies and upgraded cross-section libraries so as to produce an accurate approximation to the true solution of a problem. It is thus necessary to revisit the benchmark problems with advanced computational code systems and upgraded cross-section libraries to see how far they agree with the earlier reported values. The present study is one such analysis with the DRAGON code, employing advanced self-shielding models like USS and the 172-energy-group 'JEFF3.1' cross-section library in DRAGLIB format. Although the DRAGON code has already demonstrated its capability for heavy water moderated systems, it is now tested for light water reactor (LWR) and fast reactor systems. As a part of the validation of DRAGON for LWRs, a VVER computational benchmark titled "Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel-Volume 3", submitted by the Russian Federation, has been taken up. Presently, pincell and assembly calculations are carried out considering variations in fuel temperature (both fresh and spent fuel), moderator temperature and boron content in the moderator. Various parameters such as the infinite neutron multiplication factor (k∞), one-group integrated fluxes, few-group homogenized cross-sections (absorption, nu-fission) and reaction rates (absorption, nu-fission) of individual isotopic nuclides are calculated for different reactor states. Comparisons of results are made with the reported Monte Carlo

  5. Benchmark exercises on PWR level-1 PSA (step 3). Analyses of accident sequence and conclusions

    International Nuclear Information System (INIS)

    Niwa, Yuji; Takahashi, Hideaki.

    1996-01-01

    The results of level-1 PSA fluctuate due to the assumptions based on several engineering judgements made at various stages of the PSA analysis. To investigate the uncertainties due to these assumptions, three kinds of standard problems, which we call benchmark exercises, have been set. In this report, sensitivity studies (benchmark exercises) of the sequence analyses are treated and conclusions are drawn. The treatment of inter-system dependency can generate uncertainty in PSA. In addition, as a conclusion of the PSA benchmark exercise, several findings from the sequence analysis, together with previous benchmark analyses in earlier INSS Journals, are discussed. (author)

  6. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement of the original benchmark book, which was first published in February, 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December, 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February, 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.

  7. Anomaly detection in OECD Benchmark data using co-variance methods

    International Nuclear Information System (INIS)

    Srinivasan, G.S.; Krinizs, K.; Por, G.

    1993-02-01

    OECD Benchmark data distributed for the SMORN-VI Specialists Meeting on Reactor Noise were investigated for anomaly detection in the artificially generated reactor noise benchmark analysis. It was observed that statistical features extracted from the covariance matrix of frequency components are very sensitive in terms of the anomaly detection level, making it possible to create well-defined alarm levels. (R.P.) 5 refs.; 23 figs.; 1 tab
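
    A minimal sketch of the covariance idea described above: statistical features are extracted from the covariance matrix of frequency components and compared against a baseline to flag anomalies. The segment length, band count, alarm threshold, and the synthetic "anomaly" signal are all assumptions for illustration, not the SMORN benchmark data.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_cov(signal, seg_len=256, n_bands=8):
    """Covariance matrix of low-frequency band powers across signal segments."""
    n_seg = len(signal) // seg_len
    segs = signal[: n_seg * seg_len].reshape(n_seg, seg_len)
    spec = np.abs(np.fft.rfft(segs, axis=1)) ** 2   # per-segment power spectra
    return np.cov(spec[:, 1 : n_bands + 1], rowvar=False)  # DC bin dropped

# Baseline reactor noise modelled as white noise; the "anomaly" adds a
# narrow-band component, mimicking a changed resonance.
t = np.arange(8192)
normal = rng.standard_normal(t.size)
anomalous = normal + 0.8 * np.sin(2 * np.pi * 3 / 256 * t)

c_norm = spectral_cov(normal)
c_anom = spectral_cov(anomalous)

# Scalar anomaly score: relative change of the covariance feature matrix.
score = np.linalg.norm(c_anom - c_norm) / np.linalg.norm(c_norm)
threshold = 1.0          # hypothetical alarm level, tuned on baseline records
print(score > threshold)
```

    In practice the alarm level would be calibrated from many baseline records rather than fixed at a round number as here.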

  8. Concrete benchmark experiment: ex-vessel LWR surveillance dosimetry; Experience 'Benchmark beton' pour la dosimetrie hors cuve dans les reacteurs a eau legere

    Energy Technology Data Exchange (ETDEWEB)

    Ait Abderrahim, H.; D'Hondt, P.; Oeyen, J.; Risch, P.; Bioux, P.

    1993-09-01

    The analysis of DOEL-1 in-vessel and ex-vessel neutron dosimetry, using the DOT 3.5 Sn code coupled with the VITAMIN-C cross-section library, showed the same C/E values for different detectors at the surveillance capsule and the ex-vessel cavity positions. These results seem to be in contradiction with those obtained in several benchmark experiments (PCA, PSF, VENUS...) when using the same computational tools. Indeed, a strongly decreasing radial trend of the C/E was observed, partly explained by the overestimation of the iron inelastic scattering. The flat trend seen in DOEL-1 could be explained by compensating errors in the calculation, such as the backscattering due to the concrete walls outside the cavity. The 'Concrete Benchmark' experiment has been designed to judge the ability of these calculation methods to treat the backscattering. This paper describes the 'Concrete Benchmark' experiment, the measured and computed neutron dosimetry results, and their comparison. This preliminary analysis seems to indicate an overestimation of the backscattering effect in the calculations. (authors). 5 figs., 1 tab., 7 refs.

  9. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.; Tyhurst, Janis

    2015-01-01

    This presentation provides an analysis of the services provided by the benchmarked library websites. The exploratory study includes a comparison of these websites against a list of criteria and presents a list of services that are most commonly deployed by the selected websites. In addition, the investigators propose a list of services that could be provided via the KAUST library website.

  10. Comparison of investigator-delineated gross tumor volumes and quality assurance in pancreatic cancer: Analysis of the pretrial benchmark case for the SCALOP trial.

    Science.gov (United States)

    Fokas, Emmanouil; Clifford, Charlotte; Spezi, Emiliano; Joseph, George; Branagan, Jennifer; Hurt, Chris; Nixon, Lisette; Abrams, Ross; Staffurth, John; Mukherjee, Somnath

    2015-12-01

    To evaluate the variation in investigator-delineated volumes and assess plans from the radiotherapy trial quality assurance (RTTQA) program of SCALOP, a phase II trial in locally advanced pancreatic cancer. Participating investigators (n=25) outlined a pre-trial benchmark case as per the RT protocol, and the accuracy of the investigators' GTVs (iGTV) and PTVs (iPTV) was evaluated against the trial team-defined gold-standard GTV (gsGTV) and PTV (gsPTV), using both qualitative and geometric analyses. The median Jaccard Conformity Index (JCI) and Geographical Miss Index (GMI) were calculated. Participating RT centers also submitted a radiotherapy plan for this benchmark case, which was centrally reviewed against protocol-defined constraints. Twenty-five investigator-defined contours were evaluated. The median JCI and GMI of iGTVs were 0.57 (IQR: 0.51-0.65) and 0.26 (IQR: 0.15-0.40). For iPTVs, these were 0.75 (IQR: 0.71-0.79) and 0.14 (IQR: 0.11-0.22), respectively. Qualitative analysis showed the largest variation at the tumor edges and failure to recognize a peri-pancreatic lymph node. There were no major protocol deviations in RT planning, but three minor PTV coverage deviations were identified. SCALOP demonstrated considerable variation in iGTV delineation. RTTQA workshops and real-time central review of delineations are needed in future trials. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
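
    The two geometric indices used in this analysis are straightforward to compute on voxel masks: the Jaccard Conformity Index is the intersection-over-union of the two volumes, and the Geographical Miss Index is the fraction of the gold-standard volume left uncovered. A toy sketch (the cube-shaped volumes are purely illustrative):

```python
import numpy as np

def jaccard_conformity_index(investigator, gold):
    """JCI = |A ∩ B| / |A ∪ B| on boolean voxel masks."""
    inter = np.logical_and(investigator, gold).sum()
    union = np.logical_or(investigator, gold).sum()
    return inter / union

def geographical_miss_index(investigator, gold):
    """GMI = fraction of the gold-standard volume not covered."""
    missed = np.logical_and(gold, np.logical_not(investigator)).sum()
    return missed / gold.sum()

# Toy 3-D masks: a 10-voxel cube as gold standard, investigator volume
# shifted by two voxels along one axis.
gold = np.zeros((20, 20, 20), dtype=bool)
gold[5:15, 5:15, 5:15] = True
inv = np.zeros_like(gold)
inv[7:17, 5:15, 5:15] = True

print(round(jaccard_conformity_index(inv, gold), 3))  # 800/1200 -> 0.667
print(round(geographical_miss_index(inv, gold), 3))   # 200/1000 -> 0.2
```

    In the trial itself these masks would come from the contoured DICOM structures rather than synthetic cubes.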

  11. ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms

    DEFF Research Database (Denmark)

    Aumüller, Martin; Bernhardsson, Erik; Faithfull, Alexander

    2017-01-01

    This paper describes ANN-Benchmarks, a tool for evaluating the performance of in-memory approximate nearest neighbor algorithms. It provides a standard interface for measuring the performance and quality achieved by nearest neighbor algorithms on different standard data sets. It supports several... visualise these as images, plots, and websites with interactive plots. ANN-Benchmarks aims to provide a constantly updated overview of the current state of the art of k-NN algorithms. In the short term, this overview allows users to choose the correct k-NN algorithm and parameters... for their similarity search task; in the longer term, algorithm designers will be able to use this overview to test and refine automatic parameter tuning. The paper gives an overview of the system, evaluates the results of the benchmark, and points out directions for future work. Interestingly, very different...
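
    The core quality metric such a benchmark reports is recall: the fraction of the true k nearest neighbors that an approximate index returns. Below is a self-contained sketch of that measurement, using a deliberately crude random-projection "index" as the approximate algorithm; the data, projection, and parameters are all illustrative and not part of the ANN-Benchmarks tool itself.

```python
import numpy as np

rng = np.random.default_rng(42)
base = rng.standard_normal((1000, 32)).astype(np.float32)     # indexed points
queries = rng.standard_normal((20, 32)).astype(np.float32)
k = 10

def knn_exact(data, q, k):
    """Ground-truth neighbors by brute-force Euclidean distance."""
    d = np.linalg.norm(data - q, axis=1)
    return set(np.argsort(d)[:k])

# A crude "approximate" index: brute-force search in an 8-D random projection.
proj = rng.standard_normal((32, 8)).astype(np.float32)
base_p, queries_p = base @ proj, queries @ proj

def knn_approx(q_index, k):
    d = np.linalg.norm(base_p - queries_p[q_index], axis=1)
    return set(np.argsort(d)[:k])

recalls = []
for i, q in enumerate(queries):
    truth = knn_exact(base, q, k)
    found = knn_approx(i, k)
    recalls.append(len(truth & found) / k)

print(round(float(np.mean(recalls)), 2))   # average recall@k of the index
```

    A benchmarking harness would sweep the index parameters and plot recall against queries per second, which is essentially the trade-off curve ANN-Benchmarks visualises.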

  12. Benchmarking 2011: Trends in Education Philanthropy

    Science.gov (United States)

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  13. Benchmarking Swiss electricity grids

    International Nuclear Information System (INIS)

    Walti, N.O.; Weber, Ch.

    2001-01-01

    This extensive article describes a pilot benchmarking project, initiated by the Swiss Association of Electricity Enterprises, that assessed 37 Swiss utilities. The data collected from these utilities on a voluntary basis covered technical infrastructure, investments and operating costs. These various factors are listed and discussed in detail. The assessment methods and rating mechanisms that provided the benchmarks are discussed, and the results of the pilot study are presented; these are to form the basis of benchmarking procedures for the grid regulation authorities under Switzerland's planned electricity market law. Examples of the practical use of the benchmarking methods are given, and cost-efficiency questions still open in the areas of investment and operating costs are listed. Prefaces by the Swiss Association of Electricity Enterprises and the Swiss Federal Office of Energy complete the article.

  14. Update of KASHIL-E6 library for shielding analysis and benchmark calculations

    International Nuclear Information System (INIS)

    Kim, D. H.; Kil, C. S.; Jang, J. H.

    2004-01-01

    For various shielding and reactor pressure vessel dosimetry applications, a pseudo-problem-independent neutron-photon coupled MATXS-format library based on the last release of ENDF/B-VI has been generated as part of the update program for KASHIL-E6, which was based on ENDF/B-VI.5. It has the VITAMIN-B6 neutron and photon energy group structures, i.e., 199 groups for neutrons and 42 groups for photons. The neutron and photon weighting functions and the Legendre order of scattering are the same as for KASHIL-E6. The library has been validated through some benchmarks: the PCA-REPLICA and NESDIP-2 experiments for the LWR pressure vessel facility benchmark, the Winfrith Iron88 experiment for validation of iron data, and the Winfrith Graphite experiment for validation of graphite data. These calculations were performed with the TRANSX/DANTSYS code system. In addition, the substitution of JENDL-3.3 and JEFF-3.0 data for Fe, Cr, Cu and Ni, which are very important nuclides for shielding analyses, was investigated to estimate the effects on the benchmark calculation results.

  15. The Benchmarking of Integrated Business Structures

    Directory of Open Access Journals (Sweden)

    Nifatova Olena M.

    2017-12-01

    The aim of the article is to study the role of benchmarking in the process of integration of business structures in the aspect of knowledge sharing. The results of studying the essential content of the concept "integrated business structure", together with its semantic analysis, made it possible to form our own understanding of this category, with an emphasis on the need to consider it in three projections: legal, economic and organizational. The economic projection of the essential content of integration associations of business units is supported by the organizational projection, which is expressed through such essential aspects as the existence of a single center that makes key decisions; the understanding of integration as knowledge sharing; and the use of benchmarking as an exchange of experience on key business processes. Understanding the process of integration of business units in the aspect of knowledge sharing involves obtaining certain information benefits. Using benchmarking as an exchange of experience on key business processes in integrated business structures will help improve the basic production processes and increase the efficiency of both the individual business unit and the IBS as a whole.

  16. Critical power prediction by CATHARE2 of the OECD/NRC BFBT benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Lutsanych, Sergii, E-mail: s.lutsanych@ing.unipi.it [San Piero a Grado Nuclear Research Group (GRNSPG), University of Pisa, Via Livornese 1291, 56122, San Piero a Grado, Pisa (Italy); Sabotinov, Luben, E-mail: luben.sabotinov@irsn.fr [Institute for Radiological Protection and Nuclear Safety (IRSN), 31 avenue de la Division Leclerc, 92262 Fontenay-aux-Roses (France); D’Auria, Francesco, E-mail: francesco.dauria@dimnp.unipi.it [San Piero a Grado Nuclear Research Group (GRNSPG), University of Pisa, Via Livornese 1291, 56122, San Piero a Grado, Pisa (Italy)

    2015-03-15

    Highlights: • We used the CATHARE code to calculate the critical power exercises of the OECD/NRC BFBT benchmark. • We considered both steady-state and transient critical power tests of the benchmark. • We used both the 1D and 3D features of the CATHARE code to simulate the experiments. • Acceptable prediction of the critical power and its location in the bundle is obtained using appropriate modelling. - Abstract: This paper presents an application of the French best-estimate thermal-hydraulic code CATHARE 2 to calculate the critical power and departure from nucleate boiling (DNB) exercises of the international OECD/NRC BWR Fuel Bundle Test (BFBT) benchmark. The assessment activity is performed by comparing the code calculation results with the experimental data from the Japanese Nuclear Power Engineering Corporation (NUPEC) available in the framework of the benchmark. Two-phase flow calculations for the prediction of the critical power have been carried out for both steady-state and transient cases, using one-dimensional and three-dimensional modelling. The results of the steady-state critical power test calculations have shown the ability of the CATHARE code to reasonably predict the critical power and its location, using appropriate modelling.

  17. Attila calculations for the 3-D C5G7 benchmark extension

    International Nuclear Information System (INIS)

    Wareing, T.A.; McGhee, J.M.; Barnett, D.A.; Failla, G.A.

    2005-01-01

    The performance of the Attila radiation transport software was evaluated for the 3-D C5G7 MOX benchmark extension, a follow-on study to the MOX benchmark developed by the 'OECD/NEA Expert Group on 3-D Radiation Transport Benchmarks'. These benchmarks were designed to test the ability of modern deterministic transport methods to model reactor problems without spatial homogenization. Attila is a general purpose radiation transport software package with an integrated graphical user interface (GUI) for analysis, set-up and postprocessing. Attila provides solutions to the discrete-ordinates form of the linear Boltzmann transport equation on a fully unstructured, tetrahedral mesh using linear discontinuous finite-element spatial differencing in conjunction with diffusion synthetic acceleration of inner iterations. The results obtained indicate that Attila can accurately solve the benchmark problem without spatial homogenization. (authors)

  18. RESULTS OF ANALYSIS OF BENCHMARKING METHODS OF INNOVATION SYSTEMS ASSESSMENT IN ACCORDANCE WITH AIMS OF SUSTAINABLE DEVELOPMENT OF SOCIETY

    Directory of Open Access Journals (Sweden)

    A. Vylegzhanina

    2016-01-01

    In this work, we present the results of a comparative analysis of international innovation-system rating indexes with respect to their compliance with the purposes of sustainable development. The purpose of this research is to define requirements for benchmarking methods that assess national or regional innovation systems, and to compare them based on the assumption that an innovation system should be aligned with the sustainable development concept. Analysis of the goal sets and concepts underlying the observed international composite innovation indexes, and comparison of their metrics and calculation techniques, allowed us to reveal the opportunities and limitations of using these methods within the sustainable development concept. We formulated targets of innovation development on the basis of the innovation priorities of sustainable socio-economic development. Using a comparative analysis of the indexes against these targets, we identified two methods of assessing innovation systems that are maximally connected with the goals of sustainable development. Nevertheless, today no benchmarking method meets the need of assessing innovation systems in compliance with the sustainable development concept to a sufficient extent. We suggest practical directions for developing methods that assess innovation systems in compliance with the goals of societal sustainable development.

  19. Benchmarking af kommunernes sagsbehandling

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007, the Danish National Social Appeals Board (Ankestyrelsen) is to carry out benchmarking of the quality of the municipalities' case processing. The purpose of the benchmarking is to develop the design of the practice investigations with a view to better follow-up, and to improve the municipalities' case processing. This working paper discusses methods for benchmarking.

  20. MFTF TOTAL benchmark

    International Nuclear Information System (INIS)

    Choy, J.H.

    1979-06-01

    A benchmark of the TOTAL data base management system as applied to the Mirror Fusion Test Facility (MFTF) data base was implemented and run in February and March of 1979. The benchmark was run on an Interdata 8/32 and involved the following tasks: (1) data base design, (2) data base generation, (3) data base load, and (4) development and implementation of programs to simulate MFTF usage of the data base.

  1. The Drill Down Benchmark

    NARCIS (Netherlands)

    P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel

    1998-01-01

    Data Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark - defined here - provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications. It

  2. Discrepancies in Communication Versus Documentation of Weight-Management Benchmarks

    Directory of Open Access Journals (Sweden)

    Christy B. Turer MD, MHS

    2017-02-01

    To examine gaps in communication versus documentation of weight-management clinical practices, communication was recorded during primary care visits with 6- to 12-year-old overweight/obese Latino children. Communication/documentation content was coded by 3 reviewers using communication transcripts and health-record documentation. Discrepancies in communication/documentation content codes were resolved through consensus. Bivariate/multivariable analyses examined factors associated with discrepancies in benchmark communication/documentation. Benchmarks were neither communicated nor documented in up to 42% of visits, and were communicated but not documented or documented but not communicated in up to 20% of visits. The lowest benchmark performance rates were for laboratory studies (35%) and nutrition/weight-management referrals (42%). In multivariable analysis, overweight (vs obesity) was associated with 1.6 more discrepancies in communication versus documentation (P = .03). Many weight-management benchmarks are not met, not documented, or performed without being communicated. Enhanced communication with families and documentation in health records may promote lifestyle changes in overweight children and higher-quality care for overweight children in primary care.

  3. QFD Based Benchmarking Logic Using TOPSIS and Suitability Index

    Directory of Open Access Journals (Sweden)

    Jaeho Cho

    2015-01-01

    Users' satisfaction with quality is key to the successful completion of a project with respect to decision-making issues in building design solutions. This study proposes a QFD (quality function deployment) based benchmarking logic for market products in building envelope solutions. The benchmarking logic is composed of QFD-TOPSIS and QFD-SI. The QFD-TOPSIS assessment model is able to evaluate users' preferences among the building envelope solutions distributed in the market and allows knowledge to be acquired quickly. TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) provides performance improvement criteria that help define users' target performance criteria. SI (Suitability Index) allows analysis of the suitability of a building envelope solution based on users' required performance criteria. In Stage 1 of the case study, QFD-TOPSIS was used to benchmark the performance criteria of market envelope products. In Stage 2, a QFD-SI assessment was performed after setting user performance targets. The results of this study contribute to confirming the feasibility of QFD based benchmarking in the field of Building Envelope Performance Assessment (BEPA).
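
    The TOPSIS step of the logic above ranks alternatives by their relative closeness to an ideal solution. A compact sketch follows, with hypothetical envelope-product scores and criterion weights; the real criteria and weights would come from the QFD analysis.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix: alternatives x criteria scores; weights sum to 1;
    benefit[j] is True for 'larger is better' criteria."""
    m = matrix / np.linalg.norm(matrix, axis=0)      # vector normalisation
    v = m * weights                                  # weighted normalised scores
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                   # relative closeness to ideal

# Hypothetical envelope products scored on insulation, cost, durability.
scores = np.array([[0.8, 120.0, 7.0],
                   [0.6,  90.0, 9.0],
                   [0.9, 150.0, 6.0]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, True])   # cost: smaller is better

closeness = topsis(scores, weights, benefit)
print(int(np.argmax(closeness)))          # index of the preferred product
```

    The SI step would then compare the chosen product's performance against the user's required criteria rather than against the other alternatives.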

  4. Calculations of IAEA-CRP-6 Benchmark Case 1 through 7 for a TRISO-Coated Fuel Particle

    International Nuclear Information System (INIS)

    Kim, Young Min; Lee, Y. W.; Chang, J. H.

    2005-01-01

    IAEA-CRP-6 is a coordinated research program of the IAEA on advances in HTGR fuel technology. The CRP examines aspects of HTGR fuel technology ranging from design and fabrication to characterization, irradiation testing and performance modeling, as well as licensing and quality control issues. The benchmark section of the program treats simple analytical cases, pyrocarbon layer behavior, single TRISO-coated fuel particle behavior, and benchmark calculations of some irradiation experiments performed and planned. There are seventeen benchmark cases in the program in total. Member countries are participating in the benchmark calculations of the CRP with their own fuel performance analysis computer codes. Korea is also taking part in the benchmark calculations, using a fuel performance analysis code, COPA (COated PArticle), which is being developed at the Korea Atomic Energy Research Institute. This study shows the calculational results for IAEA-CRP-6 benchmark cases 1 through 7, which describe the structural behavior of a single fuel particle.

  5. Benchmark of AC and DC Active Power Decoupling Circuits for Second-Order Harmonic Mitigation in Kilowatt-Scale Single-Phase Inverters

    DEFF Research Database (Denmark)

    Qin, Zian; Tang, Yi; Loh, Poh Chiang

    2016-01-01

    This paper presents a benchmark study of ac and dc active power decoupling circuits for second-order harmonic mitigation in kW-scale single-phase inverters. First, a brief comparison of recently reported active power decoupling circuits is given, and the best solution that can achieve high efficiency and high power density is identified and comprehensively studied; the commercially available film capacitors, the circuit topologies, and the control strategies adopted for active power decoupling are all taken into account. Then, an adaptive decoupling voltage control method is proposed to further improve the performance of dc decoupling in terms of efficiency and reliability. The feasibility and superiority of the identified solution for active power decoupling, together with the proposed adaptive decoupling voltage control method, are finally verified by both simulation and experimental...

  6. Development of a new energy benchmark for improving the operational rating system of office buildings using various data-mining techniques

    International Nuclear Information System (INIS)

    Park, Hyo Seon; Lee, Minhyun; Kang, Hyuna; Hong, Taehoon; Jeong, Jaewook

    2016-01-01

    Highlights: • This study developed a new energy benchmark for office buildings. • Correlation analysis, decision tree, and analysis of variance were used. • The data from 1072 office buildings in South Korea were used. • As a result, six types of energy benchmarks for office buildings were developed. • The operational rating system can be improved by using the new energy benchmark. - Abstract: As improving energy efficiency in buildings has become a global issue today, many countries have adopted the operational rating system to evaluate the energy performance of a building based on the actual energy consumption. A rational and reasonable energy benchmark can be used in the operational rating system to evaluate the energy performance of a building accurately and effectively. This study aims to develop a new energy benchmark for improving the operational rating system of office buildings. Toward this end, this study used various data-mining techniques such as correlation analysis, decision tree (DT) analysis, and analysis of variance (ANOVA). Based on data from 1072 office buildings in South Korea, this study was conducted in three steps: (i) Step 1: establishment of the database; (ii) Step 2: development of the new energy benchmark; and (iii) Step 3: application of the new energy benchmark for improving the operational rating system. As a result, six types of energy benchmarks for office buildings were developed using DT analysis based on the gross floor area (GFA) and the building use ratio (BUR) of offices, and these new energy benchmarks were validated using ANOVA. To ensure the effectiveness of the new energy benchmark, it was applied to three operational rating systems for comparison: (i) the baseline system (the same energy benchmark is used for all office buildings); (ii) the conventional system (different energy benchmarks are used depending on the GFA, currently used in South Korea); and (iii) the proposed system (different energy benchmarks are
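
    The idea of DT-derived benchmarks can be sketched as grouping buildings by GFA and BUR cut points and taking a per-group statistic as the benchmark. The thresholds, toy building stock, and use of the median EUI below are assumptions for illustration; the actual six groups come from the paper's decision-tree analysis of the 1072-building database.

```python
import statistics

# Hypothetical split thresholds standing in for the decision-tree results;
# the actual GFA/BUR cut points come from the paper's DT analysis.
def benchmark_group(gfa_m2, bur):
    size = 0 if gfa_m2 < 3000 else (1 if gfa_m2 < 10000 else 2)
    use = 0 if bur < 0.5 else 1
    return size * 2 + use          # six groups: 3 GFA bins x 2 BUR bins

# Toy building stock: (GFA in m2, building-use ratio, EUI in kWh/m2-yr).
stock = [(1500, 0.3, 210), (2500, 0.7, 180), (5000, 0.4, 160),
         (8000, 0.8, 150), (12000, 0.2, 140), (20000, 0.9, 130),
         (1800, 0.6, 190), (9500, 0.45, 155)]

groups = {}
for gfa, bur, eui in stock:
    groups.setdefault(benchmark_group(gfa, bur), []).append(eui)

# The benchmark for each group is the median EUI of its members.
benchmarks = {g: statistics.median(v) for g, v in groups.items()}

# Operational rating: a building's EUI relative to its group benchmark.
gfa, bur, eui = 2600, 0.6, 200
rating = eui / benchmarks[benchmark_group(gfa, bur)]
print(round(rating, 2))   # > 1 means worse than the group benchmark
```

    The paper's proposed system differs from the conventional one precisely in that the grouping uses BUR in addition to GFA, which this toy split mimics.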

  7. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking...

  8. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    solvers in IPOPT and FMINCON, and the sequential quadratic programming method in SNOPT, are benchmarked on the library using performance profiles. Whenever possible the methods are applied to both the nested and the Simultaneous Analysis and Design (SAND) formulations of the problem. The performance...
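    Performance profiles of the kind used in the study above (Dolan-Moré style) can be computed in a few lines; the solver names and timings below are made up for illustration:

```python
# Hypothetical solve times per problem for two solvers (None = solver failed).
times = {
    "solverA": [1.0, 4.0, 2.0, None],
    "solverB": [2.0, 2.0, 1.0, 8.0],
}
n_problems = 4

# Best time achieved on each problem by any solver.
best = [min(t[p] for t in times.values() if t[p] is not None)
        for p in range(n_problems)]

def profile(solver, tau):
    """Dolan-More performance profile: fraction of problems the solver
    finishes within a factor tau of the best time on that problem."""
    solved = sum(
        1
        for p in range(n_problems)
        if times[solver][p] is not None and times[solver][p] <= tau * best[p]
    )
    return solved / n_problems

print(profile("solverA", 1.0), profile("solverB", 2.0))
```

    Plotting `profile(solver, tau)` over a range of tau values gives the familiar profile curves used to compare solvers.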

  9. Study of LBS for characterization and analysis of big data benchmarks

    International Nuclear Information System (INIS)

    Chandio, A.A.; Zhang, F.; Memon, T.D.

    2014-01-01

    In the past few years, most organizations have been gradually moving their applications and services to the Cloud, because the Cloud paradigm enables (a) on-demand access and (b) large-scale data processing for their applications and users anywhere in the world. The rapid growth of urbanization in developed and developing countries has led to an emerging concept called Urban Computing, one of the application domains that is rapidly being deployed to the Cloud. More precisely, in Urban Computing, sensors, vehicles, devices, buildings, and roads are used as components to probe city dynamics, and their data, including GPS traces of vehicles, are widely available. However, these applications are data-processing and storage hungry, because their data volumes grow from a few dozen terabytes (TB) to thousands of petabytes (PB), i.e. Big Data. To support the development and assessment of applications such as LBS (Location Based Services), a Big Data benchmark is urgently needed. This research is a novel study of LBS to characterize and analyze Big Data benchmarks. We focus on map-matching, which is used as a pre-processing step in many LBS applications. In this preliminary work, the paper also describes the current status of Big Data benchmarks and our future direction. (author)
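    As a rough illustration of the map-matching pre-processing step mentioned above, a naive matcher snaps each GPS fix to the nearest road segment. The segment names and coordinates below are hypothetical, and real map-matchers additionally use heading, speed, and route topology:

```python
import math

# Hypothetical road segments as ((x1, y1), (x2, y2)) endpoints in a local
# metric frame; names and coordinates are invented for illustration.
segments = {
    "main_st": ((0.0, 0.0), (100.0, 0.0)),
    "oak_ave": ((50.0, -50.0), (50.0, 50.0)),
}

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the line segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    # Projection parameter of p onto the segment, clamped to [0, 1].
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def map_match(gps_point):
    """Naive map-matching: snap a GPS fix to the nearest road segment."""
    return min(segments, key=lambda s: point_segment_distance(gps_point, *segments[s]))

print(map_match((30.0, 4.0)))  # prints: main_st
```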

  10. Study on LBS for Characterization and Analysis of Big Data Benchmarks

    Directory of Open Access Journals (Sweden)

    Aftab Ahmed Chandio

    2014-10-01

    Full Text Available In the past few years, most organizations have been gradually moving their applications and services to the Cloud, because the Cloud paradigm enables (a) on-demand access and (b) large-scale data processing for their applications and users anywhere in the world. The rapid growth of urbanization in developed and developing countries has led to an emerging concept called Urban Computing, one of the application domains that is rapidly being deployed to the Cloud. More precisely, in Urban Computing, sensors, vehicles, devices, buildings, and roads are used as components to probe city dynamics, and their data, including GPS traces of vehicles, are widely available. However, these applications are data-processing and storage hungry, because their data volumes grow from a few dozen terabytes (TB) to thousands of petabytes (PB), i.e. Big Data. To support the development and assessment of applications such as LBS (Location Based Services), a Big Data benchmark is urgently needed. This research is a novel study of LBS to characterize and analyze Big Data benchmarks. We focus on map-matching, which is used as a pre-processing step in many LBS applications. In this preliminary work, the paper also describes the current status of Big Data benchmarks and our future direction

  11. Comparative analysis of exercise 2 results of the OECD WWER-1000 MSLB benchmark

    International Nuclear Information System (INIS)

    Kolev, N.; Petrov, N.; Royer, E.; Ivanov, B.; Ivanov, K.

    2006-01-01

    In the framework of a joint effort between the OECD/NEA, the US DOE and CEA France, a coupled three-dimensional (3D) thermal-hydraulic/neutron kinetics benchmark for the WWER-1000 was defined. Phase 2 of this benchmark is labeled W1000CT-2 and consists of the calculation of a vessel mixing experiment and of main steam line break (MSLB) transients. The reference plant is Kozloduy-6 in Bulgaria. Plant data are available for code validation, consisting of one experiment of pump start-up (W1000CT-1) and one experiment of steam generator isolation (W1000CT-2). The validated codes can be used to calculate asymmetric MSLB transients involving similar mixing patterns. This paper summarizes a comparison of the available results for W1000CT-2 Exercise 2, devoted to the core-vessel calculation with imposed MSLB vessel boundary conditions. Because of the recent re-calculation of the cross-section libraries, only core physics results from the PARCS and CRONOS codes could be compared. The comparison is code-to-code (including the BIPR7A/TVS-M library) and code vs. plant measured data in a steady state close to the MSLB initial state. The results provide a test of the cross-section libraries and show good agreement between plant measured and computed data. The comparison of full vessel calculations was made from the point of view of vessel mixing, considering mainly the coarse-mesh features of the flow. The FZR and INRNE results from multi-1D calculations with different mixing models are similar, while the FZK calculations with a coarse-3D vessel model show deviations from the others. These deviations seem to be due to an error in the use of a boundary condition after flow reversal (Authors)

  12. Comparison of the PHISICS/RELAP5-3D Ring and Block Model Results for Phase I of the OECD MHTGR-350 Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom

    2014-04-01

    The INL PHISICS code system consists of three modules providing improved core simulation capability: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been finalized, and as part of the code verification and validation program the exercises defined for Phase I of the OECD/NEA MHTGR-350 MW Benchmark were completed. This paper provides an overview of the MHTGR Benchmark and presents selected results of the three steady-state exercises 1-3 defined for Phase I. For Exercise 1, a stand-alone steady-state neutronics solution for an End of Equilibrium Cycle Modular High Temperature Reactor (MHTGR) was calculated with INSTANT, using the provided geometry, material descriptions, and detailed cross-section libraries. Exercise 2 required the modeling of a stand-alone thermal fluids solution. The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 combined the first two exercises in a coupled neutronics and thermal fluids solution, and the coupled code suite PHISICS/RELAP5-3D was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of the traditional RELAP5-3D “ring” model approach vs. a much more detailed model that includes kinetics feedback at the individual block level and thermal feedback on a triangular sub-mesh. The higher fidelity of the block model is illustrated with comparison results on the temperature, power density and flux distributions, and the typical under-predictions produced by the ring model approach are highlighted.

  13. BENCHMARKING – BETWEEN TRADITIONAL & MODERN BUSINESS ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Mihaela Ungureanu

    2011-09-01

    Full Text Available The concept of benchmarking implies a continuous process of performance improvement through which an organization seeks superiority over competitors perceived as market leaders. This superiority can always be questioned, its relativity originating in the fast-evolving economic environment. The approach supports innovation relative to traditional methods and is based on the will of those managers who want to push limits and seek excellence. The end of the twentieth century was the period of benchmarking's broad expansion into various areas and of its transformation from a simple quantitative analysis tool into a source of information on the performance and quality of goods and services.

  14. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts to… contribution to the discussions within the EU-sponsored BEST Thematic Network (Benchmarking European Sustainable Transport), which ran from 2000 to 2003.

  15. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    Change in construction is high on the agenda for the Danish government, and a comprehensive effort is being made to improve quality and efficiency. This has led to a government initiative to bring benchmarking into the Danish construction sector. This paper is an appraisal of benchmarking… as it is presently carried out in the Danish construction sector. Many different perceptions of benchmarking and of the nature of the construction sector lead to uncertainty in how to perceive and use benchmarking, hence generating uncertainty in understanding the effects of benchmarking. This paper addresses… perceptions of benchmarking will be presented: public benchmarking and best practice benchmarking. These two types of benchmarking are used to characterize and discuss the Danish benchmarking system and to examine the effects, possibilities and challenges that follow in the wake of using this kind of benchmarking…

  16. Predictive uncertainty reduction in coupled neutron-kinetics/thermal hydraulics modeling of the BWR-TT2 benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Badea, Aurelian F., E-mail: aurelian.badea@kit.edu [Karlsruhe Institute of Technology, Vincenz-Prießnitz-Str. 3, 76131 Karlsruhe (Germany); Cacuci, Dan G. [Center for Nuclear Science and Energy/Dept. of ME, University of South Carolina, 300 Main Street, Columbia, SC 29208 (United States)

    2017-03-15

    Highlights: • BWR Turbine Trip 2 (BWR-TT2) benchmark. • Substantial (up to 50%) reduction of uncertainties in the predicted transient power. • 6660 uncertain model parameters were calibrated. - Abstract: By applying a comprehensive predictive modeling methodology, this work demonstrates a substantial (up to 50%) reduction of uncertainties in the predicted total transient power in the BWR Turbine Trip 2 (BWR-TT2) benchmark while calibrating the numerical simulation of this benchmark, comprising 6090 macroscopic cross sections, and 570 thermal-hydraulics parameters involved in modeling the phase-slip correlation, transient outlet pressure, and total mass flow. The BWR-TT2 benchmark is based on an experiment that was carried out in 1977 in the NPP Peach Bottom 2, involving the closure of the turbine stop valve which caused a pressure wave that propagated with attenuation into the reactor core. The condensation of the steam in the reactor core caused by the pressure increase led to a positive reactivity insertion. The subsequent rise of power was limited by the feedback and the insertion of the control rods. The BWR-TT2 benchmark was modeled with the three-dimensional reactor physics code system DYN3D, by coupling neutron kinetics with two-phase thermal-hydraulics. All 6660 DYN3D model parameters were calibrated by applying a predictive modeling methodology that combines experimental and computational information to produce optimally predicted best-estimate results with reduced predicted uncertainties. Simultaneously, the predictive modeling methodology yields optimally predicted values for the BWR total transient power while reducing significantly the accompanying predicted standard deviations.

  17. Predictive uncertainty reduction in coupled neutron-kinetics/thermal hydraulics modeling of the BWR-TT2 benchmark

    International Nuclear Information System (INIS)

    Badea, Aurelian F.; Cacuci, Dan G.

    2017-01-01

    Highlights: • BWR Turbine Trip 2 (BWR-TT2) benchmark. • Substantial (up to 50%) reduction of uncertainties in the predicted transient power. • 6660 uncertain model parameters were calibrated. - Abstract: By applying a comprehensive predictive modeling methodology, this work demonstrates a substantial (up to 50%) reduction of uncertainties in the predicted total transient power in the BWR Turbine Trip 2 (BWR-TT2) benchmark while calibrating the numerical simulation of this benchmark, comprising 6090 macroscopic cross sections, and 570 thermal-hydraulics parameters involved in modeling the phase-slip correlation, transient outlet pressure, and total mass flow. The BWR-TT2 benchmark is based on an experiment that was carried out in 1977 in the NPP Peach Bottom 2, involving the closure of the turbine stop valve which caused a pressure wave that propagated with attenuation into the reactor core. The condensation of the steam in the reactor core caused by the pressure increase led to a positive reactivity insertion. The subsequent rise of power was limited by the feedback and the insertion of the control rods. The BWR-TT2 benchmark was modeled with the three-dimensional reactor physics code system DYN3D, by coupling neutron kinetics with two-phase thermal-hydraulics. All 6660 DYN3D model parameters were calibrated by applying a predictive modeling methodology that combines experimental and computational information to produce optimally predicted best-estimate results with reduced predicted uncertainties. Simultaneously, the predictive modeling methodology yields optimally predicted values for the BWR total transient power while reducing significantly the accompanying predicted standard deviations.

  18. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only and GPU-accelerated implementations. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows…

  19. BONFIRE: benchmarking computers and computer networks

    OpenAIRE

    Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker

    2011-01-01

    The benchmarking concept is not new in the field of computing or computer networking. With “benchmarking tools”, one usually refers to a program or set of programs, used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Servi...

  20. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  1. SKaMPI: A Comprehensive Benchmark for Public Benchmarking of MPI

    Directory of Open Access Journals (Sweden)

    Ralf Reussner

    2002-01-01

    Full Text Available The main objective of the MPI communication library is to enable portable parallel programming with high performance within the message-passing paradigm. Since the MPI standard has no associated performance model, and makes no performance guarantees, comprehensive, detailed and accurate performance figures for different hardware platforms and MPI implementations are important for the application programmer, both for understanding and possibly improving the behavior of a given program on a given platform, as well as for assuring a degree of predictable behavior when switching to another hardware platform and/or MPI implementation. We term this latter goal performance portability, and address the problem of attaining performance portability by benchmarking. We describe the SKaMPI benchmark which covers a large fraction of MPI, and incorporates well-accepted mechanisms for ensuring accuracy and reliability. SKaMPI is distinguished among other MPI benchmarks by an effort to maintain a public performance database with performance data from different hardware platforms and MPI implementations.

  2. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth…
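    For context, the simplest benchmarking method that Denton-type variants improve on is pro-rata scaling, which enforces the annual totals but can distort movement at year boundaries (the "step problem"). The quarterly series and annual totals below are invented for illustration:

```python
# Hypothetical quarterly indicator series and annual benchmark totals,
# invented to illustrate the idea; not from the paper.
quarterly = [10.0, 12.0, 11.0, 13.0, 9.0, 10.0, 12.0, 14.0]
annual_totals = [50.0, 40.0]  # benchmark totals for years 1 and 2

def pro_rata(series, totals, periods_per_year=4):
    """Scale each year's sub-annual values so they sum to the annual benchmark.

    Unlike Denton-type movement-preserving methods, pro-rata scaling applies a
    different factor to each year, which can distort growth (and, for
    sign-volatile series, even signs) at the transition between years.
    """
    out = []
    for year_index, total in enumerate(totals):
        year = series[year_index * periods_per_year:(year_index + 1) * periods_per_year]
        factor = total / sum(year)
        out.extend(value * factor for value in year)
    return out

benchmarked = pro_rata(quarterly, annual_totals)
print([round(v, 3) for v in benchmarked])
```

    Denton-type methods instead solve a least-squares problem that spreads the adjustment smoothly across periods while still matching the annual totals.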

  3. Information Literacy and Office Tool Competencies: A Benchmark Study

    Science.gov (United States)

    Heinrichs, John H.; Lim, Jeen-Su

    2010-01-01

    Present information science literature recognizes the importance of information technology to achieve information literacy. The authors report the results of a benchmarking student survey regarding perceived functional skills and competencies in word-processing and presentation tools. They used analysis of variance and regression analysis to…

  4. Stress analysis of R2 pressure vessel. Structural reliability benchmark exercise

    International Nuclear Information System (INIS)

    Vestergaard, N.

    1987-05-01

    The Structural Reliability Benchmark Exercise (SRBE) is sponsored by the EEC as part of the Reactor Safety Programme. The objectives of the SRBE are to evaluate and improve 1) inspection procedures, which use non-destructive methods to locate defects in pressure (reactor) vessels, as well as 2) analytical damage accumulation models, which predict the time to failure of vessels containing defects. In order to focus attention, an experimental pressure vessel has been inspected, subjected to fatigue loadings and subsequently analysed by several teams using methods of their choice. The present report contains the first part of the analytical damage accumulation analysis. The stress distributions in the welds of the experimental pressure vessel were determined. These stress distributions will be used to determine the driving forces of the damage accumulation models, which will be addressed in a future report. (author)

  5. Power reactor pressure vessel benchmarks

    International Nuclear Information System (INIS)

    Rahn, F.J.

    1978-01-01

    A review is given of the current status of experimental and calculational benchmarks for use in understanding the radiation embrittlement effects in the pressure vessels of operating light water power reactors. The requirements of such benchmarks for application to pressure vessel dosimetry are stated. Recent developments in active and passive neutron detectors sensitive in the ranges of importance to embrittlement studies are summarized and recommendations for improvements in the benchmark are made. (author)

  6. Practice benchmarking in the age of targeted auditing.

    Science.gov (United States)

    Langdale, Ryan P; Holland, Ben F

    2012-11-01

    The frequency and sophistication of health care reimbursement auditing has progressed rapidly in recent years, leaving many oncologists wondering whether their private practices would survive a full-scale Office of the Inspector General (OIG) investigation. The Medicare Part B claims database provides a rich source of information for physicians seeking to understand how their billing practices measure up to their peers, both locally and nationally. This database was dissected by a team of cancer specialists to uncover important benchmarks related to targeted auditing. All critical Medicare charges, payments, denials, and service ratios in this article were derived from the full 2010 Medicare Part B claims database. Relevant claims were limited by using Medicare provider specialty codes 83 (hematology/oncology) and 90 (medical oncology), with an emphasis on claims filed from the physician office place of service (11). All charges, denials, and payments were summarized at the Current Procedural Terminology code level to drive practice benchmarking standards. A careful analysis of this data set, combined with the published audit priorities of the OIG, produced germane benchmarks from which medical oncologists can monitor, measure and improve on common areas of billing fraud, waste or abuse in their practices. Part II of this series and analysis will focus on information pertinent to radiation oncologists.
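    The CPT-level summarization described above can be sketched as follows; the CPT codes, charges, and denial flags are fabricated examples, not actual Medicare Part B data:

```python
from collections import defaultdict

# Fabricated claim lines: (CPT code, billed charge in USD, denied?).
# Not actual Medicare Part B data.
claims = [
    ("96413", 350.0, False), ("96413", 350.0, True), ("96413", 350.0, False),
    ("99214", 110.0, False), ("99214", 110.0, True), ("99214", 110.0, True),
]

# Summarize charges, line counts and denials at the CPT-code level.
totals = defaultdict(lambda: {"charges": 0.0, "lines": 0, "denied": 0})
for cpt, charge, denied in claims:
    entry = totals[cpt]
    entry["charges"] += charge
    entry["lines"] += 1
    entry["denied"] += denied  # bool counts as 0/1

for cpt in sorted(totals):
    entry = totals[cpt]
    denial_rate = entry["denied"] / entry["lines"]
    print(cpt, entry["charges"], round(denial_rate, 2))
```

    Comparing such per-code denial rates and charge/payment ratios against peer distributions is the essence of the benchmarking the article describes.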

  7. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.

    1980-02-01

    Shielding benchmark problems prepared by Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by Shielding Laboratory in Japan Atomic Energy Research Institute. Fourteen shielding benchmark problems are presented newly in addition to twenty-one problems proposed already, for evaluating the calculational algorithm and accuracy of computer codes based on discrete ordinates method and Monte Carlo method and for evaluating the nuclear data used in codes. The present benchmark problems are principally for investigating the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  8. Electricity consumption in school buildings - benchmark and web tools; Elforbrug i skoler - benchmark og webvaerktoej

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-07-01

    The aim of this project has been to produce benchmarks for electricity consumption in Danish schools in order to encourage electricity conservation. An internet programme has been developed with the aim of facilitating schools' access to benchmarks and to evaluate energy consumption. The overall purpose is to create increased attention to the electricity consumption of each separate school by publishing benchmarks which take the schools' age and number of pupils as well as after school activities into account. Benchmarks can be used to make green accounts and work as markers in e.g. energy conservation campaigns, energy management and for educational purposes. The internet tool can be found on www.energiguiden.dk. (BA)
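    A per-pupil electricity benchmark of the kind described above might be applied like this; the school records and the reference value are illustrative assumptions, not figures from the project:

```python
# Hypothetical school records: (name, annual electricity use in kWh, pupils).
schools = [("North", 120000, 400), ("South", 90000, 450), ("East", 150000, 300)]

# Illustrative reference value, not an official benchmark figure; a real
# benchmark would also adjust for building age and after-school activities.
benchmark_kwh_per_pupil = 250

report = {}
for name, kwh, pupils in schools:
    intensity = kwh / pupils  # kWh per pupil per year
    report[name] = (intensity, "above" if intensity > benchmark_kwh_per_pupil else "within")

for name, (intensity, status) in report.items():
    print(f"{name}: {intensity:.0f} kWh/pupil ({status} benchmark)")
```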

  9. Piping benchmark problems for the ABB/CE System 80+ Standardized Plant

    International Nuclear Information System (INIS)

    Bezler, P.; DeGrassi, G.; Braverman, J.; Wang, Y.K.

    1994-07-01

    To satisfy the need for verification of the computer programs and modeling techniques that will be used to perform the final piping analyses for the ABB/Combustion Engineering System 80+ Standardized Plant, three benchmark problems were developed. The problems are representative piping systems subjected to representative dynamic loads, with solutions developed using the methods being proposed for analysis of the System 80+ standard design. It will be required that the combined license licensees demonstrate that their solutions to these problems are in agreement with the benchmark solution set. The first System 80+ piping benchmark is a uniform support motion response spectrum solution for one section of the feedwater piping subjected to safe shutdown seismic loads. The second System 80+ piping benchmark is a time history solution for the feedwater piping subjected to the transient loading induced by a water hammer. The third System 80+ piping benchmark is a time history solution of the pressurizer surge line subjected to the accelerations induced by a main steam line pipe break. The System 80+ reactor is an advanced PWR type

  10. A simplified approach to WWER-440 fuel assembly head benchmark

    International Nuclear Information System (INIS)

    Muehlbauer, P.

    2010-01-01

    The WWER-440 fuel assembly head benchmark was simulated with the FLUENT 12 code as a first step of validating the code for nuclear reactor safety analyses. Results of the benchmark, together with a comparison of results provided by other participants and results of measurement, will be presented in another paper by the benchmark organisers. This presentation is therefore focused on our approach to the simulation, as illustrated by case 323-34, which represents a peripheral assembly with five neighbours. All steps of the simulation and some lessons learned are described. The geometry of the computational region, supplied as a STEP file by the organizers of the benchmark, was first separated into two parts (the inlet part with the spacer grid, and the rest of the assembly head) in order to keep the size of the computational mesh manageable with regard to the hardware available (an HP Z800 workstation with an Intel Xeon four-core CPU at 3.2 GHz and 32 GB of RAM), and then further modified at places where the shape of the geometry would probably lead to highly distorted cells. Both parts of the geometry were connected via a boundary profile file generated at a cross-section where the effect of the spacer grid is still felt but the effect of the outflow boundary condition used in the computations of the inlet part of the geometry is negligible. Computation proceeded in several steps: start with the basic mesh, the standard k-ε model of turbulence with standard wall functions, and first-order upwind numerical schemes; after convergence (scaled residuals lower than 10⁻³) and local adaptation of near-wall meshes where needed, the realizable k-ε model of turbulence was used with second-order upwind numerical schemes for the momentum and energy equations. During the iterations, the area-averaged temperature at the thermocouples and the area-averaged outlet temperature, which are the main figures of merit of the benchmark, were also monitored. In this 'blind' phase of the benchmark, the effect of spacers was neglected. After results of the measurements are available, standard validation…

  11. Use of the Benchmarking System for Operational Waste from WWER Reactors

    International Nuclear Information System (INIS)

    2017-06-01

    The focus of this publication is on benchmarking low and intermediate level waste generated and managed during the normal operating life of a WWER, and it identifies and defines the benchmarking parameters selected for WWER type reactors. It includes a brief discussion on why those parameters were selected and their intended benchmarking benefits, and provides a description of the database and graphical user interface selected, designed and developed, including how to use it for data input and data analysis. The CD-ROM accompanying this publication provides an overview of practices at WWER sites, which were to a large extent prepared using the WWER BMS.

  12. Validation of the Continuous-Energy Monte Carlo Criticality-Safety Analysis System MVP and JENDL-3.2 Using the Internationally Evaluated Criticality Benchmarks

    International Nuclear Information System (INIS)

    Mitake, Susumu

    2003-01-01

    Validation of the continuous-energy Monte Carlo criticality-safety analysis system, comprising the MVP code and neutron cross sections based on JENDL-3.2, was examined using benchmarks evaluated in the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments'. Eight experiments (116 configurations) for the plutonium solution and plutonium-uranium mixture systems performed at Valduc, Battelle Pacific Northwest Laboratories, and other facilities were selected and used in the studies. The averaged multiplication factors calculated with MVP and MCNP-4B using the same neutron cross-section libraries based on JENDL-3.2 were in good agreement. Based on methods provided in the Japanese nuclear criticality-safety handbook, the estimated criticality lower-limit multiplication factors to be used as a subcriticality criterion for the criticality-safety evaluation of nuclear facilities were obtained. The analysis proved the applicability of the MVP code to the criticality-safety analysis of nuclear fuel facilities, particularly to the analysis of systems fueled with plutonium and in homogeneous and thermal-energy conditions
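    The estimated criticality lower-limit multiplication factor mentioned above is, in spirit, a one-sided statistical lower bound on calculated k_eff values for critical experiments. The sketch below uses invented k_eff results and an assumed tolerance factor, not the handbook's actual procedure:

```python
from statistics import mean, stdev

# Hypothetical calculated k_eff values for critical benchmark configurations
# (each experiment is critical, so the true value is 1.0).
k_eff = [0.9982, 1.0011, 0.9995, 1.0006, 0.9978, 1.0002]

# One-sided tolerance factor: an assumed value for illustration only. The
# actual factor depends on sample size and on the confidence and probability
# levels prescribed by the Japanese nuclear criticality-safety handbook.
K = 2.3

# Estimated criticality lower-limit multiplication factor: a statistical
# lower bound on calculated k_eff used as a subcriticality criterion.
k_lower = mean(k_eff) - K * stdev(k_eff)
print(round(k_lower, 4))
```

    A facility whose calculated k_eff stays below such a lower limit is judged subcritical with margin for the code's calculational bias and uncertainty.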

  13. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  14. HS06 benchmark for an ARM server

    International Nuclear Information System (INIS)

    Kluth, Stefan

    2014-01-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  15. Quo Vadis Benchmark Simulation Models? 8th IWA Symposium on Systems Analysis and Integrated Assessment

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J.; Batstone, D.

    2011-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for WWTPs is coming towards an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting...

  16. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 4F. Paks NPP: Analysis and testing. Working material

    International Nuclear Information System (INIS)

    1999-01-01

    In August 1991, following the SMiRT-11 Conference in Tokyo, a Technical Committee Meeting was held on 'Seismic safety issues relating to existing NPPs'. The proceedings of this TCM were subsequently compiled in an IAEA Working Material. One of the main recommendations of this TCM called for the harmonization of criteria and methods used in Member States in the seismic reassessment and upgrading of existing NPPs. Twenty-four institutions from thirteen countries participated in the CRP named 'Benchmark study for the seismic analysis and testing of WWER type NPPs'. Two types of WWER reactors (WWER-1000 and WWER-440/213) were selected for benchmarking, with Kozloduy NPP Units 5/6 and Paks NPP as the respective prototypes. Consistent with the recommendations of the TCM and the working paper prepared by the subsequent Consultants' Meeting, the focal activity of the CRP was the benchmarking exercises. A similar methodology was followed both for Paks NPP and Kozloduy NPP Unit 5. Firstly, the NPP (mainly the reactor building) was tested using blast loading generated by a series of explosions from buried TNT charges. Records from this test were obtained at several free-field locations (both downhole and surface), the foundation mat, various elevations of the structures, as well as some tanks and the stack. The benchmark participants were then provided with structural drawings, soil data and the free-field record of the blast experiment. Their task was to make a blind prediction of the response at preselected locations. The analytical results from these participants were then compared with the results from the test. Although the benchmarking exercises constituted the focus of the CRP, many other interesting problems related to the seismic safety of WWER type NPPs were addressed by the participants. These involved generic studies, i.e. codes and standards used in original WWER designs and their comparison with current international practice; seismic analysis

  17. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking...... as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advances. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards...... the conditions upon which the market mechanism is performing within organizations. This paper aims to contribute to research by providing more insight to the conditions for the use of external benchmarking as an element in performance management in organizations. Our study explores a particular type of external...

  18. Summary of the OECD/NRC Boiling Water Reactor Turbine Trip Benchmark - Fifth Workshop (BWR-TT5)

    International Nuclear Information System (INIS)

    2003-01-01

    The reference problem chosen for simulation in a BWR is a Turbine Trip transient, which begins with a sudden Turbine Stop Valve (TSV) closure. The pressure oscillation generated in the main steam piping propagates with relatively little attenuation into the reactor core. The induced core pressure oscillation results in dramatic changes of the core void distribution and fluid flow. The magnitude of the neutron flux transient taking place in the BWR core is strongly affected by the initial rate of pressure rise caused by pressure oscillation and has a strong spatial variation. The correct simulation of the power response to the pressure pulse and subsequent void collapse requires a 3-D core modeling supplemented by 1-D simulation of the remainder of the reactor coolant system. A BWR TT benchmark exercise, based on a well-defined problem with complete set of input specifications and reference experimental data, has been proposed for qualification of the coupled 3-D neutron kinetics/thermal-hydraulic system transient codes. Since this kind of transient is a dynamically complex event with reactor variables changing very rapidly, it constitutes a good benchmark problem to test the coupled codes on both levels: neutronics/thermal-hydraulic coupling and core/plant system coupling. Subsequently, the objectives of the proposed benchmark are: comprehensive feedback testing and examination of the capability of coupled codes to analyze complex transients with coupled core/plant interactions by comparison with actual experimental data. The benchmark consists of three separate exercises: Exercise 1 - Power vs. Time Plant System Simulation with Fixed Axial Power Profile Table (Obtained from Experimental Data). Exercise 2 - Coupled 3-D Kinetics/Core Thermal-Hydraulic BC Model and/or 1-D Kinetics Plant System Simulation. Exercise 3 - Best-Estimate Coupled 3-D Core/Thermal-Hydraulic System Modeling. The purpose of this fifth workshop was to discuss the results from Phase III (best

  19. Nonparametric estimation of benchmark doses in environmental risk assessment

    Science.gov (United States)

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    Summary An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits’ small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
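    The isotonic-regression approach the abstract describes can be sketched in a few lines. The following is a hedged illustration, not the authors' implementation: the functions `pava` and `bmd_from_fit` and the quantal data are invented for demonstration, and the paper's bootstrap confidence limits are omitted.

```python
# Illustrative sketch of the nonparametric BMD idea: fit a monotone
# dose-response curve with the pool-adjacent-violators algorithm (PAVA),
# then invert the extra-risk function at the benchmark response (BMR).

def pava(p_hat, weights):
    """Weighted isotonic (non-decreasing) fit of observed response fractions."""
    blocks = [[p, w, 1] for p, w in zip(p_hat, weights)]  # [value, weight, size]
    out = []
    for b in blocks:
        out.append(b)
        # merge adjacent blocks while monotonicity is violated
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            v2, w2, n2 = out.pop()
            v1, w1, n1 = out.pop()
            out.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2, n1 + n2])
    fitted = []
    for v, w, n in out:
        fitted.extend([v] * n)
    return fitted

def bmd_from_fit(doses, fitted, bmr=0.10):
    """Smallest dose at which the extra risk reaches the BMR (linear interpolation)."""
    p0 = fitted[0]
    risk = [(p - p0) / (1.0 - p0) for p in fitted]  # extra risk over background
    for i in range(1, len(doses)):
        if risk[i] >= bmr:
            r0, r1 = risk[i - 1], risk[i]
            t = 0.0 if r1 == r0 else (bmr - r0) / (r1 - r0)
            return doses[i - 1] + t * (doses[i] - doses[i - 1])
    return None  # BMR not reached within the tested dose range

# Invented quantal data: responders out of 20 animals per dose group
doses = [0.0, 1.0, 2.0, 4.0]
p_hat = [1 / 20, 2 / 20, 1 / 20, 8 / 20]
fitted = pava(p_hat, [20] * 4)
bmd = bmd_from_fit(doses, fitted, bmr=0.10)
```

    The appeal of the nonparametric route is visible here: the dip at the third dose group is pooled away by PAVA rather than forcing a possibly misspecified parametric curve through it.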

  20. Financial benchmarking the example of confectionery industry companies

    Directory of Open Access Journals (Sweden)

    Vasilić Marina

    2014-01-01

    Full Text Available Being a managerial tool of proven efficiency when it comes to managing companies in crisis periods, the benchmarking concept is still insufficiently known and applied in the Republic of Serbia. The idea of this paper was to reveal its possibilities through the aspect of financial benchmarking, showing its simplicity and benefits even from the point of view of an external analyst. This was achieved through the analysis of the two biggest competitors on the confectionery products market of the Republic of Serbia, using secondary data analysis. Through a multidimensional set of performance measures based on profit as the ultimate goal, but also including value for shareholders, liquidity and capitalization, we have confirmed the leader's market position and found its sources, which are the key learning points for the follower to adopt in order to improve its performance.

  1. Benchmarking in Czech Higher Education

    OpenAIRE

    Plaček Michal; Ochrana František; Půček Milan

    2015-01-01

    The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expression of the need for it and the importance of benchmarking as a very suitable performance-management tool in less developed countries are the impetus for the second part of our article. Base...

  2. The calculational VVER burnup Credit Benchmark No.3 results with the ENDF/B-VI rev.5 (1999)

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez Gual, Maritza [Centro de Tecnologia Nuclear, La Habana (Cuba). E-mail: mrgual@ctn.isctn.edu.cu

    2000-07-01

    The purpose of this paper is to present the results of the CB3 phase of the VVER calculational benchmark with the recently evaluated nuclear data library ENDF/B-VI Rev. 5 (1999). These results are compared with those obtained by the other participants in the calculations (Czech Republic, Finland, Hungary, Slovakia, Spain and the United Kingdom). The CB3 phase of the VVER calculational benchmark is similar to Phase II-A of the OECD/NEA/INSC BUC Working Group benchmark for PWRs. The cases without a burnup profile (BP) were performed with the WIMS/D-4 code. The remaining cases were carried out with the DOT-III discrete ordinates code. The neutron library used was ENDF/B-VI Rev. 5 (1999). WIMS/D-4 (69 groups) was used to collapse cross sections from ENDF/B-VI Rev. 5 (1999) into a 36-group working library for 2-D calculations. This work also comprises the results of CB1 (likewise obtained with ENDF/B-VI Rev. 5 (1999)) and of CB3 for the cases with a burnup of 30 MWd/tU and cooling times of 1 and 5 years, and for the case with a burnup of 40 MWd/tU and a cooling time of 1 year. (author)

  3. A NRC-BNL benchmark evaluation of seismic analysis methods for non-classically damped coupled systems

    International Nuclear Information System (INIS)

    Xu, J.; DeGrassi, G.; Chokshi, N.

    2004-01-01

    Under the auspices of the U.S. Nuclear Regulatory Commission (NRC), Brookhaven National Laboratory (BNL) developed a comprehensive program to evaluate state-of-the-art methods and computer programs for seismic analysis of typical coupled nuclear power plant (NPP) systems with non-classical damping. In this program, four benchmark models of coupled building-piping/equipment systems with different damping characteristics were developed and analyzed by BNL for a suite of earthquakes. The BNL analysis was carried out by the Wilson-θ time domain integration method with the system-damping matrix computed using a synthesis formulation as presented in a companion paper [Nucl. Eng. Des. (2002)]. These benchmark problems were subsequently distributed to and analyzed by program participants applying their uniquely developed methods and computer programs. This paper is intended to offer a glimpse at the program, and to provide a summary of major findings and principal conclusions with some representative results. The participants' analysis results established using complex modal time history methods showed good agreement with the BNL solutions, while the analyses produced with either complex-mode response spectrum methods or the classical normal-mode response spectrum method, in general, produced more conservative results when averaged over a suite of earthquakes. However, when coupling due to damping is significant, complex-mode response spectrum methods performed better than the classical normal-mode response spectrum method. Furthermore, as part of the program objectives, a parametric assessment is also presented in this paper, aimed at evaluating the applicability of various analysis methods to problems with dynamic characteristics unique to coupled NPP systems. It is believed that the findings and insights learned from this program will be useful in developing new acceptance criteria and providing guidance for future regulatory activities involving license

  4. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  5. A Seafloor Benchmark for 3-dimensional Geodesy

    Science.gov (United States)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. 
More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  6. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective for the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...... the blade solicitation and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts: this shape is considered as a fixed parameter in the benchmarking...

  7. VVER-1000 MOX Core Computational Benchmark

    International Nuclear Information System (INIS)

    2006-01-01

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities include benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  8. 75 FR 10704 - International Services Surveys: BE-180, Benchmark Survey of Financial Services Transactions...

    Science.gov (United States)

    2010-03-09

    ...] RIN 0691-AA73 International Services Surveys: BE-180, Benchmark Survey of Financial Services Transactions Between U.S. Financial Services Providers and Foreign Persons AGENCY: Bureau of Economic Analysis... BE-180, Benchmark Survey of Financial Services Transactions between U.S. Financial Services Providers...

  9. BENCHMARKING - PRACTICAL TOOLS IDENTIFY KEY SUCCESS FACTORS

    Directory of Open Access Journals (Sweden)

    Olga Ju. Malinina

    2016-01-01

    Full Text Available The article gives a practical example of the application of benchmarking techniques. The object of study is the fashion store of the company «H & M Hennes & Mauritz», located in the shopping center «Gallery», Krasnodar. The purpose of this article is to identify the best ways to develop the Hennes & Mauritz fashion clothing store on the basis of benchmarking techniques. On the basis of the conducted market research, a comparative analysis of the data is made from different perspectives. The result of the author's study is a generalization of the findings and the development of the key success factors that will allow successful trading activities to be planned in the future, based on the best experience of competitors.

  10. Shielding benchmark problems

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Kawai, Masayoshi; Nakazawa, Masaharu.

    1978-09-01

    Shielding benchmark problems were prepared by the Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design of the Atomic Energy Society of Japan, and compiled by the Shielding Laboratory of the Japan Atomic Energy Research Institute. Twenty-one kinds of shielding benchmark problems are presented for evaluating the calculational algorithm and the accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method, and for evaluating the nuclear data used in the codes. (author)

  11. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    Computer simulation of nuclear power plant response can be a full-scope control room simulator, an engineering simulator to represent the general behavior of the plant under normal and abnormal conditions, or the modeling of the plant response to conditions that would eventually lead to core damage. In any of these, the underlying foundation for their use in analysing situations, training of vendor/utility personnel, etc. is how well they represent what has been known from industrial experience, large integral experiments and separate effects tests. Typically, simulation codes are benchmarked with some of these; the level of agreement necessary being dependent upon the ultimate use of the simulation tool. However, these analytical models are computer codes, and as a result, the capabilities are continually enhanced, errors are corrected, new situations are imposed on the code that are outside of the original design basis, etc. Consequently, there is a continual need to assure that the benchmarks with important transients are preserved as the computer code evolves. Retention of this benchmarking capability is essential to develop trust in the computer code. Given the evolving world of computer codes, how is this retention of benchmarking capabilities accomplished? For the MAAP4 codes this capability is accomplished through a 'dynamic benchmarking' feature embedded in the source code. In particular, a set of dynamic benchmarks are included in the source code and these are exercised every time the archive codes are upgraded and distributed to the MAAP users. Three different types of dynamic benchmarks are used: plant transients; large integral experiments; and separate effects tests. Each of these is performed in a different manner. The first is accomplished by developing a parameter file for the plant modeled and an input deck to describe the sequence; i.e. the entire MAAP4 code is exercised. The pertinent plant data is included in the source code and the computer

  12. Benchmarking Diagnostic Algorithms on an Electrical Power System Testbed

    Science.gov (United States)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Wright, Stephanie

    2009-01-01

    Diagnostic algorithms (DAs) are key to enabling automated health management. These algorithms are designed to detect and isolate anomalies of either a component or the whole system based on observations received from sensors. In recent years a wide range of algorithms, both model-based and data-driven, have been developed to increase autonomy and improve system reliability and affordability. However, the lack of support to perform systematic benchmarking of these algorithms continues to create barriers for effective development and deployment of diagnostic technologies. In this paper, we present our efforts to benchmark a set of DAs on a common platform using a framework that was developed to evaluate and compare various performance metrics for diagnostic technologies. The diagnosed system is an electrical power system, namely the Advanced Diagnostics and Prognostics Testbed (ADAPT) developed and located at the NASA Ames Research Center. The paper presents the fundamentals of the benchmarking framework, the ADAPT system, description of faults and data sets, the metrics used for evaluation, and an in-depth analysis of benchmarking results obtained from testing ten diagnostic algorithms on the ADAPT electrical power system testbed.
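    The kind of metrics such a framework computes can be illustrated with a toy scorer. This is a generic sketch with invented field names and data, not the ADAPT framework's actual code or metric definitions.

```python
# Toy scorer for diagnostic-algorithm benchmarking: detection rate, mean
# detection latency, and fault-isolation accuracy over fault-injection runs.

def score_runs(runs):
    """Each run records the fault injection time, the detection time (or None
    if the DA never raised an alarm), the true faulty component, and the
    component the DA diagnosed."""
    detected = [r for r in runs if r["t_detect"] is not None]
    detection_rate = len(detected) / len(runs)
    latencies = [r["t_detect"] - r["t_inject"] for r in detected]
    mean_latency = sum(latencies) / len(latencies) if latencies else None
    isolated = [r for r in detected if r["diagnosed"] == r["true_fault"]]
    isolation_accuracy = len(isolated) / len(detected) if detected else 0.0
    return detection_rate, mean_latency, isolation_accuracy

# Invented fault-injection log for three runs:
runs = [
    {"t_inject": 10.0, "t_detect": 12.0, "true_fault": "relay", "diagnosed": "relay"},
    {"t_inject": 5.0, "t_detect": 9.0, "true_fault": "sensor", "diagnosed": "relay"},
    {"t_inject": 20.0, "t_detect": None, "true_fault": "inverter", "diagnosed": None},
]
```

    Scoring every candidate DA against the same injected-fault log is what makes the comparison across algorithms systematic rather than anecdotal.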

  13. The analysis of one-dimensional reactor kinetics benchmark computations

    International Nuclear Information System (INIS)

    Sidell, J.

    1975-11-01

    During March 1973 the European American Committee on Reactor Physics proposed a series of simple one-dimensional reactor kinetics problems, with the intention of comparing the relative efficiencies of the numerical methods employed in various codes, which are currently in use in many national laboratories. This report reviews the contributions submitted to this benchmark exercise and attempts to assess the relative merits and drawbacks of the various theoretical and computer methods. (author)

  14. Benchmarking von Krankenhausinformationssystemen – eine vergleichende Analyse deutschsprachiger Benchmarkingcluster

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system's and information management's costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme observes both the general conditions and the examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their cooperation forms. The benchmarking clusters also deal with different benchmarking subjects. The costs and quality of application systems, physical data processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and the quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for the benchmarking of hospital information systems.

  15. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 4D. Paks NPP: Analysis and testing. Working material

    International Nuclear Information System (INIS)

    1996-01-01

    The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated subsequent to the request from representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consultants' Meeting which followed resulted in producing a working document for the CRP. It was decided that a benchmark study is the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested on a full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for WWER-1000 and Paks for WWER-440/213 nuclear power plants. This volume of Working material contains reports on seismic margin assessment and earthquake experience based methods for WWER-440/213 type NPPs; structural analysis and site inspection for site requalification; structural response of the Paks NPP reactor building; analysis and testing of model worm type tanks on a shaking table; vibration test of a worm tank model; evaluation of potential hazard for operating WWER control rods under seismic excitation

  16. Benchmarking of venous thromboembolism prophylaxis practice with ENT.UK guidelines.

    Science.gov (United States)

    Al-Qahtani, Ali S

    2017-05-01

    The aim of this study was to benchmark our guidelines for the prevention of venous thromboembolism (VTE) in the ENT surgical population against ENT.UK guidelines, and also to encourage healthcare providers to utilize benchmarking as an effective method of improving performance. The study design is a prospective descriptive analysis. The setting of this study is a tertiary referral centre (Assir Central Hospital, Abha, Saudi Arabia). In this study, we benchmark our practice guidelines for the prevention of VTE in the ENT surgical population against ENT.UK guidelines in order to identify and close any gaps. The ENT guidelines 2010 were downloaded from the ENT.UK website. Our guidelines were compared against them to determine whether our performance meets or falls short of the ENT.UK guidelines. Immediate corrective actions will take place if there is a quality chasm between the two guidelines. ENT.UK guidelines are evidence-based and up to date, and may serve as a role model for adoption and benchmarking. Our guidelines were accordingly amended to contain all factors required in providing a quality service to ENT surgical patients. While not given appropriate attention, benchmarking is a useful tool for improving the quality of health care. It allows learning from others' practices and experiences, and works towards closing any quality gaps. In addition, benchmarking clinical outcomes is critical for quality improvement and for informing decisions concerning service provision. It is recommended that benchmarking be included on the list of quality improvement methods for healthcare services.

  17. Potential for reducing global carbon emissions from electricity production-A benchmarking analysis

    International Nuclear Information System (INIS)

    Ang, B.W.; Zhou, P.; Tay, L.P.

    2011-01-01

    We present five performance indicators for electricity generation for 129 countries using 2005 data. These indicators, measured at the national level, are the aggregate CO2 intensity of electricity production, the efficiencies of coal, oil and gas generation, and the share of electricity produced from non-fossil fuels. We conduct a study on the potential for reducing global energy-related CO2 emissions from electricity production through simple benchmarking. This is performed based on the last four performance indicators and the construction of a cumulative curve for each of these indicators. It is found that global CO2 emissions from electricity production would be reduced by 19% if all these indicators were benchmarked at the 50th percentile. Not surprisingly, the emission reduction potential measured in absolute terms is the highest for large countries such as China, India, Russia and the United States. When the potential is expressed as a percentage of a country's own emissions, few of these countries appear in the top-five list. - Research highlights: → We study variations in emissions per kWh of electricity generated among countries. → We analyze emissions from electricity production through benchmarking. → Estimates of the reduction in emissions are made based on different assumptions.
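    The percentile-benchmarking arithmetic behind such an estimate can be sketched with toy numbers. The following is an illustrative simplification, not the paper's method or its 129-country data set: each country's CO2 intensity is capped at the median value and the implied emission reduction is summed.

```python
# Toy percentile benchmarking: cap each country's CO2 intensity of
# electricity at the 50th-percentile value across countries and compute
# the implied reduction in total emissions. All numbers are invented.

def percentile(values, q):
    """Linearly interpolated q-th percentile (q in [0, 1]) of a list."""
    s = sorted(values)
    idx = (len(s) - 1) * q
    lo = int(idx)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (idx - lo)

def reduction_at_benchmark(countries, q=0.5):
    """countries: list of (generation in kWh, CO2 intensity in kg/kWh)."""
    bench = percentile([i for _, i in countries], q)
    base = sum(g * i for g, i in countries)                # emissions as-is
    capped = sum(g * min(i, bench) for g, i in countries)  # benchmarked
    return base, capped, 1.0 - capped / base

# Three invented countries: (generation, intensity)
base, capped, frac = reduction_at_benchmark([(100.0, 0.9), (200.0, 0.5), (50.0, 0.3)])
```

    Countries already below the benchmark are left unchanged, which is why the reduction concentrates in the high-intensity, high-generation countries, mirroring the paper's observation about China, India, Russia and the United States.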

  18. An automated protocol for performance benchmarking a widefield fluorescence microscope.

    Science.gov (United States)

    Halter, Michael; Bier, Elianna; DeRose, Paul C; Cooksey, Gregory A; Choquette, Steven J; Plant, Anne L; Elliott, John T

    2014-11-01

    Widefield fluorescence microscopy is a highly used tool for visually assessing biological samples and for quantifying cell responses. Despite its widespread use in high content analysis and other imaging applications, few published methods exist for evaluating and benchmarking the analytical performance of a microscope. Easy-to-use benchmarking methods would facilitate the use of fluorescence imaging as a quantitative analytical tool in research applications, and would aid the determination of instrumental method validation for commercial product development applications. We describe and evaluate an automated method to characterize a fluorescence imaging system's performance by benchmarking the detection threshold, saturation, and linear dynamic range to a reference material. The benchmarking procedure is demonstrated using two different reference materials: uranyl-ion-doped glass and Schott 475 GG filter glass. Both are suitable candidate reference materials that are homogeneously fluorescent and highly photostable, and the Schott 475 GG filter glass is currently commercially available. In addition to benchmarking the analytical performance, we also demonstrate that the reference materials provide for accurate day-to-day intensity calibration. Published 2014 Wiley Periodicals Inc. This article is a US government work and, as such, is in the public domain in the United States of America.
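    The quantities being benchmarked are conventional figures of merit. A minimal sketch, with invented readings and deliberately simple rules (a k-sigma threshold over the blank, and a linearity check against proportional response), shows what estimating a detection threshold and a linear range can look like; the published protocol is more elaborate:

```python
import statistics

def detection_threshold(blank_readings, k=3):
    """Conventional k-sigma detection threshold above the blank signal."""
    return statistics.mean(blank_readings) + k * statistics.stdev(blank_readings)

def linear_range(levels, readings, tol=0.05):
    """Keep the (level, reading) pairs whose response stays within `tol`
    relative deviation from proportionality to the first pair."""
    slope = readings[0] / levels[0]
    return [(lv, r) for lv, r in zip(levels, readings)
            if abs(r - slope * lv) <= tol * slope * lv]

# Invented data: blank frames, then readings at increasing excitation
blanks = [100.0, 102.0, 98.0, 101.0, 99.0]
levels = [1, 2, 4, 8, 16]               # relative excitation
readings = [210, 420, 845, 1650, 2400]  # detector counts, saturating at top

print(detection_threshold(blanks))
print(linear_range(levels, readings))
```

Here the last point falls well below the proportional trend, marking the onset of saturation and hence the upper end of the linear dynamic range.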

  19. Medical school benchmarking - from tools to programmes.

    Science.gov (United States)

    Wilkinson, Tim J; Hudson, Judith N; Mccoll, Geoffrey J; Hu, Wendy C Y; Jolly, Brian C; Schuwirth, Lambert W T

    2015-02-01

    Benchmarking among medical schools is essential, but may result in unwanted effects. To apply a conceptual framework to selected benchmarking activities of medical schools. We present an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. The analogy generated a conceptual framework that tested five questions to be considered in relation to benchmarking: what is the purpose? what are the attributes of value? what are the best tools to assess the attributes of value? what happens to the results? and, what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.

  20. Development of parallel benchmark code by sheet metal forming simulator 'ITAS'

    International Nuclear Information System (INIS)

    Watanabe, Hiroshi; Suzuki, Shintaro; Minami, Kazuo

    1999-03-01

    This report describes the development of a parallel benchmark code based on the sheet metal forming simulator 'ITAS'. ITAS is a nonlinear elasto-plastic finite element analysis program for simulating sheet metal forming. It adopts a dynamic analysis method that computes the displacement of the sheet metal at each time step, and uses the implicit method with a direct linear equation solver. The simulator is therefore very robust, but it requires substantial computational time and memory capacity. In developing the parallel benchmark code, we used MPI programming to reduce the computational time. In numerical experiments on five parallel supercomputers at CCSE JAERI (SP2, SR2201, SX-4, T94 and VPP300), good performance was observed. The results will be made public through the WWW so that they may serve as a guideline for research and development of parallel programs. (author)

  1. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 4E. Paks NPP: Analysis and testing. Working material

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-07-01

    In August 1991, following the SMiRT-11 Conference in Tokyo, a Technical Committee Meeting (TCM) was held on 'Seismic safety issues relating to existing NPPs'. The proceedings of this TCM were subsequently compiled in an IAEA Working Material. One of the main recommendations of the TCM called for the harmonization of criteria and methods used in Member States in the seismic reassessment and upgrading of existing NPPs. Twenty-four institutions from thirteen countries participated in the CRP named 'Benchmark study for the seismic analysis and testing of WWER type NPPs'. Two types of WWER reactors (WWER-1000 and WWER-440/213) were selected for benchmarking, represented by Kozloduy NPP Units 5/6 and Paks NPP, respectively, as prototypes. Consistent with the recommendations of the TCM and the working paper prepared by the subsequent Consultants' Meeting, the focal activity of the CRP was the benchmarking exercises. A similar methodology was followed for both Paks NPP and Kozloduy NPP Unit 5. First, the NPP (mainly the reactor building) was tested using blast loading generated by a series of explosions from buried TNT charges. Records from this test were obtained at several free-field locations (both downhole and surface), the foundation mat, various elevations of the structures, and some tanks and the stack. The benchmark participants were then provided with structural drawings, soil data and the free-field record of the blast experiment. Their task was to make a blind prediction of the response at preselected locations. The analytical results from the participants were then compared with the results from the test. Although the benchmarking exercises constituted the focus of the CRP, the participants also addressed many other interesting problems related to the seismic safety of WWER-type NPPs. These involved generic studies, i.e. codes and standards used in original WWER designs and their comparison with current international practice; seismic analysis

  2. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 4E. Paks NPP: Analysis and testing. Working material

    International Nuclear Information System (INIS)

    1995-01-01

    In August 1991, following the SMiRT-11 Conference in Tokyo, a Technical Committee Meeting (TCM) was held on 'Seismic safety issues relating to existing NPPs'. The proceedings of this TCM were subsequently compiled in an IAEA Working Material. One of the main recommendations of the TCM called for the harmonization of criteria and methods used in Member States in the seismic reassessment and upgrading of existing NPPs. Twenty-four institutions from thirteen countries participated in the CRP named 'Benchmark study for the seismic analysis and testing of WWER type NPPs'. Two types of WWER reactors (WWER-1000 and WWER-440/213) were selected for benchmarking, represented by Kozloduy NPP Units 5/6 and Paks NPP, respectively, as prototypes. Consistent with the recommendations of the TCM and the working paper prepared by the subsequent Consultants' Meeting, the focal activity of the CRP was the benchmarking exercises. A similar methodology was followed for both Paks NPP and Kozloduy NPP Unit 5. First, the NPP (mainly the reactor building) was tested using blast loading generated by a series of explosions from buried TNT charges. Records from this test were obtained at several free-field locations (both downhole and surface), the foundation mat, various elevations of the structures, and some tanks and the stack. The benchmark participants were then provided with structural drawings, soil data and the free-field record of the blast experiment. Their task was to make a blind prediction of the response at preselected locations. The analytical results from the participants were then compared with the results from the test. Although the benchmarking exercises constituted the focus of the CRP, the participants also addressed many other interesting problems related to the seismic safety of WWER-type NPPs. These involved generic studies, i.e. codes and standards used in original WWER designs and their comparison with current international practice; seismic analysis

  3. Building Bridges Between Geoscience and Data Science through Benchmark Data Sets

    Science.gov (United States)

    Thompson, D. R.; Ebert-Uphoff, I.; Demir, I.; Gel, Y.; Hill, M. C.; Karpatne, A.; Güereque, M.; Kumar, V.; Cabral, E.; Smyth, P.

    2017-12-01

    The changing nature of observational field data demands richer and more meaningful collaboration between data scientists and geoscientists. Thus, among other efforts, the Working Group on Case Studies of the NSF-funded RCN on Intelligent Systems Research To Support Geosciences (IS-GEO) is developing a framework to strengthen such collaborations through the creation of benchmark datasets. Benchmark datasets provide an interface between disciplines without requiring extensive background knowledge. The goals are to create (1) a means for two-way communication between geoscience and data science researchers; (2) new collaborations, which may lead to new approaches for data analysis in the geosciences; and (3) a public, permanent repository of complex data sets, representative of geoscience problems, useful to coordinate efforts in research and education. The group identified 10 key elements and characteristics for ideal benchmarks. High impact: A problem with high potential impact. Active research area: A group of geoscientists should be eager to continue working on the topic. Challenge: The problem should be challenging for data scientists. Data science generality and versatility: It should stimulate development of new general and versatile data science methods. Rich information content: Ideally the data set provides stimulus for analysis at many different levels. Hierarchical problem statement: A hierarchy of suggested analysis tasks, from relatively straightforward to open-ended tasks. Means for evaluating success: Data scientists and geoscientists need means to evaluate whether the algorithms are successful and achieve intended purpose. Quick start guide: Introduction for data scientists on how to easily read the data to enable rapid initial data exploration. Geoscience context: Summary for data scientists of the specific data collection process, instruments used, any pre-processing and the science questions to be answered. Citability: A suitable identifier to

  4. Issues in Benchmark Metric Selection

    Science.gov (United States)

    Crolotte, Alain

    It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
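    The argument above, that the choice of mean shapes what a benchmark rewards, is easy to demonstrate numerically. The query times below are invented; the point is only that the geometric mean discounts a single extreme value, while the arithmetic mean tracks total work:

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # Computed via logs to avoid overflow on long series
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical per-query times (seconds) for two DBMS configurations
system_a = [1.0, 1.0, 100.0]   # two fast queries, one very slow one
system_b = [10.0, 10.0, 10.0]  # uniformly moderate

# The arithmetic mean ranks B far ahead of A; the geometric mean
# reverses the ranking because it discounts A's single outlier.
print(arithmetic_mean(system_a), geometric_mean(system_a))
print(arithmetic_mean(system_b), geometric_mean(system_b))
```

This is exactly the kind of distortion debated for the TPC-D single-stream metric: under the geometric mean, a system could look good overall while one query remains pathologically slow.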

  5. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used in services across the National Health Service (NHS) through various benchmarking programmes. Clinical photography services do not have a programme in place and have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. It highlighted valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  6. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless not public. The survey is a cooperative project "Benchmarking Danish Industries" with CIP/Aalborg University, the Danish Technological University, the Danish Technological Institute and Copenhagen Business School as consortium partners. The project has been funded by the Danish Agency for Trade and Industry...

  7. Definition and Analysis of Heavy Water Reactor Benchmarks for Testing New Wims-D Libraries; Definicion y Analisis de Benchmarks de Reactores de Agua Pesada para Pruebas de Nuevas Bibliotecas de Datos Wims-D

    Energy Technology Data Exchange (ETDEWEB)

    Leszczynski, Francisco [Comision Nacional de Energia Atomica, Centro Atomico Bariloche (Argentina)

    2000-07-01

    This work is part of the IAEA WIMS Library Update Project (WLUP). A group of heavy water reactor benchmarks was selected for testing new WIMS-D libraries, including calculations with the WIMSD5B program and the analysis of results. These benchmarks cover a wide variety of reactors and conditions, from fresh fuel to high burnup, and from natural to enriched uranium. In addition, each benchmark includes variations in lattice pitch and in coolants (normally heavy water and void). Multiplication factors with critical experimental bucklings and other parameters are calculated and compared with experimental reference values. The WIMS libraries used for the calculations were generated with basic data from JEF-2.2 Rev. 3 (JEF) and ENDF/B-VI Release 5 (E6). Results obtained with the WIMS-86 (W86) library, included with the WIMSD5B package from Winfrith, UK, which contains adjusted data, are also included to show the improvements obtained with the new, non-adjusted libraries. The calculations with WIMSD5B were made with two methods (input program options): PIJ (two-dimensional collision probability method) and DSN (one-dimensional Sn method, with homogenization of materials by ring). The general conclusion is that the library based on JEF data together with the DSN method gives the best results, which on average are acceptable.

  8. Comparative analysis of results between CASMO, MCNP and Serpent for a suite of Benchmark problems on BWR reactors

    International Nuclear Information System (INIS)

    Xolocostli M, J. V.; Vargas E, S.; Gomez T, A. M.; Reyes F, M. del C.; Del Valle G, E.

    2014-10-01

    In this paper the CASMO-4, MCNP6 and Serpent codes are compared on a suite of benchmark problems for BWR-type reactors. The benchmark consists of two different geometries: a fuel pin cell and a BWR-type fuel assembly. To facilitate the study of reactor physics, the nuclear characteristics of the fuel pin are provided in detail, such as burnup dependence, the reactivity of selected nuclides, etc. For the fuel assembly, the presented results concern the infinite multiplication factor at different burnup steps and different void conditions. The analysis of this set of benchmark problems provides comprehensive test cases for the next generation of BWR fuels with extended burnup. It is important to note that the purpose of this comparison is to validate the methodologies used in modeling different operating conditions, as would apply to other BWR assemblies; the results will lie within a range with some uncertainty, regardless of the code used. The Escuela Superior de Fisica y Matematicas of the Instituto Politecnico Nacional (IPN, Mexico) has accumulated experience in using Serpent, due to the potential of this code over other commercial codes such as CASMO and MCNP. The results obtained for the infinite multiplication factor are encouraging and motivate continuing these studies with the generation of the cross sections (XS) of a core, so that in a next step a corresponding nuclear data library can be constructed and used by codes developed as part of the development project of the Mexican Analysis Platform of Nuclear Reactors, AZTLAN. (Author)

  9. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-11-01

    Full Text Available This paper reviews the role of human resource management (HRM which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  10. Integrating Best Practice and Performance Indicators To Benchmark the Performance of a School System. Benchmarking Paper 940317.

    Science.gov (United States)

    Cuttance, Peter

    This paper provides a synthesis of the literature on the role of benchmarking, with a focus on its use in the public sector. Benchmarking is discussed in the context of quality systems, of which it is an important component. The paper describes the basic types of benchmarking, pertinent research about its application in the public sector, the…

  11. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    Order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly ‘sustainable transport’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly policies are not directly comparable across space and context. For these reasons attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which...

  12. Benchmark Analysis of EBR-II Shutdown Heat Removal Tests

    International Nuclear Information System (INIS)

    2017-08-01

    This publication presents the results and main achievements of an IAEA coordinated research project to verify and validate system and safety codes used in the analyses of liquid metal thermal hydraulics and neutronics phenomena in sodium cooled fast reactors. The publication will be of use to the researchers and professionals currently working on relevant fast reactors programmes. In addition, it is intended to support the training of the next generation of analysts and designers through international benchmark exercises

  13. Piping benchmark problems for the Westinghouse AP600 Standardized Plant

    International Nuclear Information System (INIS)

    Bezler, P.; DeGrassi, G.; Braverman, J.; Wang, Y.K.

    1997-01-01

    To satisfy the need for verification of the computer programs and modeling techniques that will be used to perform the final piping analyses for the Westinghouse AP600 Standardized Plant, three benchmark problems were developed. The problems are representative piping systems subjected to representative dynamic loads with solutions developed using the methods being proposed for analysis for the AP600 standard design. It will be required that the combined license licensees demonstrate that their solutions to these problems are in agreement with the benchmark problem set

  14. Benchmark analysis of forecasted seasonal temperature over different climatic areas

    Science.gov (United States)

    Giunta, G.; Salerno, R.; Ceppi, A.; Ercolani, G.; Mancini, M.

    2015-12-01

    From a long-term perspective, improved seasonal forecasting, which is often based exclusively on climatology, could provide a new capability for managing energy resources on a time scale of a few months. This paper presents a benchmark analysis of long-term temperature forecasts over Italy in 2010, comparing the eni-kassandra meteo forecast (e-kmf®) model, the Climate Forecast System-National Centers for Environmental Prediction (CFS-NCEP) model, and the climatological reference (based on 25 years of data) with observations. Statistical indexes are used to assess the reliability of predictions of 2-m monthly air temperature up to 12 weeks ahead. The results show that the best performance is achieved by the e-kmf® system, which improves the reliability of long-term forecasts compared to climatology and the CFS-NCEP model. Using a reliable high-performance forecast system, it is possible to optimize natural gas portfolio and management operations, thereby obtaining a competitive advantage in the European energy market.
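    The abstract does not specify which statistical indexes were used; a typical minimal pair for verifying monthly temperature forecasts against a climatological baseline is bias and RMSE. The sketch below uses invented temperatures purely to illustrate the comparison:

```python
def bias(forecast, observed):
    """Mean forecast error (signed)."""
    return sum(f - o for f, o in zip(forecast, observed)) / len(observed)

def rmse(forecast, observed):
    """Root-mean-square error."""
    return (sum((f - o) ** 2 for f, o in zip(forecast, observed))
            / len(observed)) ** 0.5

# Invented monthly-mean 2-m temperatures (degC) for four months
obs = [4.1, 6.0, 10.2, 14.8]    # observations
model = [3.8, 6.5, 9.9, 15.1]   # model forecast
clim = [3.0, 7.0, 11.0, 16.0]   # climatological reference

# A skilful model should beat climatology on RMSE:
print(rmse(model, obs), rmse(clim, obs))
print(bias(model, obs))
```

Comparing a model's RMSE to that of climatology is the simplest form of the skill assessment the paper performs across the three forecast sources.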

  15. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 2. Generic material: Codes, standards, criteria. Working material

    International Nuclear Information System (INIS)

    1995-01-01

    The Co-ordinated Research Programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated in response to a request from representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States on issues related to seismic safety. The Consultants' Meeting which followed produced a working document for the CRP. It was decided that a benchmark study would be the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested on a full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for WWER-1000 and Paks for WWER-440/213. This volume of Working Material contains reports related to generic material, namely the codes, standards and criteria for the benchmark analysis.

  16. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 2. Generic material: Codes, standards, criteria. Working material

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-07-01

    The Co-ordinated Research Programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated in response to a request from representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States on issues related to seismic safety. The Consultants' Meeting which followed produced a working document for the CRP. It was decided that a benchmark study would be the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested on a full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for WWER-1000 and Paks for WWER-440/213. This volume of Working Material contains reports related to generic material, namely the codes, standards and criteria for the benchmark analysis.

  17. Benchmarking for controllere: metoder, teknikker og muligheder

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels Erik; Dietrichson, Lars Grubbe

    2008-01-01

    Benchmarking enters into the management practice of both private and public organizations in many ways. In management accounting, benchmark-based indicators (or key figures) are used, for example when setting targets in performance contracts or when specifying the desired level of certain key figures in a Balanced Scorecard or similar performance management models. The article explains the concept of benchmarking by presenting and discussing its different facets, and describes four different applications of benchmarking in order to show the breadth of the concept and the importance of clarifying the purpose of a benchmarking project. It then treats the difference between results benchmarking and process benchmarking, followed by the use of internal versus external benchmarking and the use of benchmarking in budgeting and budget follow-up.

  18. Adaptive unified continuum FEM modeling of a 3D FSI benchmark problem.

    Science.gov (United States)

    Jansson, Johan; Degirmenci, Niyazi Cem; Hoffman, Johan

    2017-09-01

    In this paper, we address a 3D fluid-structure interaction benchmark problem that represents important characteristics of biomedical modeling. We present a goal-oriented adaptive finite element methodology for incompressible fluid-structure interaction based on a streamline diffusion-type stabilization of the balance equations for mass and momentum for the entire continuum in the domain, which is implemented in the Unicorn/FEniCS software framework. A phase marker function and its corresponding transport equation are introduced to select the constitutive law, where the mesh tracks the discontinuous fluid-structure interface. This results in a unified simulation method for fluids and structures. We present detailed results for the benchmark problem compared with experiments, together with a mesh convergence study. Copyright © 2016 John Wiley & Sons, Ltd.

  19. A Proposal of Indicators and Policy Framework for Innovation Benchmark in Europe

    OpenAIRE

    García Manjón, Juan Vicente

    2010-01-01

    The implementation of innovation policies has been adopted at the European level from a common perspective. The European Council (2000) established open methods of coordination (OMC) in order to gain mutual understanding and achieve greater convergence on innovation policies, constituting a benchmarking procedure. However, the development of benchmarking analysis for innovation policies faces two major obstacles: the lack of accepted innovation policy frameworks and the existence of sui...

  20. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision...

  1. Energy benchmarking for shopping centers in Gulf Coast region

    International Nuclear Information System (INIS)

    Juaidi, Adel; AlFaris, Fadi; Montoya, Francisco G.; Manzano-Agugliaro, Francisco

    2016-01-01

    The building sector consumes a significant amount of energy worldwide (up to 40% of total global energy), and by 2030 consumption is expected to increase by 50%. One of the reasons is that the performance of buildings and their components degrades over the years. In recent years, energy benchmarking for government office buildings, large-scale public buildings and large commercial buildings has been one of the key energy saving projects for promoting building energy efficiency and sustainable energy savings in the Gulf Cooperation Council (GCC) countries. Benchmarking would increase the purchase of energy efficient equipment, reducing energy bills, CO2 emissions and conventional air pollution. This paper focuses on energy benchmarking for shopping centers in the Gulf Coast region. It analyzes a sample of shopping center data from the region (Dubai, Ajman, Sharjah, Oman and Bahrain) and aims to develop a benchmark for these shopping centers by highlighting the status of their energy consumption performance. This research supports the sustainability movement in the Gulf area by classifying shopping centers as Poor, Usual or Best Practice in terms of energy efficiency. According to the benchmarking analysis in this paper, the best energy management practices among shopping centers in the Gulf Coast region are found in buildings that consume less than 810 kWh/m²/yr, whereas poor practice corresponds to centers that consume more than 1439 kWh/m²/yr. The conclusions of this work can be used as a reference for benchmarking shopping centres in similar climates. - Highlights: •The energy consumption data of shopping centers in the Gulf Coast region were gathered. •A benchmarking of energy consumption for the public areas of shopping centers in the Gulf Coast region was developed. •The usual practice in the region lies between 810 kWh/m²/yr and 1439 kWh/m²/yr.
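    The three-band classification reported above reduces to a simple threshold rule. The band limits come from the abstract; the shopping-centre names and energy-use figures below are invented for illustration:

```python
def classify(eui, best=810, poor=1439):
    """Map an energy-use intensity (kWh/m2/yr) to a practice band,
    using the thresholds reported in the study."""
    if eui < best:
        return "Best"
    if eui > poor:
        return "Poor"
    return "Usual"

# Hypothetical centres and annual energy-use intensities
centres = {"Mall A": 700, "Mall B": 1100, "Mall C": 1600}
for name, eui in centres.items():
    print(name, classify(eui))
```

How the boundary values themselves are assigned (here to the "Usual" band) is an assumption; the abstract only gives strict "less than" and "greater than" conditions for the outer bands.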

  2. EPA's Benchmark Dose Modeling Software

    Science.gov (United States)

    The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors apply benchmark dose (BMD) methods in EPA's human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...

  3. Accelerator shielding benchmark problems

    International Nuclear Information System (INIS)

    Hirayama, H.; Ban, S.; Nakamura, T.

    1993-01-01

    Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author)

  4. State of the art of second international exercise on benchmarks in BWR reactors

    International Nuclear Information System (INIS)

    Verdu, G.; Munoz-Cobo, J. L.; Palomo, M. J.; Escriva, A.; Ginestar, D.

    1998-01-01

    This is the second in a series of benchmarks based on data from operating Swedish BWRs. The first concerned measurements made in cycles 14, 15, 16 and 17 at the Ringhals 1 Nuclear Power Plant and addressed the predictive power of analytical tools used in BWR stability analysis. Part of the data was disclosed only after participants had provided their results. That work was published in report NEA/NSC/DOC(96)22, November 1996, which recognised the need for better qualification of the applied noise analysis methods. A follow-up benchmark was therefore proposed, dedicated to the analysis of time series data and including the evaluation of both global and regional stability of the Forsmark 1 and 2 Nuclear Power Plants. Participants in this second benchmark were Forsmarks Kraftgrupp AB, the NEA Nuclear Science Committee, the Consejo de Seguridad Nuclear (CSN) and the Department of Chemical and Nuclear Engineering of the Polytechnic University of Valencia. (Author)

  5. Developing and modeling of the 'Laguna Verde' BWR CRDA benchmark

    International Nuclear Information System (INIS)

    Solis-Rodarte, J.; Fu, H.; Ivanov, K.N.; Matsui, Y.; Hotta, A.

    2002-01-01

    Reactivity initiated accidents (RIA) and design basis transients are among the most important aspects of nuclear power reactor safety. These events are re-evaluated whenever core alterations (modifications) are made as part of the nuclear safety analysis performed for a new design. These modifications usually include, but are not limited to, power upgrades, longer cycles, and new fuel assembly and control rod designs. The results obtained are compared with pre-established bounding analysis values to see whether the new core design fulfills the safety constraints imposed on the design. The control rod drop accident (CRDA) is the design basis transient for reactivity events in BWR technology. The CRDA is a very localized event, depending on the insertion position of the dropped control rod and the fuel assemblies surrounding it. A numerical benchmark was developed based on the CRDA RIA design basis accident to further assess the performance of coupled 3D neutron kinetics/thermal-hydraulics codes. The CRDA in a BWR is a mostly neutronics-driven event. This benchmark is based on a real operating nuclear power plant: unit 1 of the Laguna Verde (LV1) nuclear power plant (NPP). The definition of the benchmark is presented briefly together with the benchmark specifications. Some of the cross-sections were modified in order to make the maximum control rod worth greater than one dollar. The transient is initiated at steady state by dropping the control rod with maximum worth at full speed. The Laguna Verde (LV1) BWR CRDA transient benchmark is calculated using two coupled codes: TRAC-BF1/NEM and TRAC-BF1/ENTREE. Neutron kinetics and thermal-hydraulics models were developed for both codes. A comparison of the obtained results is presented, along with some discussion of the sensitivity of the results to modeling assumptions.

  6. Funding and financing mechanisms for infrastructure delivery: multi-sector analysis of benchmarking of South Africa against developed countries

    CSIR Research Space (South Africa)

    Matji, MP

    2015-05-01

    Full Text Available AMPEAK Asset Management Conference 2015 Funding and financing mechanisms for infrastructure delivery: multi-sector analysis of benchmarking of South Africa against developed countries Matji, MP and Ruiters, C Abstract: For developing..., the researcher identifies financing opportunities for infrastructure delivery in South Africa and how such opportunities can be explored, taking into account political dynamics and legislative sector-based frameworks. Keywords: Asset Management, Financing...

  7. Estimating the Need for Palliative Radiation Therapy: A Benchmarking Approach

    Energy Technology Data Exchange (ETDEWEB)

    Mackillop, William J., E-mail: william.mackillop@krcc.on.ca [Cancer Care and Epidemiology, Queen's Cancer Research Institute, Queen's University, Kingston, Ontario (Canada); Department of Public Health Sciences, Queen's University, Kingston, Ontario (Canada); Department of Oncology, Queen's University, Kingston, Ontario (Canada); Kong, Weidong [Cancer Care and Epidemiology, Queen's Cancer Research Institute, Queen's University, Kingston, Ontario (Canada)

    2016-01-01

    Purpose: Palliative radiation therapy (PRT) benefits many patients with incurable cancer, but the overall need for PRT is unknown. Our primary objective was to estimate the appropriate rate of use of PRT in Ontario. Methods and Materials: The Ontario Cancer Registry identified patients who died of cancer in Ontario between 2006 and 2010. Comprehensive RT records were linked to the registry. Multivariate analysis identified social and health-system-related factors affecting the use of PRT, enabling us to define a benchmark population of patients with unimpeded access to PRT. The proportion of cases treated at any time (PRT_lifetime), the proportion of cases treated in the last 2 years of life (PRT_2y), and the number of courses of PRT per thousand cancer deaths were measured in the benchmark population. These benchmarks were standardized to the characteristics of the overall population, and province-wide PRT rates were then compared to the benchmarks. Results: Cases diagnosed at hospitals with no RT on-site, residents of poorer communities, and those who lived farther from an RT center were significantly less likely than others to receive PRT. However, availability of RT at the diagnosing hospital was the dominant factor. Neither socioeconomic status nor distance from home to the nearest RT center had a significant effect on the use of PRT in patients diagnosed at a hospital with RT facilities. The benchmark population therefore consisted of patients diagnosed at a hospital with RT facilities. The standardized benchmark for PRT_lifetime was 33.9%, and the corresponding province-wide rate was 28.5%. The standardized benchmark for PRT_2y was 32.4%, and the corresponding province-wide rate was 27.0%. The standardized benchmark for the number of courses of PRT per thousand cancer deaths was 652, and the corresponding province-wide rate was 542. Conclusions: Approximately one-third of patients who die of cancer in Ontario need PRT, but many of them are never

  8. Benchmarks of Global Clean Energy Manufacturing

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-01-01

    The Clean Energy Manufacturing Analysis Center (CEMAC), sponsored by the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE), provides objective analysis and up-to-date data on global supply chains and manufacturing of clean energy technologies. Benchmarks of Global Clean Energy Manufacturing sheds light on several fundamental questions about the global clean technology manufacturing enterprise: How does clean energy technology manufacturing impact national economies? What are the economic opportunities across the manufacturing supply chain? What are the global dynamics of clean energy technology manufacturing?

  9. 2015/2016 Quality Risk Management Benchmarking Survey.

    Science.gov (United States)

    Waldron, Kelly; Ramnarine, Emma; Hartman, Jeffrey

    2017-01-01

    This paper investigates the concept of quality risk management (QRM) maturity as it applies to the pharmaceutical and biopharmaceutical industries, using the results and analysis from a QRM benchmarking survey conducted in 2015 and 2016. QRM maturity can be defined as the effectiveness and efficiency of a quality risk management program, moving beyond "check-the-box" compliance with guidelines such as ICH Q9 Quality Risk Management, to explore the value QRM brings to business and quality operations. While significant progress has been made towards full adoption of QRM principles and practices across industry, the benefits of QRM have not yet been fully realized. The results of the QRM Benchmarking Survey indicate that the pharmaceutical and biopharmaceutical industries are approximately halfway along the journey towards full QRM maturity. LAY ABSTRACT: The management of risks associated with medicinal product quality and patient safety is an important focus for the pharmaceutical and biopharmaceutical industries. These risks are identified, analyzed, and controlled through a defined process called quality risk management (QRM), which seeks to protect the patient from potential quality-related risks. This paper summarizes the outcomes of a comprehensive survey of industry practitioners performed in 2015 and 2016 that aimed to benchmark the level of maturity with regard to the application of QRM. The survey results and subsequent analysis revealed that the pharmaceutical and biopharmaceutical industries have made significant progress in the management of quality risks over the last ten years and are roughly halfway towards reaching full maturity of QRM. © PDA, Inc. 2017.

  10. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm......, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be close to irreversible...

  11. 2009 South American benchmarking study: natural gas transportation companies

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, Nathalie [Gas TransBoliviano S.A. (Bolivia); Walter, Juliana S. [TRANSPETRO, Rio de Janeiro, RJ (Brazil)

    2009-07-01

    In the current business environment large corporations are constantly seeking to adapt their strategies. Benchmarking is an important tool for continuous improvement and decision-making. Benchmarking is a methodology that determines which aspects are the most important to be improved upon, and it proposes establishing a competitive parameter in an analysis of the best practices and processes, applying continuous improvement driven by the best organizations in their class. At the beginning of 2008, GTB (Gas TransBoliviano S.A.) contacted several South American gas transportation companies to carry out a regional benchmarking study in 2009. In this study, the key performance indicators of the South American companies, whose reality is similar, for example, in terms of prices, availability of labor, and community relations, will be compared. Within this context, a comparative study of the results, the comparative evaluation among natural gas transportation companies, is becoming an essential management instrument to help with decision-making. (author)

  12. Benchmark ultra-cool dwarfs in widely separated binary systems

    Directory of Open Access Journals (Sweden)

    Jones H.R.A.

    2011-07-01

    Full Text Available Ultra-cool dwarfs as wide companions to subgiants, giants, white dwarfs and main sequence stars can be very good benchmark objects, for which we can infer physical properties with minimal reference to theoretical models, through association with the primary stars. We have searched for benchmark ultra-cool dwarfs in widely separated binary systems using SDSS, UKIDSS, and 2MASS. We then estimate spectral types using SDSS spectroscopy and multi-band colors, place constraints on distance, and perform proper motion calculations for all candidates that have sufficient epoch baseline coverage. Analysis of the proper motion and distance constraints shows that eight of our ultra-cool dwarfs are members of widely separated binary systems. Another L3.5 dwarf, SDSS 0832, is shown to be a companion to the bright K3 giant η Cancri. Such primaries can provide age and metallicity constraints for any companion objects, yielding excellent benchmark objects. This is the first wide ultra-cool dwarf + giant binary system identified.

  13. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    Benchmark two-good utility functions involving a good with zero income elasticity and a good with unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own-price elasticity.
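For context, the two textbook income-elasticity benchmarks alluded to in this record can be written out. This is a standard-theory sketch, not notation from the paper itself: quasilinear utility gives one good zero income elasticity, while Cobb-Douglas utility gives both goods unit income elasticity.

```latex
% Quasilinear: demand for x solves v'(x) = p_x, independent of income m,
% so good x has zero income elasticity.
U(x, y) = v(x) + y
% Cobb-Douglas: demand is x^* = \alpha m / p_x, linear in income m,
% so income elasticity equals one.
U(x, y) = x^{\alpha} y^{1-\alpha}, \qquad 0 < \alpha < 1
```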

  14. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome measures in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which could then become the basis for selecting performance benchmarks. Databases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazard and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Databases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  15. Benchmark analysis of SPERT-IV reactor with Monte Carlo code MVP

    International Nuclear Information System (INIS)

    Motalab, M.A.; Mahmood, M.S.; Khan, M.J.H.; Badrun, N.H.; Lyric, Z.I.; Altaf, M.H.

    2014-01-01

    Highlights: • MVP was used for SPERT-IV core modeling. • Neutronics analysis of the SPERT-IV reactor was performed. • Calculations were performed to estimate the critical rod height and excess reactivity. • Neutron flux, time-integrated neutron flux and Cd-ratio were also calculated. • Calculated values agree with experimental data. - Abstract: The benchmark experiment of the SPERT-IV D-12/25 reactor core has been analyzed with the Monte Carlo code MVP using cross-section libraries based on JENDL-3.3. The MVP simulation was performed for the clean and cold core. The estimated values of k_eff at the experimental critical rod height and the core excess reactivity were within 5% of the experimental data. Thermal neutron flux profiles at different vertical and horizontal positions of the core were also estimated, as was the cadmium ratio at different points of the core. All estimated results have been compared with the experimental data, and generally good agreement has been found between the experimentally determined and calculated results.

  16. Benchmarking gate-based quantum computers

    Science.gov (United States)

    Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans

    2017-11-01

    With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.

  17. A Heterogeneous Medium Analytical Benchmark

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1999-01-01

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane-parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels of coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality (4 to 5 places of accuracy) results.

  18. Monitoring Based Commissioning: Benchmarking Analysis of 24 UC/CSU/IOU Projects

    Energy Technology Data Exchange (ETDEWEB)

    Mills, Evan; Mathew, Paul

    2009-04-01

    Buildings rarely perform as intended, resulting in energy use that is higher than anticipated. Building commissioning has emerged as a strategy for remedying this problem in non-residential buildings. Complementing traditional hardware-based energy savings strategies, commissioning is a 'soft' process of verifying performance and design intent and correcting deficiencies. Through an evaluation of a series of field projects, this report explores the efficacy of an emerging refinement of this practice, known as monitoring-based commissioning (MBCx). MBCx can also be thought of as monitoring-enhanced building operation that incorporates three components: (1) Permanent energy information systems (EIS) and diagnostic tools at the whole-building and sub-system level; (2) Retro-commissioning based on the information from these tools and savings accounting emphasizing measurement as opposed to estimation or assumptions; and (3) On-going commissioning to ensure efficient building operations and measurement-based savings accounting. MBCx is thus a measurement-based paradigm which affords improved risk-management by identifying problems and opportunities that are missed with periodic commissioning. The analysis presented in this report is based on in-depth benchmarking of a portfolio of MBCx energy savings for 24 buildings located throughout the University of California and California State University systems. In the course of the analysis, we developed a quality-control/quality-assurance process for gathering and evaluating raw data from project sites and then selected a number of metrics to use for project benchmarking and evaluation, including appropriate normalizations for weather and climate, accounting for variations in central plant performance, and consideration of differences in building types. We performed a cost-benefit analysis of the resulting dataset, and provided comparisons to projects from a larger commissioning 'Meta-analysis' database. A

  19. Benchmarking i eksternt regnskab og revision

    DEFF Research Database (Denmark)

    Thinggaard, Frank; Kiertzner, Lars

    2001-01-01

    continuously in a benchmarking process. This chapter broadly examines the extent to which the benchmarking concept can justifiably be linked to external financial reporting and auditing. Section 7.1 deals with the external annual accounts, while Section 7.2 takes up the auditing area. The final section of the chapter summarizes...... the considerations on benchmarking in connection with both areas....

  20. EPRI depletion benchmark calculations using PARAGON

    International Nuclear Information System (INIS)

    Kucukboyaci, Vefa N.

    2015-01-01

    Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • Benchmarks cover a wide range of enrichments, burnups, cooling times, and burnable absorbers, and different depletion and storage conditions. • Results from the PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII based data reduce the excess conservatism and bring the predictions closer to the benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse’s burnup credit methodology uses PARAGON™ (the Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivity has been used to create 11 benchmark cases for burnups of 10, 20, 30, 40, 50, and 60 GWd/MTU and 3 cooling times (100 h, 5 years, and 15 years). These benchmark cases are analyzed with PARAGON and the SCALE package, and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to assess that the 5% decrement approach is conservative for determining depletion uncertainty.

  1. Effects of exposure imprecision on estimation of the benchmark dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    2004-01-01

    In regression analysis, failure to adjust for imprecision in the exposure variable is likely to lead to underestimation of the exposure effect. However, the consequences of exposure error for the determination of safe doses of toxic substances have so far not received much attention. The benchmark...... approach is one of the most widely used methods for the development of exposure limits. An important advantage of this approach is that it can be applied to observational data. However, in this type of data, exposure markers are seldom measured without error. It is shown that, if the exposure error is ignored......, then the benchmark approach produces results that are biased toward higher and less protective levels. It is therefore important to take exposure measurement error into account when calculating benchmark doses. Methods that allow this adjustment are described and illustrated with data from an epidemiological study...
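The direction of the bias described in this record can be illustrated with a small simulation. Everything below is a synthetic sketch under assumed conditions (classical measurement error, a linear dose-response, invented variable names), not the authors' model or data:

```python
# Sketch: ignoring classical measurement error in the exposure variable
# attenuates the estimated slope and inflates the estimated benchmark dose,
# i.e. biases it toward a higher, less protective level.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
beta = 2.0   # assumed true dose-response slope
bmr = 1.0    # assumed benchmark response level

x = rng.uniform(0, 5, n)             # true exposure
x_obs = x + rng.normal(0, 1.0, n)    # exposure measured with classical error
y = beta * x + rng.normal(0, 1.0, n) # linear response with noise

def slope(xv, yv):
    # least-squares slope (with intercept) of yv on xv
    xc = xv - xv.mean()
    return np.dot(xc, yv - yv.mean()) / np.dot(xc, xc)

bmd_true = bmr / beta              # dose producing the benchmark response
bmd_naive = bmr / slope(x_obs, y)  # computed as if exposure were error-free

print(slope(x_obs, y) < beta)   # attenuated slope
print(bmd_naive > bmd_true)     # inflated, less protective benchmark dose
```

The attenuation factor is the reliability ratio var(x) / (var(x) + var(error)), so the naive benchmark dose exceeds the true one by roughly its inverse.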

  2. Encoding color information for visual tracking: Algorithms and benchmark.

    Science.gov (United States)

    Liang, Pengpeng; Blasch, Erik; Ling, Haibin

    2015-12-01

    While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.

  3. Do Medicare Advantage Plans Minimize Costs? Investigating the Relationship Between Benchmarks, Costs, and Rebates.

    Science.gov (United States)

    Zuckerman, Stephen; Skopec, Laura; Guterman, Stuart

    2017-12-01

    Medicare Advantage (MA), the program that allows people to receive their Medicare benefits through private health plans, uses a benchmark-and-bidding system to induce plans to provide benefits at lower costs. However, prior research suggests medical costs, profits, and other plan costs are not as low under this system as they might otherwise be. We examine how well the current system encourages MA plans to bid their lowest cost by analyzing the relationship between costs and bonuses (rebates) and the benchmarks Medicare uses in determining plan payments, using regression analysis on 2015 data for HMO and local PPO plans. Costs and rebates are higher for MA plans in areas with higher benchmarks, and plan costs vary less than benchmarks do. A one-dollar increase in benchmarks is associated with 32-cent-higher plan costs and a 52-cent-higher rebate, even when controlling for market and plan factors that can affect costs. This suggests the current benchmark-and-bidding system allows plans to bid higher than local input prices and other market conditions would seem to warrant. To incentivize MA plans to maximize efficiency and minimize costs, Medicare could change the way benchmarks are set or used.

  4. Ad hoc committee on reactor physics benchmarks

    International Nuclear Information System (INIS)

    Diamond, D.J.; Mosteller, R.D.; Gehin, J.C.

    1996-01-01

    In the spring of 1994, an ad hoc committee on reactor physics benchmarks was formed under the leadership of two American Nuclear Society (ANS) organizations. The ANS-19 Standards Subcommittee of the Reactor Physics Division and the Computational Benchmark Problem Committee of the Mathematics and Computation Division had both seen a need for additional benchmarks to help validate computer codes used for light water reactor (LWR) neutronics calculations. Although individual organizations had employed various means to validate the reactor physics methods that they used for fuel management, operations, and safety, additional work in code development and refinement is under way, and to increase accuracy, there is a need for a corresponding increase in validation. Both organizations thought that there was a need to promulgate benchmarks based on measured data to supplement the LWR computational benchmarks that have been published in the past. By having an organized benchmark activity, the participants also gain by being able to discuss their problems and achievements with others traveling the same route

  5. Energy efficiency benchmarking of energy-intensive industries in Taiwan

    International Nuclear Information System (INIS)

    Chan, David Yih-Liang; Huang, Chi-Feng; Lin, Wei-Chun; Hong, Gui-Bing

    2014-01-01

    Highlights: • An analytical tool was applied to estimate the energy efficiency indicators of energy-intensive industries in Taiwan. • The carbon dioxide emission intensity of selected energy-intensive industries is also evaluated in this study. • The obtained energy efficiency indicators can serve as a base case for comparison with other regions of the world. • The analysis results can serve as a benchmark for the selected energy-intensive industries. - Abstract: Taiwan imports approximately 97.9% of its primary energy, as rapid economic development has significantly increased energy and electricity demands. Increased energy efficiency is necessary for industry to comply with energy-efficiency indicators and benchmarking. Benchmarking is applied in this work as an analytical tool to estimate the energy-efficiency indicators of major energy-intensive industries in Taiwan and to compare them to other regions of the world. In addition, the carbon dioxide emission intensities of the iron and steel, chemical, cement, textile, and pulp and paper industries are evaluated in this study. In the iron and steel industry, the energy improvement potential of the blast furnace–basic oxygen furnace (BF–BOF) route relative to BPT (best practice technology) is about 28%. Between 2007 and 2011, the average specific energy consumption (SEC) of styrene monomer (SM), purified terephthalic acid (PTA) and low-density polyethylene (LDPE) was 9.6 GJ/ton, 5.3 GJ/ton and 9.1 GJ/ton, respectively. The energy efficiency of pulping would be improved by 33% if BAT (best available technology) were applied. The analysis results can serve as a benchmark for these industries and as a base case for stimulating changes aimed at more efficient energy utilization.
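Improvement-potential figures like the 28% and 33% quoted in this record follow from comparing actual specific energy consumption with a best-practice value. The function and the example SEC numbers below are illustrative assumptions, not data from the study:

```python
# Hedged sketch of the "improvement potential" arithmetic implied above:
# the fraction of specific energy consumption (SEC) that could be saved
# by reaching best-practice SEC. Example values are hypothetical.
def improvement_potential(sec_actual: float, sec_best_practice: float) -> float:
    """Fraction of SEC that could be saved at best-practice performance."""
    return 1.0 - sec_best_practice / sec_actual

# e.g. a hypothetical process at 25 GJ/ton versus an 18 GJ/ton best practice:
print(round(improvement_potential(25.0, 18.0), 2))  # → 0.28
```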

  6. SU-E-T-148: Benchmarks and Pre-Treatment Reviews: A Study of Quality Assurance Effectiveness

    International Nuclear Information System (INIS)

    Lowenstein, J; Nguyen, H; Roll, J; Walsh, A; Tailor, A; Followill, D

    2015-01-01

    Purpose: To determine the impact benchmarks and pre-treatment reviews have on improving the quality of submitted clinical trial data. Methods: Benchmarks are used to evaluate a site's ability to develop a treatment plan that meets a specific protocol's treatment guidelines before its first patient is placed on the protocol. A pre-treatment review is an actual patient placed on the protocol, for whom the dosimetry and contour volumes are evaluated against the protocol guidelines before treatment is allowed to begin. A key component of these QA mechanisms is that sites receive timely feedback to educate them on how to plan per the protocol and to prevent protocol deviations for patients accrued to a protocol. For both benchmarks and pre-treatment reviews, a dose volume analysis (DVA) was performed using MIM software™. For pre-treatment reviews, a volume contour evaluation was also performed. Results: IROC Houston performed a QA effectiveness analysis of a protocol which required both benchmarks and pre-treatment reviews. In 70 percent of the patient cases submitted, the benchmark played an effective role in assuring that the pre-treatment review of the case met protocol requirements. The 35 percent of sites failing the benchmark subsequently modified their planning technique to pass the benchmark before being allowed to submit a patient for pre-treatment review. However, 30 percent of the submitted cases failed the pre-treatment review, with the majority (71 percent) failing the DVA. 20 percent of sites submitting patients failed to correct the dose volume discrepancies indicated by the benchmark case. Conclusion: Benchmark cases and pre-treatment reviews can be an effective QA tool to educate sites on protocol guidelines and to minimize deviations. Without the benchmark cases, it is possible that 65 percent of the cases undergoing a pre-treatment review would have failed to meet the protocol's requirements. Support: U24-CA-180803

  7. Analysis of mineral phases in coal utilizing factor analysis

    International Nuclear Information System (INIS)

    Roscoe, B.A.; Hopke, P.K.

    1982-01-01

    The mineral phase inclusions of coal are discussed. The contributions of these to a coal sample are determined using several techniques. Neutron activation analysis in conjunction with coal washability studies has produced some information on the general trends of elemental variation in the mineral phases. These results have been enhanced by the use of various statistical techniques. Target transformation factor analysis (TTFA) is specifically discussed and shown to be able to produce elemental profiles of the mineral phases in coal. A data set consisting of physically fractionated coal samples was generated. These samples were analyzed by neutron activation analysis and their elemental concentrations examined using TTFA. Information concerning the mineral phases in coal can thus be acquired from factor analysis even with limited data. Additional data may permit the resolution of additional mineral phases as well as refinement of those already identified.

  8. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access). The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and to allow comparison between different ab initio computational methods for the prediction of thermochemical properties.
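The method-versus-experiment comparison such a benchmark database supports can be sketched as a root-mean-square deviation over a molecule set. A minimal illustration only: the molecules and enthalpy values below are invented for the example, not taken from the database.

```python
from math import sqrt

# Invented experimental vs computed atomization enthalpies (kJ/mol)
# for a toy molecule set; a real comparison would pull both columns
# from the benchmark database for a chosen ab initio method.
experiment = {"H2O": 917.8, "CH4": 1642.2, "NH3": 1158.0}
computed = {"H2O": 912.1, "CH4": 1650.9, "NH3": 1151.4}

# Signed errors per molecule, then the RMS deviation as a single
# figure of merit for the computational method.
errors = [computed[m] - experiment[m] for m in experiment]
rms = sqrt(sum(e * e for e in errors) / len(errors))
print(round(rms, 1))
```

Ranking several methods by this single RMS number over the same benchmark set is the basic comparison the database is designed to enable.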

  9. Development of an integrated energy benchmark for a multi-family housing complex using district heating

    International Nuclear Information System (INIS)

    Jeong, Jaewook; Hong, Taehoon; Ji, Changyoon; Kim, Jimin; Lee, Minhyun; Jeong, Kwangbok

    2016-01-01

    Highlights: • The energy benchmarks for MFHC using district heating were developed. • We consider heating, hot water, electricity, and water energy consumption. • The benchmarks cover the site EUI, source EUI, and CO_2 emission intensity. • The benchmarks were developed through data mining and statistical methodologies. • The developed benchmarks provide fair criteria to evaluate energy efficiency. - Abstract: Reliable benchmarks are required to evaluate building energy efficiency fairly. This study aims to develop energy benchmarks and the relevant process for a multi-family housing complex (MFHC), which is responsible for substantial CO_2 emissions in South Korea. A database including the building attributes and energy consumption of 503 MFHCs was established. The database was classified into three groups based on average enclosed area per household (AEA) through data mining techniques. Benchmarks of site energy use intensity (EUI), source EUI, and CO_2 emission intensity (CEI) were developed for Groups 1, 2, and 3. Representatively, the developed CEI benchmarks for Groups 1, 2, and 3 were 28.17, 24.16, and 20.96 kg-CO_2/m^2·y, respectively. A comparative analysis using the operational rating showed that the developed benchmarks could resolve the irrationality of the original benchmarks derived from the overall database: with the original benchmarks, 93% of small-AEA groups and 16% of large-AEA groups received lower grades, whereas with the developed benchmarks the upper and lower grades in Groups 1–3 were both adjusted to 50%. The proposed process for developing energy benchmarks is applicable to evaluating the energy efficiency of other buildings in other regions.
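The group-then-benchmark procedure described above can be sketched roughly as follows. This is an illustrative sketch only: the AEA cut points, the sample records, and the use of the group median as the benchmark statistic are assumptions made for the example, not the authors' published method.

```python
from statistics import median

# Hypothetical records: (average enclosed area per household in m^2,
# CO2 emission intensity in kg-CO2/m^2*y). Values are invented.
complexes = [(55, 29.0), (60, 27.4), (85, 24.8), (90, 23.6), (130, 21.3), (150, 20.5)]

# Invented AEA cut points standing in for the data-mining step that
# split the 503 complexes into three groups.
def group_of(aea):
    if aea < 70:
        return 1
    elif aea < 110:
        return 2
    return 3

benchmarks = {}
for g in (1, 2, 3):
    ceis = [cei for aea, cei in complexes if group_of(aea) == g]
    # The group median serves as that group's benchmark, so roughly
    # half of each group rates above it and half below it -- the
    # 50/50 grade split the abstract reports for the developed benchmarks.
    benchmarks[g] = median(ceis)

print(benchmarks)
```

Benchmarking each complex against its own AEA group, rather than against the overall database, is what removes the systematic penalty on small-AEA complexes described in the abstract.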

  10. MTCB: A Multi-Tenant Customizable database Benchmark

    NARCIS (Netherlands)

    van der Zijden, Wim; Hiemstra, Djoerd; van Keulen, Maurice

    2017-01-01

    We argue that there is a need for Multi-Tenant Customizable OLTP systems. Such systems need a Multi-Tenant Customizable Database (MTC-DB) as a backing. To stimulate the development of such databases, we propose the benchmark MTCB. Benchmarks for OLTP exist and multi-tenant benchmarks exist, but no

  11. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  12. Dependency Analysis Guidance Nordic/German Working Group on Common Cause Failure analysis. Phase 2, Development of Harmonized Approach and Applications for Common Cause Failure Quantification

    Energy Technology Data Exchange (ETDEWEB)

    Becker, Guenter; Johanson, Gunnar; Lindberg, Sandra; Vaurio, Jussi

    2009-03-15

    The Regulatory Code SSMFS 2008:1 of the Swedish Radiation Safety Authority (SSM) includes requirements regarding the performance of probabilistic safety assessments (PSA), as well as PSA activities in general. Therefore, the follow-up of these activities is part of the inspection tasks of SSM. According to SSMFS 2008:1, the safety analyses shall be based on a systematic identification and evaluation of such events, event sequences and other conditions which may lead to a radiological accident. The research report Nordic/German Working Group on Common Cause Failure analysis, Phase 2 project report: Development of Harmonized Approach and Applications for Common Cause Failure Quantification has been developed under a contract with the Nordic PSA Group (NPSAG) and its German counterpart VGB, with the aim to create a common experience base for defence against and analysis of dependent failures, i.e. Common Cause Failures (CCF). Phase 2 of this project is a deepened data analysis of CCF events and a demonstration of how the so-called impact vectors can be constructed and how CCF parameters are estimated. The word Guidance in the report title indicates a common methodological guidance accepted by the NPSAG, based on the current state of the art concerning the analysis of dependent failures and adapted to conditions relevant for Nordic sites. This will make it possible for the utilities to perform cost-effective improvements and analyses. The report presents a common attempt by the authorities and the utilities to create a methodology and experience base for defence against and analysis of dependent failures. The performed benchmark application has shown how important the interpretation of base data is for obtaining robust CCF data and data analysis results. Good features were found in all benchmark approaches. The obtained experiences and approaches should now be used in harmonised procedures. A next step could be to develop and agree on event- and formula-driven impact vector

  13. Dependency Analysis Guidance Nordic/German Working Group on Common Cause Failure analysis. Phase 2, Development of Harmonized Approach and Applications for Common Cause Failure Quantification

    International Nuclear Information System (INIS)

    Becker, Guenter; Johanson, Gunnar; Lindberg, Sandra; Vaurio, Jussi

    2009-03-01

    The Regulatory Code SSMFS 2008:1 of the Swedish Radiation Safety Authority (SSM) includes requirements regarding the performance of probabilistic safety assessments (PSA), as well as PSA activities in general. Therefore, the follow-up of these activities is part of the inspection tasks of SSM. According to SSMFS 2008:1, the safety analyses shall be based on a systematic identification and evaluation of such events, event sequences and other conditions which may lead to a radiological accident. The research report Nordic/German Working Group on Common Cause Failure analysis, Phase 2 project report: Development of Harmonized Approach and Applications for Common Cause Failure Quantification has been developed under a contract with the Nordic PSA Group (NPSAG) and its German counterpart VGB, with the aim to create a common experience base for defence against and analysis of dependent failures, i.e. Common Cause Failures (CCF). Phase 2 of this project is a deepened data analysis of CCF events and a demonstration of how the so-called impact vectors can be constructed and how CCF parameters are estimated. The word Guidance in the report title indicates a common methodological guidance accepted by the NPSAG, based on the current state of the art concerning the analysis of dependent failures and adapted to conditions relevant for Nordic sites. This will make it possible for the utilities to perform cost-effective improvements and analyses. The report presents a common attempt by the authorities and the utilities to create a methodology and experience base for defence against and analysis of dependent failures. The performed benchmark application has shown how important the interpretation of base data is for obtaining robust CCF data and data analysis results. Good features were found in all benchmark approaches. The obtained experiences and approaches should now be used in harmonised procedures. A next step could be to develop and agree on event- and formula-driven impact vector

  14. Benchmarking the evaluated proton differential cross sections suitable for the EBS analysis of natSi and 16O

    Science.gov (United States)

    Kokkoris, M.; Dede, S.; Kantre, K.; Lagoyannis, A.; Ntemou, E.; Paneta, V.; Preketes-Sigalas, K.; Provatas, G.; Vlastou, R.; Bogdanović-Radović, I.; Siketić, Z.; Obajdin, N.

    2017-08-01

    The evaluated proton differential cross sections suitable for Elastic Backscattering Spectroscopy (EBS) analysis of natSi and 16O, as obtained from SigmaCalc 2.0, have been benchmarked over a wide energy and angular range at two different accelerator laboratories, namely at N.C.S.R. 'Demokritos', Athens, Greece and at the Ruđer Bošković Institute (RBI), Zagreb, Croatia, using a variety of high-purity thick targets of known stoichiometry. The results are presented in graphical and tabular form, while the observed discrepancies, as well as the limits in accuracy of the benchmarking procedure and target-related effects, are thoroughly discussed and analysed. In the case of oxygen the agreement between simulated and experimental spectra was generally good, while for silicon serious discrepancies were observed above E_p,lab = 2.5 MeV, suggesting that further tuning of the appropriate nuclear model parameters in the evaluated differential cross-section datasets is required.

  15. Benchmarking of radiological departments. Starting point for successful process optimization

    International Nuclear Information System (INIS)

    Busch, Hans-Peter

    2010-01-01

    Continuous optimization of the processes of organization and medical treatment is part of the successful management of radiological departments. The focus of this optimization can be cost units such as CT and MRI or the radiological parts of total patient treatment. Key performance indicators for process optimization are cost-effectiveness, service quality and quality of medical treatment. The potential for improvement can be seen by comparison (benchmarking) with other hospitals and radiological departments. Clear definitions of key data and criteria are absolutely necessary for comparability. There is currently little information in the literature regarding the methodology and application of benchmarks, especially from the perspective of radiological departments and case-based lump sums, even though benchmarking has frequently been applied to radiological departments by hospital management. The aim of this article is to describe and discuss systematic benchmarking as an effective starting point for successful process optimization. This includes a description of the methodology, recommended key parameters and a discussion of the potential for cost-effectiveness analysis. The main focus of this article is cost-effectiveness (efficiency and effectiveness) with respect to cost units and treatment processes. (orig.)

  16. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J.; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to p...

  17. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  18. Revaluering benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    …and questioning the concept objectively. This paper addresses the underlying nature of benchmarking and accounts for the importance of focusing attention on the sociological impacts benchmarking has in organizations. To understand these sociological impacts, benchmarking research needs to transcend the perception of benchmarking systems as secondary and derivative and instead study benchmarking as constitutive of social relations and as an irredeemably social phenomenon. I have attempted to do so in this paper by treating benchmarking from a calculative practice perspective, and describing how…

  19. Establishing benchmarks and metrics for utilization management.

    Science.gov (United States)

    Melanson, Stacy E F

    2014-01-01

    The changing environment of healthcare reimbursement is rapidly leading to a renewed appreciation of the importance of utilization management in the clinical laboratory. The process of benchmarking of laboratory operations is well established for comparing organizational performance to other hospitals (peers) and for trending data over time through internal benchmarks. However, there are relatively few resources available to assist organizations in benchmarking for laboratory utilization management. This article will review the topic of laboratory benchmarking with a focus on the available literature and services to assist in managing physician requests for laboratory testing. © 2013.

  20. How Benchmarking and Higher Education Came Together

    Science.gov (United States)

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  1. Benchmark for Evaluating Moving Object Indexes

    DEFF Research Database (Denmark)

    Chen, Su; Jensen, Christian Søndergaard; Lin, Dan

    2008-01-01

    Progress in science and engineering relies on the ability to measure, reliably and in detail, pertinent properties of artifacts under design. Progress in the area of database-index design thus relies on empirical studies based on prototype implementations of indexes. This paper proposes a benchmark that targets techniques for the indexing of the current and near-future positions of moving objects. The benchmark enables the comparison of existing and future indexing techniques, and it covers important aspects of such indexes that have not previously been covered by any benchmark; notable aspects covered include update efficiency, query efficiency, concurrency control, and storage requirements. Next, the paper applies the benchmark to half a dozen notable moving-object indexes, thus demonstrating the viability of the benchmark and offering new insight into the performance properties of the indexes.

  2. Benchmarking infrastructure for mutation text mining.

    Science.gov (United States)

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
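The performance metrics that the infrastructure's SPARQL queries compute can be illustrated with plain-Python set arithmetic. This is an illustrative sketch only, not the project's actual queries; the document names and mutation identifiers are invented for the example.

```python
# Gold-standard vs system-predicted (document, mutation) annotations.
# In the described infrastructure these live as RDF triples and the
# arithmetic below is expressed as SPARQL; sets make the logic visible.
gold = {("doc1", "V600E"), ("doc1", "T790M"), ("doc2", "G12D")}
predicted = {("doc1", "V600E"), ("doc2", "G12D"), ("doc2", "Q61H")}

tp = len(gold & predicted)        # correctly extracted mutations
precision = tp / len(predicted)   # fraction of predictions that are right
recall = tp / len(gold)           # fraction of gold annotations found
f1 = 2 * precision * recall / (precision + recall)

print(round(precision, 3), round(recall, 3), round(f1, 3))
```

The point of expressing this as SPARQL over RDF, as the abstract notes, is that no such program needs to be written at all: the same counts fall out of declarative queries against the annotation store.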

  3. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600

  4. Benchmarking: A Process for Improvement.

    Science.gov (United States)

    Peischl, Thomas M.

    One problem with the outcome-based measures used in higher education is that they measure quantity but not quality. Benchmarking, or the use of some external standard of quality to measure tasks, processes, and outputs, is partially solving that difficulty. Benchmarking allows for the establishment of a systematic process to indicate if outputs…

  5. Mesoscale Benchmark Demonstration Problem 1: Mesoscale Simulations of Intra-granular Fission Gas Bubbles in UO2 under Post-irradiation Thermal Annealing

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yulan; Hu, Shenyang Y.; Montgomery, Robert; Gao, Fei; Sun, Xin; Tonks, Michael; Biner, Bullent; Millet, Paul; Tikare, Veena; Radhakrishnan, Balasubramaniam; Andersson, David

    2012-04-11

    A study was conducted to evaluate the capabilities of different numerical methods used to represent microstructure behavior at the mesoscale for irradiated material using an idealized benchmark problem. The purpose of the mesoscale benchmark problem was to provide a common basis to assess several mesoscale methods with the objective of identifying the strengths and areas of improvement in the predictive modeling of microstructure evolution. In this work, mesoscale models (phase-field, Potts, and kinetic Monte Carlo) developed by PNNL, INL, SNL, and ORNL were used to calculate the evolution kinetics of intra-granular fission gas bubbles in UO2 fuel under post-irradiation thermal annealing conditions. The benchmark problem was constructed to include important microstructural evolution mechanisms on the kinetics of intra-granular fission gas bubble behavior such as the atomic diffusion of Xe atoms, U vacancies, and O vacancies, the effect of vacancy capture and emission from defects, and the elastic interaction of non-equilibrium gas bubbles. An idealized set of assumptions was imposed on the benchmark problem to simplify the mechanisms considered. The capability and numerical efficiency of different models are compared against selected experimental and simulation results. These comparisons find that the phase-field methods, by the nature of the free energy formulation, are able to represent a larger subset of the mechanisms influencing the intra-granular bubble growth and coarsening mechanisms in the idealized benchmark problem as compared to the Potts and kinetic Monte Carlo methods. It is recognized that the mesoscale benchmark problem as formulated does not specifically highlight the strengths of the discrete particle modeling used in the Potts and kinetic Monte Carlo methods. Future efforts are recommended to construct increasingly more complex mesoscale benchmark problems to further verify and validate the predictive capabilities of the mesoscale modeling

  6. Benefits of the delta K of depletion benchmarks for burnup credit validation

    International Nuclear Information System (INIS)

    Lancaster, D.; Machiels, A.

    2012-01-01

    Pressurized Water Reactor (PWR) burnup credit validation is demonstrated using the benchmarks for quantifying fuel reactivity decrements published as 'Benchmarks for Quantifying Fuel Reactivity Depletion Uncertainty,' EPRI Report 1022909 (August 2011). This demonstration uses the depletion module TRITON available in the SCALE 6.1 code system, followed by criticality calculations using KENO-Va. The difference between the predicted depletion reactivity and the benchmark's depletion reactivity is a bias for the criticality calculations, and the uncertainty in the benchmarks is the depletion reactivity uncertainty. This depletion bias and uncertainty are used with the bias and uncertainty from fresh UO2 critical experiments to determine the criticality safety limits on the neutron multiplication factor, k_eff. The analysis shows that SCALE 6.1 with the ENDF/B-VII 238-group cross-section library supports the use of a depletion bias of only 0.0015 in delta k if cooling is ignored and 0.0025 if cooling is credited. The uncertainty in the depletion bias is 0.0064. Reliance on the ENDF/B-V cross-section library produces much larger disagreement with the benchmarks. The analysis covers numerous combinations of depletion and criticality options. In all cases, the historical uncertainty of 5% of the delta k of depletion (the 'Kopp memo') was shown to be conservative for fuel with more than 30 GWD/MTU burnup. Since this historically assumed burnup uncertainty is not a function of burnup, the Kopp memo's recommended bias and uncertainty may be exceeded at low burnups, but their absolute magnitude is small. (authors)
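The quoted numbers allow a small worked sketch of how the Kopp-memo 5% rule compares with the benchmark-derived uncertainty. Only the 0.0025 bias and 0.0064 uncertainty come from the abstract; the 0.15 delta-k depletion worth and the fresh-fuel uncertainty below are invented for illustration.

```python
import math

# Values quoted in the abstract (ENDF/B-VII library, cooling credited).
depletion_bias = 0.0025          # delta-k bias from the benchmark comparison
depletion_uncertainty = 0.0064   # uncertainty in the depletion reactivity

# Kopp-memo rule: take 5% of the depletion delta-k as the uncertainty.
def kopp_uncertainty(delta_k_depletion):
    return 0.05 * delta_k_depletion

# For a hypothetical high-burnup assembly with a depletion worth of
# 0.15 delta-k, the Kopp allowance (0.0075) exceeds the benchmark-derived
# uncertainty, which is the sense in which the flat 5% rule is
# conservative above ~30 GWD/MTU while not necessarily so at low burnup.
assert kopp_uncertainty(0.15) > depletion_uncertainty

# Combining independent uncertainties in quadrature (illustrative
# fresh-fuel critical-experiment uncertainty of 0.005) on the way to a
# k_eff safety limit:
total_uncertainty = math.sqrt(depletion_uncertainty**2 + 0.005**2)
print(round(total_uncertainty, 4))
```

Because the Kopp allowance scales with the depletion worth while the benchmark-derived uncertainty does not, the crossover point is what makes the burnup threshold in the abstract meaningful.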

  7. Middleware Evaluation and Benchmarking for Use in Mission Operations Centers

    Science.gov (United States)

    Antonucci, Rob; Waktola, Waka

    2005-01-01

    Middleware technologies have been promoted as timesaving, cost-cutting alternatives to the point-to-point communication used in traditional mission operations systems. However, missions have been slow to adopt the new technology. The lack of existing middleware-based missions has given rise to uncertainty about middleware's ability to perform in an operational setting. Most mission architects are also unfamiliar with the technology and do not know the benefits and detriments of architectural choices - or even what choices are available. We will present the findings of a study that evaluated several middleware options specifically for use in a mission operations system. We will address some common misconceptions regarding the applicability of middleware-based architectures, and we will identify the design decisions and tradeoffs that must be made when choosing a middleware solution. The Middleware Comparison and Benchmark Study was conducted at NASA Goddard Space Flight Center to comprehensively evaluate candidate middleware products, compare and contrast the performance of middleware solutions with the traditional point-to-point socket approach, and assess data delivery and reliability strategies. The study focused on requirements of the Global Precipitation Measurement (GPM) mission, validating the potential use of middleware in the GPM mission ground system. The study was jointly funded by GPM and the Goddard Mission Services Evolution Center (GMSEC), a virtual organization for providing mission enabling solutions and promoting the use of appropriate new technologies for mission support. The study was broken into two phases. To perform the generic middleware benchmarking and performance analysis, a network was created with data producers and consumers passing data between themselves. The benchmark monitored the delay, throughput, and reliability of the data as the characteristics were changed. Measurements were taken under a variety of topologies, data demands

  8. The OECD/NEA/NSC PBMR 400 MW coupled neutronics thermal hydraulics transient benchmark: transient results - 290

    International Nuclear Information System (INIS)

    Strydom, G.; Reitsma, F.; Ngeleka, P.T.; Ivanov, K.N.

    2010-01-01

    The PBMR is a High-Temperature Gas-cooled Reactor (HTGR) concept developed to be built in South Africa. The analysis tools used for core neutronic design and core safety analysis need to be verified and validated, and code-to-code comparisons are an essential part of the V and V plans. As part of this plan the PBMR 400 MWth design and a representative set of transient exercises are defined as an OECD benchmark. The scope of the benchmark is to establish a series of well defined multi-dimensional computational benchmark problems with a common given set of cross sections, to compare methods and tools in coupled neutronics and thermal hydraulics analysis with a specific focus on transient events. This paper describes the current status of the benchmark project and shows the results for the six transient exercises, consisting of three Loss of Cooling Accidents, two Control Rod Withdrawal transients, a power load-follow transient, and a Helium over-cooling Accident. The participants' results are compared using a statistical method and possible areas of future code improvement are identified. (authors)
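The statistical comparison of participants' results mentioned above can be sketched as a simple mean-and-spread screen. This is purely illustrative: the code names and temperatures are invented, and the benchmark's actual statistical method is not specified in the abstract.

```python
from statistics import mean, stdev

# Invented stand-in for one benchmark exercise: each participating code
# reports a peak fuel temperature (K) for a given transient.
results = {"codeA": 1612.0, "codeB": 1598.0, "codeC": 1631.0, "codeD": 1605.0}

ref = mean(results.values())       # participant mean as the reference value
spread = stdev(results.values())   # sample standard deviation across codes

# Flag codes deviating from the participant mean by more than one
# standard deviation, pointing at possible areas of code improvement.
outliers = [c for c, v in results.items() if abs(v - ref) > spread]
print(round(ref, 1), round(spread, 1), outliers)
```

Screens of this kind only localize disagreement; deciding whether the outlying code or the consensus is wrong still requires the physics-level review the benchmark is meant to support.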

  9. Benchmarking Computational Fluid Dynamics for Application to PWR Fuel

    International Nuclear Information System (INIS)

    Smith, L.D. III; Conner, M.E.; Liu, B.; Dzodzo, B.; Paramonov, D.V.; Beasley, D.E.; Langford, H.M.; Holloway, M.V.

    2002-01-01

    The present study demonstrates a process used to develop confidence in Computational Fluid Dynamics (CFD) as a tool to investigate flow and temperature distributions in a PWR fuel bundle. The velocity and temperature fields produced by a mixing spacer grid of a PWR fuel assembly are quite complex. Before using CFD to evaluate these flow fields, a rigorous benchmarking effort should be performed to ensure that reasonable results are obtained. Westinghouse has developed a method to quantitatively benchmark CFD tools against data at conditions representative of the PWR. Several measurements in a 5 x 5 rod bundle were performed. Lateral flow-field testing employed visualization techniques and Particle Image Velocimetry (PIV). Heat transfer testing involved measurements of the single-phase heat transfer coefficient downstream of the spacer grid. These test results were used to compare with CFD predictions. Among the parameters optimized in the CFD models based on this comparison with data include computational mesh, turbulence model, and boundary conditions. As an outcome of this effort, a methodology was developed for CFD modeling that provides confidence in the numerical results. (authors)

  10. Piping benchmark problems for the General Electric Advanced Boiling Water Reactor

    International Nuclear Information System (INIS)

    Bezler, P.; DeGrassi, G.; Braverman, J.; Wang, Y.K.

    1993-08-01

    To satisfy the need for verification of the computer programs and modeling techniques that will be used to perform the final piping analyses for an advanced boiling water reactor standard design, three benchmark problems were developed. The problems are representative piping systems subjected to representative dynamic loads, with solutions developed using the methods being proposed for analysis of the advanced reactor standard design. Combined license holders will be required to demonstrate that their solutions to these problems are in agreement with the benchmark problem set.

  11. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  12. The role of benchmarking for yardstick competition

    International Nuclear Information System (INIS)

    Burns, Phil; Jenkins, Cloda; Riechmann, Christoph

    2005-01-01

    With the increasing interest in yardstick regulation, there is a need to understand the most appropriate method for realigning tariffs at the outset. Benchmarking is the tool used for such realignment and is therefore a necessary first step in the implementation of yardstick competition. A number of concerns have been raised about the application of benchmarking, making some practitioners reluctant to move towards yardstick-based regimes. We assess five of the key concerns often discussed and find that, in general, these are not as great as perceived. The assessment is based on economic principles and experience with applying benchmarking to regulated sectors, e.g. in the electricity and water industries in the UK, The Netherlands, Austria and Germany in recent years. The aim is to demonstrate that clarity on the role of benchmarking reduces the concern about its application in different regulatory regimes. We find that benchmarking can be used in regulatory settlements, although the range of appropriate benchmarking approaches will be small for any individual regulatory question. Benchmarking is feasible because total cost measures and environmental factors are better defined in practice than is commonly appreciated, and collusion is unlikely to occur in environments with more than 2 or 3 firms (where shareholders have a role in monitoring and rewarding performance). Furthermore, any concern about companies under-recovering costs is a matter to be determined through the regulatory settlement and does not affect the case for using benchmarking as part of that settlement. (author)

  13. Shielding benchmark test

    International Nuclear Information System (INIS)

    Kawai, Masayoshi

    1984-01-01

    Iron data in JENDL-2 have been tested by analyzing shielding benchmark experiments on neutron transmission through iron blocks, performed at KFK using a Cf-252 neutron source and at ORNL using a collimated neutron beam from a reactor. The analyses were made with the shielding analysis code system RADHEAT-V4 developed at JAERI. The calculated results are compared with the measured data. For the KFK experiments, the C/E values are about 1.1. For the ORNL experiments, the calculated values agree with the measured data within an accuracy of 33% for the off-center geometry. The D-T neutron transmission measurements through a carbon sphere made at LLNL are also analyzed preliminarily using the revised JENDL data for fusion neutronics calculations. (author)

  14. Monte Carlo benchmark calculations for the 400 MWth PBMR core

    International Nuclear Information System (INIS)

    Kim, H. C.; Kim, J. K.; Kim, S. Y.; Noh, J. M.

    2007-01-01

    A large interest in high-temperature gas-cooled reactors (HTGRs) has arisen in recent years in connection with hydrogen production. In this study, as part of work to establish a Monte Carlo computation system for HTGR core analysis, benchmark calculations for a pebble-type HTGR were carried out using the MCNP5 code. The core of the 400 MWth Pebble-bed Modular Reactor (PBMR) was selected as the benchmark model. Recently, the IAEA CRP5 neutronics and thermal-hydraulics benchmark problem was proposed for testing existing methods for HTGRs to analyze the neutronic and thermal-hydraulic behavior for the design and safety evaluations of the PBMR. This study deals with the neutronic benchmark problems proposed for the PBMR: fresh fuel and cold conditions (Case F-1), and first core loading with given number densities (Case F-2). After detailed MCNP modeling of the whole facility, the benchmark calculations were performed. The spherical fuel region of a fuel pebble, which contains on average 15000 CFPs (Coated Fuel Particles), is divided into cubic lattice elements, each containing one CFP at its center. In this study, the side length of each cubic lattice element required to hold the same amount of fuel was calculated to be 0.1635 cm. The remaining volume of each lattice element was filled with graphite. All five concentric shells of the CFP were modeled. The PBMR annular core consists of approximately 452000 pebbles in the benchmark problems. In Case F-1, where the core was filled with only fresh fuel pebbles, a BCC (body-centered cubic) lattice model was employed in order to achieve a random-packing core with a packing fraction of 0.61. In Case F-2, the BCC lattice was also employed, with the size of the moderator pebble increased in a manner that reproduces the specified F/M ratio of 1:2 while preserving the packing fraction of 0.61. The calculations were pursued with the ENDF/B-VI cross-section library and used the sab2002 S(α,β) thermal scattering data.
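
    The lattice-element sizing described in this abstract can be reproduced with a few lines of arithmetic. The sketch below is illustrative only: the fuel-zone radius of 2.5 cm and the pebble diameter of 6 cm are assumptions taken from the standard PBMR pebble specification, not values stated in the abstract.

```python
import math

# Assumed: fuel-zone radius 2.5 cm (standard PBMR pebble spec, not stated
# in the abstract); 15000 CFPs per pebble (from the abstract).
FUEL_ZONE_RADIUS_CM = 2.5
CFPS_PER_PEBBLE = 15000

# Each cubic lattice element holds exactly one CFP, so each gets an equal
# share of the fuel-zone volume; the side length is the cube root of that share.
fuel_zone_volume = (4.0 / 3.0) * math.pi * FUEL_ZONE_RADIUS_CM ** 3
side_cm = (fuel_zone_volume / CFPS_PER_PEBBLE) ** (1.0 / 3.0)
print(f"lattice element side ~ {side_cm:.4f} cm")  # ~0.163 cm, consistent with the quoted 0.1635 cm

# BCC lattice pitch that reproduces the stated packing fraction of 0.61
# for 6-cm-diameter pebbles (two pebbles per BCC unit cell).
PEBBLE_RADIUS_CM = 3.0  # assumed standard pebble radius
pebble_volume = (4.0 / 3.0) * math.pi * PEBBLE_RADIUS_CM ** 3
pitch_cm = (2.0 * pebble_volume / 0.61) ** (1.0 / 3.0)
```

The agreement of the computed side length with the abstract's 0.1635 cm supports the assumed fuel-zone radius, but the actual benchmark specification should be consulted for the exact dimensions.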

  15. Analysis of the European results on the HTTR's core physics benchmarks

    International Nuclear Information System (INIS)

    Raepsaet, X.; Damian, F.; Ohlig, U.A.; Brockmann, H.J.; Haas, J.B.M. de; Wallerboss, E.M.

    2002-01-01

    Within the framework of the European contract HTR-N1, calculations are performed on the benchmark problems of the HTTR's start-up core physics experiments initially proposed by the IAEA in a Co-ordinated Research Programme. Three European partners, the FZJ in Germany, NRG and IRI in the Netherlands, and CEA in France, have joined this work package with the aim of validating their calculational methods. Pre-test and post-test calculational results obtained by the partners are compared with each other and with the experiment. Parts of the discrepancies between experiment and pre-test predictions are analysed and addressed by different treatments. In the case of the Monte Carlo code TRIPOLI4, used by CEA, the discrepancy between measurement and calculation at first criticality is reduced to Δk/k∼0.85% when considering the revised data of the HTTR benchmark. In the case of the diffusion codes, this discrepancy is reduced to Δk/k∼0.8% (FZJ) and 2.7 or 1.8% (CEA). (author)

  16. Providing Nuclear Criticality Safety Analysis Education through Benchmark Experiment Evaluation

    International Nuclear Information System (INIS)

    Bess, John D.; Briggs, J. Blair; Nigg, David W.

    2009-01-01

    One of the challenges faced by today's new workforce of nuclear criticality safety engineers is having to assess nuclear systems and establish safety guidelines without significant experience or hands-on training prior to graduation. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and/or the International Reactor Physics Experiment Evaluation Project (IRPhEP) provides students and young professionals the opportunity to gain experience and enhance critical engineering skills.

  17. Safety, codes and standards for hydrogen installations. Metrics development and benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Harris, Aaron P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dedrick, Daniel E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); LaFleur, Angela Christine [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); San Marchi, Christopher W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-04-01

    Automakers and fuel providers have made public commitments to commercialize light-duty fuel cell electric vehicles and fueling infrastructure in select US regions beginning in 2014. The development, implementation, and advancement of meaningful codes and standards are critical to enable the effective deployment of clean and efficient fuel cell and hydrogen solutions in the energy technology marketplace. Metrics pertaining to the development and implementation of safety knowledge, codes, and standards are important to communicate progress and inform future R&D investments. This document describes the development and benchmarking of metrics specific to the development of hydrogen-specific codes relevant to hydrogen refueling stations. These metrics will be most useful as the hydrogen fuel market transitions from the pre-commercial to the early-commercial phase. The target regions in California will serve as benchmarking case studies to quantify the success of past investments in research and development supporting safety codes and standards R&D.

  18. RB reactor benchmark cores

    International Nuclear Information System (INIS)

    Pesic, M.

    1998-01-01

    A selected set of RB reactor benchmark cores is presented in this paper. The first results of the validation of the well-known Monte Carlo code MCNP™ and the accompanying neutron cross-section libraries are given. They confirm the idea behind the proposal of the new U-D2O criticality benchmark system and support the intention to include this system in the next edition of the recent OECD/NEA project, the International Handbook of Evaluated Criticality Safety Benchmark Experiments, in the near future. (author)

  19. Benchmark for evaluation and validation of reactor simulations (BEAVRS)

    Energy Technology Data Exchange (ETDEWEB)

    Horelik, N.; Herman, B.; Forget, B.; Smith, K. [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Avenue, Cambridge, MA 02139 (United States)

    2013-07-01

    Advances in parallel computing have made possible the development of high-fidelity tools for the design and analysis of nuclear reactor cores, and such tools require extensive verification and validation. This paper introduces BEAVRS, a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading patterns, and numerous in-vessel components. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from fifty-eight instrumented assemblies. Initial comparisons between calculations performed with MIT's OpenMC Monte Carlo neutron transport code and measured cycle 1 HZP test data are presented, and these results display an average deviation of approximately 100 pcm for the various critical configurations and control rod worth measurements. Computed HZP radial fission detector flux maps also agree reasonably well with the available measured data. All results indicate that this benchmark will be extremely useful in validation of coupled-physics codes and uncertainty quantification of in-core physics computational predictions. The detailed BEAVRS specification and its associated data package is hosted online at the MIT Computational Reactor Physics Group web site (http://crpg.mit.edu/), where future revisions and refinements to the benchmark specification will be made publicly available. (authors)
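
    The "approximately 100 pcm" deviation quoted above is a difference in reactivity. As a generic point of reference (a sketch of the standard definition ρ = (k − 1)/k, not code from the BEAVRS benchmark itself), converting a calculated and a measured multiplication factor into a deviation in pcm looks like this:

```python
def reactivity_diff_pcm(k_calc: float, k_meas: float) -> float:
    """Reactivity difference in pcm (1 pcm = 1e-5), with rho = (k - 1)/k."""
    rho_calc = (k_calc - 1.0) / k_calc
    rho_meas = (k_meas - 1.0) / k_meas
    return (rho_calc - rho_meas) * 1.0e5

# Example: a calculated k-eff of 1.00100 against a measured critical state
# (k-eff = 1 by definition) is a deviation of roughly 100 pcm.
print(round(reactivity_diff_pcm(1.00100, 1.00000)))  # -> 100
```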

  20. Benchmarking specialty hospitals, a scoping review on theory and practice.

    Science.gov (United States)

    Wind, A; van Harten, W H

    2017-04-04

    Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category, or dealt with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into: pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was merely described or whether quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model described the use of benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking to improve quality in specialty hospitals, robust and structured designs are needed, including a follow-up to check whether the benchmark study has led to improvements.

  1. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no-slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  2. Development of a California commercial building benchmarking database

    International Nuclear Information System (INIS)

    Kinney, Satkartar; Piette, Mary Ann

    2002-01-01

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.

  3. MANAGING BENCHMARKING IN A CORPORATE ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    D.M. Mouton

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: Most new generation organisations have management models and processes for measuring and managing organisational performance. However, the application of these models and the direction the company needs to take are not always clearly established. Benchmarking can be defined as the search for industry best practices that lead to superior performance. The emphasis is on “best” and “superior”. There are no limitations on the search; the more creative the thinking, the greater the potential reward. Unlike traditional competitive analysis, which focuses on outputs, benchmarking is applied to key operational processes within the business. Processes are compared and the best process is adapted into the organisation. Benchmarking is not guaranteed to be successful, though; it needs to be managed and nurtured in the organisation, and allowed to grow throughout the organisation to finally become a way of life. It also needs to be integrated into key business processes in order to ensure that the benefits can be reaped into the distant future. This paper provides guidelines for creating, managing and sustaining a benchmarking capability in a corporation.

    AFRIKAANSE OPSOMMING (translated): The new generation of enterprises has management models and processes that support the measurement and management of organisational performance. How these models are applied, and how the enterprise should set its direction, is not yet well established. Benchmarking ("praktykvergelyking") is described as the search for the best operating practices that lead to superior performance. The emphasis is placed on the words "best" and "superior". The search is in no way limited; the more creative the approach, the greater the potential reward. Where traditional competitive analysis examines the outputs of the enterprise, benchmarking is applied to key processes in the operation of the enterprise. Processes are compared with one another

  4. How benchmarking can improve patient nutrition.

    Science.gov (United States)

    Ellis, Jane

    Benchmarking is a tool that originated in business to enable organisations to compare their services with industry-wide best practice. Early last year the Department of Health published The Essence of Care, a benchmarking toolkit adapted for use in health care. It focuses on eight elements of care that are crucial to patients' experiences. Nurses and other health care professionals at a London NHS trust have begun a trust-wide benchmarking project. The aim is to improve patients' experiences of health care by sharing and comparing information, and by identifying examples of good practice and areas for improvement. The project began with two of the eight elements of The Essence of Care, with the intention of covering the rest later. This article describes the benchmarking process for nutrition and some of the consequent improvements in care.

  5. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure the feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  6. Application of the benchmarking method in Facility Management [Uplatnění metody benchmarking v rámci Facility management]

    OpenAIRE

    Jiroutová, Monika

    2009-01-01

    This bachelor's thesis discusses the possibilities of applying benchmarking in the field of Facility Management. The theoretical part describes the basic characteristics, elementary terms and methods of the benchmarking process in Facility Management. In the practical part, ten companies providing facility services are compared on the basis of a number of indices. Each company is briefly described. Based on the results of the performed analysis, the evolution of Facility Management in the Czech Republic ...

  7. How to benchmark methods for structure-based virtual screening of large compound libraries.

    Science.gov (United States)

    Christofferson, Andrew J; Huang, Niu

    2012-01-01

    Structure-based virtual screening is a useful computational technique for ligand discovery. To systematically evaluate different docking approaches, it is important to have a consistent benchmarking protocol that is both relevant and unbiased. Here, we describe the design of a benchmarking data set for docking screen assessment, a standard docking screening process, and the analysis and presentation of the enrichment of annotated ligands among a background decoy database.
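
    The enrichment of annotated ligands among decoys is commonly summarized with an enrichment factor. The sketch below is a generic illustration of that metric (a widely used definition, not the scoring code of this particular protocol):

```python
def enrichment_factor(ranked_is_ligand, top_fraction=0.01):
    """EF = (ligand hit rate in the top-ranked fraction) / (hit rate at random).

    ranked_is_ligand: booleans ordered by docking score, best first.
    """
    n = len(ranked_is_ligand)
    n_top = max(1, int(n * top_fraction))
    hits_top = sum(ranked_is_ligand[:n_top])
    total_ligands = sum(ranked_is_ligand)
    # Equivalent to (hits_top/n_top)/(total_ligands/n), kept in integer
    # arithmetic until the final division for exactness.
    return (hits_top * n) / (n_top * total_ligands)

# Toy example: 1000 compounds, 10 annotated ligands, 5 of them ranked in
# the top 1% (top 10) -> 50-fold enrichment over random selection.
ranking = [True] * 5 + [False] * 5 + [True] * 5 + [False] * 985
print(enrichment_factor(ranking))  # -> 50.0
```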

  8. Preliminary analysis of the proposed BN-600 benchmark core

    International Nuclear Information System (INIS)

    John, T.M.

    2000-01-01

    The Indira Gandhi Centre for Atomic Research is actively involved in the design of fast power reactors in India. The core physics calculations are performed with computer codes developed in-house or obtained from other laboratories and suitably modified to meet the computational requirements. The basic philosophy of the core physics calculations is to use diffusion theory codes with 25-group nuclear cross sections. Parameters that are very sensitive to core leakage, such as the power distribution at the core-blanket interface, are calculated using transport theory codes under the DSN approximation. All these codes use the finite difference approximation to treat the spatial variation of the neutron flux. Criticality problems with geometries too irregular to be represented in the conventional codes are solved using Monte Carlo methods. These codes and methods have been validated by the analysis of various critical assemblies and calculational benchmarks. The reactor core design procedure at IGCAR consists of: two- and three-dimensional diffusion theory calculations (codes ALCIALMI and 3DB); auxiliary calculations (neutron balance, power distributions, etc., performed by codes developed in-house); transport theory corrections from two-dimensional transport calculations (DOT); irregular geometries treated by the Monte Carlo method (KENO); the cross-section data library used is CV2M (25 group)

  9. Experiences with installing and benchmarking SCALE 4.0 on workstations

    International Nuclear Information System (INIS)

    Montierth, L.M.; Briggs, J.B.

    1992-01-01

    The advent of economical, high-speed workstations has placed on the criticality engineer's desktop the means to perform computational analysis that was previously possible only on mainframe computers. With this capability comes the need to modify and maintain criticality codes for use on a variety of different workstations. Due to the use of nonstandard coding, compiler differences (departures from American National Standards Institute (ANSI) standards), and other machine idiosyncrasies, there is a definite need to systematically test and benchmark all codes ported to workstations. Once benchmarked, a user environment must be maintained to ensure that user code does not become corrupted. The goal in creating a workstation version of the criticality safety analysis sequence (CSAS) codes in SCALE 4.0 was to start with the Cray versions and change as little source code as possible, yet produce as generic a code as possible. To date, this code has been ported to the IBM RISC 6000, Data General AViiON 400, and Silicon Graphics 4D-35 (all using the same source code), and to the Hewlett-Packard Series 700 workstations. The code is maintained under a configuration control procedure. In this paper, the authors address considerations that pertain to the installation and benchmarking of CSAS.

  10. IAEA sodium void reactivity benchmark calculations

    International Nuclear Information System (INIS)

    Hill, R.N.; Finck, P.J.

    1992-01-01

    In this paper, the IAEA 1992 ''Benchmark Calculation of Sodium Void Reactivity Effect in Fast Reactor Core'' problem is evaluated. The proposed design is a large axially heterogeneous oxide-fueled fast reactor, as described in Section 2; the core utilizes a sodium plenum above the core to enhance leakage effects. The calculation methods used in this benchmark evaluation are described in Section 3. In Section 4, the calculated core performance results for the benchmark reactor model are presented; and in Section 5, the influence of steel and interstitial sodium heterogeneity effects is estimated.

  11. Benchmark Imagery FY11 Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pope, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-06-14

    This report details the work performed in FY11 under project LL11-GS-PD06, “Benchmark Imagery for Assessing Geospatial Semantic Extraction Algorithms.” The original LCP for the Benchmark Imagery project called for creating a set of benchmark imagery for verifying and validating algorithms that extract semantic content from imagery. More specifically, the first year was slated to deliver real imagery that had been annotated, the second year to deliver real imagery that had composited features, and the final year was to deliver synthetic imagery modeled after the real imagery.

  12. Results from the IAEA benchmark of spallation models

    International Nuclear Information System (INIS)

    Leray, S.; David, J.C.; Khandaker, M.; Mank, G.; Mengoni, A.; Otsuka, N.; Filges, D.; Gallmeier, F.; Konobeyev, A.; Michel, R.

    2011-01-01

    Spallation reactions play an important role in a wide domain of applications. In the simulation codes used in this field, the nuclear interaction cross-sections and characteristics are computed by spallation models. The International Atomic Energy Agency (IAEA) has recently organised a benchmark of the spallation models used, or that could be used in the future, in high-energy transport codes. The objectives were, first, to assess the prediction capabilities of the different spallation models for the different mass and energy regions and the different exit channels and, second, to understand the reasons for the success or deficiency of the models. Results of the benchmark concerning both the analysis of the prediction capabilities of the models and the first conclusions on the physics of spallation models are presented. (authors)

  13. Nondestructive Damage Assessment of Composite Structures Based on Wavelet Analysis of Modal Curvatures: State-of-the-Art Review and Description of Wavelet-Based Damage Assessment Benchmark

    Directory of Open Access Journals (Sweden)

    Andrzej Katunin

    2015-01-01

    The application of composite structures as elements of machines and vehicles working under various operational conditions causes degradation and the occurrence of damage. Since composites are often used for critical elements, for example, parts of aircraft and other vehicles, it is extremely important to maintain them properly and to detect, localize, and identify damage occurring during their operation at the earliest possible stage of its development. Among the great variety of nondestructive testing methods developed to date, vibration-based methods are among the least expensive while remaining effective given appropriate processing of the measurement data. Over the last decades, wavelet analysis has gained great popularity in vibration-based structural testing due to its high sensitivity to damage. This paper presents an overview of the results of numerous researchers working in the area of vibration-based damage assessment supported by wavelet analysis, and a detailed description of the Wavelet-based Structural Damage Assessment (WavStructDamAs) Benchmark, which summarizes the author's 5-year research in this area. The benchmark covers example problems of damage identification in various composite structures with various damage types using numerous wavelet transforms and supporting tools. The benchmark is openly available and allows performing the analysis on the example problems as well as on the user's own problems using the available analysis tools.
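
    A minimal illustration of the underlying idea (not taken from the benchmark itself): wavelet detail coefficients of a modal-curvature-like signal peak at a local discontinuity, which is how wavelet analysis localizes damage. The sketch uses a single-level Haar transform as the simplest stand-in for the wavelet transforms surveyed above, applied to a synthetic signal.

```python
import math

def haar_details(signal):
    """Single-level Haar wavelet detail coefficients of an even-length signal."""
    return [(signal[2 * k] - signal[2 * k + 1]) / math.sqrt(2.0)
            for k in range(len(signal) // 2)]

# Synthetic "modal curvature": a smooth mode shape with a small local jump,
# standing in for the curvature discontinuity that damage introduces.
N, DAMAGE_AT = 128, 51
curvature = [math.sin(2.0 * math.pi * i / N) + (0.3 if i >= DAMAGE_AT else 0.0)
             for i in range(N)]

# The smooth part produces small detail coefficients everywhere; the jump
# produces a large one, so the peak localizes the damage.
details = haar_details(curvature)
peak = max(range(len(details)), key=lambda k: abs(details[k]))
print(2 * peak)  # -> 50: the detail peak sits at the injected damage (sample 51)
```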

  14. The OECD/NEA/NSC PBMR 400 MW coupled neutronics/thermal-hydraulics transient benchmark - steady-state results and status

    International Nuclear Information System (INIS)

    Reitsma, F.; Han, J.; Ivanov, K.; Sartori, E.

    2008-01-01

    The PBMR is a High-Temperature Gas-cooled Reactor (HTGR) concept developed to be built in South Africa. The analysis tools used for core neutronic design and core safety analysis need to be verified and validated. Since only a few pebble-bed HTR experimental facilities or plant data are available, code-to-code comparisons are an essential part of the V and V plans. As part of this plan, the PBMR 400 MW design and a representative set of transient cases were defined as an OECD benchmark. The scope of the benchmark is to establish a series of well-defined multi-dimensional computational benchmark problems with a common given set of cross-sections, in order to compare methods and tools in coupled neutronics and thermal-hydraulics analysis, with a specific focus on transient events. The OECD benchmark includes steady-state and transient cases. Although the focus of the benchmark is on modelling the transient behaviour of the PBMR core, it was also necessary to define some steady-state cases to ensure consistency between the different approaches before results of transient cases could be compared. This paper describes the status of the benchmark project and presents the results for the three steady-state exercises, defined as a standalone neutronics calculation, a standalone thermal-hydraulic core calculation, and a coupled neutronics/thermal-hydraulics simulation. (authors)

  15. Review for session K - benchmarks

    International Nuclear Information System (INIS)

    McCracken, A.K.

    1980-01-01

    Eight of the papers to be considered in Session K are directly concerned, at least in part, with the Pool Critical Assembly (P.C.A.) benchmark at Oak Ridge. The remaining seven papers in this session, the subject of this review, are concerned with a variety of topics related to the general theme of Benchmarks and will be considered individually

  16. DeltaSA tool for source apportionment benchmarking, description and sensitivity analysis

    Science.gov (United States)

    Pernigotti, D.; Belis, C. A.

    2018-05-01

    DeltaSA is an R package and a Java on-line tool developed at the EC Joint Research Centre to assist and benchmark source apportionment applications. Its key functionalities support two critical tasks in this kind of study: the assignment of a factor to a source in factor-analytical models (source identification) and the evaluation of model performance. The source identification is based on the similarity between a given factor and source chemical profiles from public databases. The model performance evaluation is based on statistical indicators used to compare model output with reference values generated in intercomparison exercises. The reference values are calculated as the ensemble average of the results reported by participants that have passed a set of testing criteria based on chemical-profile and time-series similarity. In this study, a sensitivity analysis of the model performance criteria is carried out using the results of a synthetic dataset for which "a priori" references are available. The consensus-modulated standard deviation p_unc is the best choice for the model performance evaluation when a conservative approach is adopted.
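
    The ensemble-reference construction described above can be sketched in a few lines. This is an illustrative simplification (the screening criteria are reduced to a pre-computed pass/fail set, and the function name is hypothetical), not the actual DeltaSA implementation:

```python
import statistics

def ensemble_reference(results_by_participant, passed):
    """Reference value and spread from participants that passed screening.

    results_by_participant: {participant: reported value for one source};
    passed: participants meeting the chemical-profile / time-series
    similarity criteria (the screening itself is not modelled here).
    """
    accepted = [v for p, v in results_by_participant.items() if p in passed]
    return statistics.mean(accepted), statistics.stdev(accepted)

# Toy exercise: participant D reports an outlying source contribution and
# is excluded by the testing criteria before the reference is computed.
results = {"A": 0.52, "B": 0.48, "C": 0.50, "D": 0.91}
ref, spread = ensemble_reference(results, passed={"A", "B", "C"})
print(round(ref, 2), round(spread, 2))  # -> 0.5 0.02
```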

  17. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 3B. Kozloduy NPP units 5/6: Analysis/testing. Working material

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-07-01

    The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated subsequent to the request from representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consultants' Meeting which followed resulted in a working document for the CRP. It was decided that a benchmark study would be the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested on a full scale using explosions and/or vibration generators. This volume of Working material contains reports on the analyses and testing of the Kozloduy nuclear power plant, units 5 and 6.

  18. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 3A. Kozloduy NPP units 5/6: Analysis/testing. Working material

    International Nuclear Information System (INIS)

    1995-01-01

    The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated subsequent to the request from representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consultants' Meeting which followed resulted in a working document for the CRP. It was decided that a benchmark study would be the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested on a full scale using explosions and/or vibration generators. This volume of Working material contains reports on the analyses and testing of the Kozloduy nuclear power plant, units 5 and 6.

  19. Benchmark and physics testing of LIFE-4C. Summary

    International Nuclear Information System (INIS)

    Liu, Y.Y.

    1984-06-01

    LIFE-4C is a steady-state/transient analysis code developed for performance evaluation of carbide [(U,Pu)C and UC] fuel elements in advanced LMFBRs. This paper summarizes selected results obtained during a crucial step in the development of LIFE-4C - benchmark and physics testing

  20. Statistical benchmarking in utility regulation: Role, standards and methods

    International Nuclear Information System (INIS)

    Newton Lowry, Mark; Getachew, Lullit

    2009-01-01

    Statistical benchmarking is being used with increasing frequency around the world in utility rate regulation. We discuss how and where benchmarking is in use for this purpose and the pros and cons of regulatory benchmarking. We then discuss alternative performance standards and benchmarking methods in regulatory applications. We use these to propose guidelines for the appropriate use of benchmarking in the rate setting process. The standards, which we term the competitive market and frontier paradigms, have a bearing on method selection. These along with regulatory experience suggest that benchmarking can either be used for prudence review in regulation or to establish rates or rate setting mechanisms directly

  1. Benchmarking of industrial control systems via case-based reasoning

    International Nuclear Information System (INIS)

    Hadjiiski, M.; Boshnakov, K.; Georgiev, Z.

    2013-01-01

    Full text: The recent development of information and communication technologies enables the establishment of virtual consultation centers related to the control of specific processes; these can serve installations worldwide, since the location of the installations has no influence on the results. The centers can provide consultations regarding the quality of process control and overall enterprise management, as correction factors such as weather conditions, the product or service and its associated technology, production level, quality of the feedstock used and others can also be taken into account. The benchmarking technique is chosen as a tool for analyzing and comparing the quality of the assessed control systems in individual plants. It is a process of gathering, analyzing and comparing data on the characteristics of comparable units, in order to assess these characteristics and improve the performance of the particular process, enterprise or organization. By comparing the different processes and adopting the best practices, energy efficiency can be improved and hence the competitiveness of the participating organizations will increase. In the presented work, an algorithm for benchmarking and parametric optimization of a given control system is developed by applying the approaches of Case-Based Reasoning (CBR) and Data Envelopment Analysis (DEA). Expert knowledge and approaches for optimal tuning of control systems are combined. Two of the most common systems for automatic control of different variables in the case of biological wastewater treatment are presented and discussed. Based on an analysis of the processes, different cases are defined. By using DEA analysis, the relative efficiencies of 10 systems for automatic control of dissolved oxygen are estimated. The CBR and DEA approaches designed and implemented in the current work are applicable for the purposes of virtual consultation centers. Key words: benchmarking technique, energy efficiency, Case-Based Reasoning (CBR)
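
    The DEA step described above, estimating the relative efficiency of each control system against the others, can be sketched with a standard input-oriented CCR model solved as a linear programme. This is a generic textbook formulation with invented data, not the paper's actual model or measurements:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(inputs, outputs, k):
    """Input-oriented CCR efficiency of unit k (rows = units).
    Solves: min theta s.t. sum(l*x) <= theta*x_k, sum(l*y) >= y_k, l >= 0."""
    X = np.asarray(inputs, float)
    Y = np.asarray(outputs, float)
    n = X.shape[0]
    c = np.zeros(n + 1)                       # variables: [theta, l_1..l_n]
    c[0] = 1.0                                # minimise theta
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):               # inputs: sum(l*x_i) - theta*x_ki <= 0
        A_ub.append(np.concatenate(([-X[k, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(Y.shape[1]):               # outputs: -sum(l*y_r) <= -y_kr
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[k, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n + 1))
    return float(res.x[0])

# Invented example: two control systems, one input (energy), one output (load treated)
inputs = [[2.0], [4.0]]
outputs = [[2.0], [2.0]]
eff = [dea_ccr_efficiency(inputs, outputs, k) for k in range(2)]
```

    A unit with efficiency 1.0 lies on the efficient frontier; the second unit here consumes twice the energy for the same output, so its score is 0.5.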

  2. Development of a California commercial building benchmarking database

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.
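
    At its core, this kind of benchmarking reduces to placing a building's energy use intensity (EUI) within the distribution of a survey population. A minimal sketch, with invented numbers rather than CEUS or CBECS data:

```python
import numpy as np

def benchmark_eui(annual_kwh, floor_area_m2, survey_eui):
    """Compare a building's energy use intensity (kWh/m2/yr) with the
    distribution of survey EUIs; returns the EUI and its percentile rank."""
    eui = annual_kwh / floor_area_m2
    percentile = 100.0 * float(np.mean(np.asarray(survey_eui) <= eui))
    return eui, percentile

survey = [100, 150, 200, 250, 300, 350, 400, 450]  # hypothetical survey EUIs
eui, pct = benchmark_eui(250_000, 1_000, survey)   # 250 kWh/m2/yr
```

    A building at the 50th percentile uses about as much energy per unit area as the median surveyed building; a high percentile flags a candidate for savings.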

  3. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 3E. Kozloduy NPP units 5/6: Analysis/testing. Working material

    International Nuclear Information System (INIS)

    1996-01-01

    The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated subsequent to the request from representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consultants' Meeting which followed resulted in a working document for the CRP. It was decided that a benchmark study would be the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested on a full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for WWER-1000 and Paks for WWER-440/213 nuclear power plants. This volume of Working material contains reports on data related to floor response spectra of Kozloduy NPP; calculational-experimental examination and assurance of equipment and pipeline seismic resistance at starting and operating WWER-type NPPs; analysis of design floor response spectra and testing of the electrical systems; experimental investigations and seismic analysis of Kozloduy NPP; testing of components on shaking-table facilities and a contribution to full-scale dynamic testing of Kozloduy NPP; and seismic evaluation of the main steam line, piping systems, containment pre-stressing and steel ventilation chimney of Kozloduy NPP.

  4. Benchmark calculation programme concerning typical LMFBR structures

    International Nuclear Information System (INIS)

    Donea, J.; Ferrari, G.; Grossetie, J.C.; Terzaghi, A.

    1982-01-01

    This programme, which is part of a comprehensive activity aimed at resolving difficulties encountered in using design procedures based on ASME Code Case N-47, should allow to get confidence in computer codes which are supposed to provide a realistic prediction of the LMFBR component behaviour. The calculations started on static analysis of typical structures made of non linear materials stressed by cyclic loads. The fluid structure interaction analysis is also being considered. Reasons and details of the different benchmark calculations are described, results obtained are commented and future computational exercise indicated

  5. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 4B. Paks NPP: Analysis/testing. Working material

    International Nuclear Information System (INIS)

    1995-01-01

    The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated subsequent to the request from representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consultants' Meeting which followed resulted in a working document for the CRP. It was decided that a benchmark study would be the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested on a full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for WWER-1000 and Paks for WWER-440/213 nuclear power plants. This volume of Working material contains reports on the dynamic study of the main building of the Paks NPP, the shake-table investigation at Paks NPP, and the Final report of the Co-ordinated Research Programme.

  6. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 4A. Paks NPP: Analysis/testing. Working material

    International Nuclear Information System (INIS)

    1995-01-01

    The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated subsequent to the request from representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consultants' Meeting which followed resulted in a working document for the CRP. It was decided that a benchmark study would be the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested on a full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for WWER-1000 and Paks for WWER-440/213 nuclear power plants. This volume of Working material contains reports on data related to seismic analyses of structures of the Paks and Kozloduy reactor buildings and of WWER-440/213 primary coolant loops with different antiseismic devices.

  7. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 4A. Paks NPP: Analysis/testing. Working material

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-07-01

    The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated subsequent to the request from representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consultants' Meeting which followed resulted in a working document for the CRP. It was decided that a benchmark study would be the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested on a full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for WWER-1000 and Paks for WWER-440/213 nuclear power plants. This volume of Working material contains reports on data related to seismic analyses of structures of the Paks and Kozloduy reactor buildings and of WWER-440/213 primary coolant loops with different antiseismic devices.

  8. A BENCHMARKING ANALYSIS FOR FIVE RADIONUCLIDE VADOSE ZONE MODELS (CHAIN, MULTIMED_DP, FECTUZ, HYDRUS, AND CHAIN 2D) IN SOIL SCREENING LEVEL CALCULATIONS

    Science.gov (United States)

    Five radionuclide vadose zone models with different degrees of complexity (CHAIN, MULTIMED_DP, FECTUZ, HYDRUS, and CHAIN 2D) were selected for use in soil screening level (SSL) calculations. A benchmarking analysis between the models was conducted for a radionuclide (99Tc) rele...

  9. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 4G. Paks NPP: Analysis and testing. Working material

    International Nuclear Information System (INIS)

    1999-01-01

    In August 1991, following the SMiRT-11 Conference in Tokyo, a Technical Committee Meeting was held on 'Seismic safety issues relating to existing NPPs'. The Proceedings of this TCM were subsequently compiled in an IAEA Working Material. One of the main recommendations of this TCM called for the harmonization of criteria and methods used in Member States in the seismic reassessment and upgrading of existing NPPs. Twenty-four institutions from thirteen countries participated in the CRP named 'Benchmark study for the seismic analysis and testing of WWER type NPPs'. Two types of WWER reactors (WWER-1000 and WWER-440/213) were selected for benchmarking, represented by Kozloduy NPP Units 5/6 and Paks NPP as prototypes, respectively. Consistent with the recommendations of the TCM and the working paper prepared by the subsequent Consultants' Meeting, the focal activity of the CRP was the benchmarking exercises. A similar methodology was followed both for Paks NPP and Kozloduy NPP Unit 5. Firstly, the NPP (mainly the reactor building) was tested using blast loading generated by a series of explosions from buried TNT charges. Records from this test were obtained at several free-field locations (both downhole and surface), the foundation mat, various elevations of structures, as well as some tanks and the stack. The benchmark participants were then provided with structural drawings, soil data and the free-field record of the blast experiment. Their task was to make a blind prediction of the response at preselected locations. The analytical results from these participants were then compared with the results from the test. Although the benchmarking exercises constituted the focus of the CRP, many other interesting problems related to the seismic safety of WWER type NPPs were addressed by the participants. These involved generic studies, i.e. codes and standards used in original WWER designs and their comparison with current international practice; seismic analysis

  10. 40 CFR 141.172 - Disinfection profiling and benchmarking.

    Science.gov (United States)

    2010-07-01

    ... benchmarking. 141.172 Section 141.172 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Disinfection-Systems Serving 10,000 or More People § 141.172 Disinfection profiling and benchmarking. (a... sanitary surveys conducted by the State. (c) Disinfection benchmarking. (1) Any system required to develop...

  11. Raising Quality and Achievement. A College Guide to Benchmarking.

    Science.gov (United States)

    Owen, Jane

    This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…

  12. Gatemon Benchmarking and Two-Qubit Operation

    Science.gov (United States)

    Casparis, Lucas; Larsen, Thorvald; Olsen, Michael; Petersson, Karl; Kuemmeth, Ferdinand; Krogstrup, Peter; Nygard, Jesper; Marcus, Charles

    Recent experiments have demonstrated superconducting transmon qubits with semiconductor nanowire Josephson junctions. These hybrid gatemon qubits utilize the field-effect tunability unique to semiconductors to allow complete qubit control using gate voltages, potentially a technological advantage over conventional flux-controlled transmons. Here, we present experiments with a two-qubit gatemon circuit. We characterize qubit coherence and stability and use randomized benchmarking to demonstrate single-qubit gate errors of ~0.5 % for all gates, including voltage-controlled Z rotations. We show coherent capacitive coupling between two gatemons and coherent SWAP operations. Finally, we perform a two-qubit controlled-phase gate with an estimated fidelity of ~91 %, demonstrating the potential of gatemon qubits for building scalable quantum processors. We acknowledge financial support from Microsoft Project Q and the Danish National Research Foundation.
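
    Gate errors of the kind quoted (~0.5 %) are conventionally extracted from randomized benchmarking by fitting the sequence fidelity to the decay F(m) = A·p^m + B and converting the depolarizing parameter p to an average error per gate, r = (1 - p)(d - 1)/d with d = 2 for a single qubit. A sketch with synthetic, noiseless data (the abstract does not give the raw decay curves):

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, B, p):
    """Randomized-benchmarking sequence fidelity model F(m) = A*p^m + B."""
    return A * p**m + B

def error_per_gate(seq_lengths, fidelities):
    """Fit the decay and return r = (1 - p)(d - 1)/d for a qubit (d = 2)."""
    (A, B, p), _ = curve_fit(rb_decay, seq_lengths, fidelities,
                             p0=[0.4, 0.5, 0.95], maxfev=10000)
    return (1.0 - p) / 2.0

m = np.arange(0, 200, 10)
f = 0.5 * 0.99**m + 0.5        # synthetic decay with true p = 0.99
r = error_per_gate(m, f)       # recovers ~0.5 % error per gate
```

    In a real experiment each point would be an average over many random sequences of length m, and the fit would carry statistical uncertainty.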

  13. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means the revealed performance (how well the firm performs in the actual market environment), given the basic characteristics of the firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality or work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for management to continuously improve the firm's efficiency and effectiveness, and for managers to know the success factors and competitiveness determinants, consequently determines which performance measures are most critical to the firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking of firm-level performance are critical interdependent activities. Firm-level variables used to infer performance are often interdependent for operational reasons; hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other types of profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.

  14. Supply chain integration scales validation and benchmark values

    Directory of Open Access Journals (Sweden)

    Juan A. Marin-Garcia

    2013-06-01

    Full Text Available Purpose: The clarification of the constructs of supply chain integration (clients, suppliers, external and internal), the creation of a measurement instrument based on a list of items taken from earlier papers, the validation of these scales, and a preliminary benchmark to interpret the scales by percentiles based on a set of control variables (size of the plant, country, sector and degree of vertical integration). Design/methodology/approach: Our empirical analysis is based on the HPM project database (2005-2007 timeframe). The international sample is made up of 266 plants across ten countries: Austria, Canada, Finland, Germany, Italy, Japan, Korea, Spain, Sweden and the USA. We analyzed the descriptive statistics, internal consistency testing to purify the items (inter-item correlations, Cronbach's alpha, squared multiple correlation, corrected item-total correlation), exploratory factor analysis and, finally, a confirmatory factor analysis to check the convergent and discriminant validity of the scales. The analyses were done with the SPSS and EQS programs using the maximum likelihood parameter estimation method. Findings: The four proposed scales show excellent psychometric properties. Research limitations/implications: With a clearer and more concise designation of the supply chain integration measurement scales, more reliable and accurate data could be taken to analyse the relations between these constructs and other variables of interest to the academic field. Practical implications: Providing scales that are valid as a diagnostic tool for best practices, as well as providing a benchmark with which to compare the score for each individual plant against a collection of industrial companies from the machinery, electronics and transportation sectors. Originality/value: Supply chain integration may be a major factor in explaining the performance of companies. The results are nevertheless inconclusive, the vast range
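
    Of the internal-consistency statistics listed in the abstract, Cronbach's alpha is straightforward to reproduce: alpha = k/(k-1) · (1 - Σ var(item) / var(total)). A minimal sketch with synthetic responses, not the HPM data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents x items matrix."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)        # per-item sample variance
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of summed scale
    return k / (k - 1) * (1.0 - item_variances.sum() / total_variance)

# Perfectly parallel items give the maximum alpha of 1
responses = np.array([[1, 1], [2, 2], [3, 3], [4, 4]])
alpha = cronbach_alpha(responses)
```

    In scale-validation practice, values above roughly 0.7 are usually taken as acceptable internal consistency, though the threshold is a convention rather than a statistical law.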

  15. How to Advance TPC Benchmarks with Dependability Aspects

    Science.gov (United States)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact both on the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring their performances. While TPC-E measures the recovery time of some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that nowadays systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  16. Proposal and analysis of the benchmark problem suite for reactor physics study of LWR next generation fuels

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-10-01

    In order to investigate the calculation accuracy of the nuclear characteristics of LWR next generation fuels, the Research Committee on Reactor Physics organized by JAERI established the Working Party on Reactor Physics for LWR Next Generation Fuels. Next generation fuels are those aiming at further extended burn-up, such as 70 GWd/t, beyond the current design. The Working Party has proposed six benchmark problems, which consist of pin-cell, PWR fuel assembly and BWR fuel assembly geometries loaded with uranium and MOX fuels, respectively. The specifications of the benchmark problems neglect some of the current limitations, such as 5 wt% {sup 235}U, to achieve the above-mentioned target. Eleven organizations in the Working Party have carried out the analyses of the benchmark problems. As a result, the status of accuracy with current data and methods, and some problems to be solved in the future, were clarified. In this report, details of the benchmark problems, the results from each organization, and their comparisons are presented. (author)

  17. Benchmarking the MCNP code for Monte Carlo modelling of an in vivo neutron activation analysis system.

    Science.gov (United States)

    Natto, S A; Lewis, D G; Ryde, S J

    1998-01-01

    The Monte Carlo computer code MCNP (version 4A) has been used to develop a personal computer-based model of the Swansea in vivo neutron activation analysis (IVNAA) system. The model included specification of the neutron source (252Cf), collimators, reflectors and shielding. The MCNP model was 'benchmarked' against fast neutron and thermal neutron fluence data obtained experimentally from the IVNAA system. The Swansea system allows two irradiation geometries using 'short' and 'long' collimators, which provide alternative dose rates for IVNAA. The data presented here relate to the short collimator, although results of similar accuracy were obtained using the long collimator. The fast neutron fluence was measured in air at a series of depths inside the collimator. The measurements agreed with the MCNP simulation within the statistical uncertainty (5-10%) of the calculations. The thermal neutron fluence was measured and calculated inside the cuboidal water phantom. The depth of maximum thermal fluence was 3.2 cm (measured) and 3.0 cm (calculated). The width of the 50% thermal fluence level across the phantom at its mid-depth was found to be the same by both MCNP and experiment. This benchmarking exercise has given us a high degree of confidence in MCNP as a tool for the design of IVNAA systems.
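
    The agreement criterion reported above, simulation matching measurement within the 5-10 % statistical uncertainty of the calculations, can be expressed compactly. The 7.5 % default below is an assumed midpoint of that quoted range, and the fluence values are invented:

```python
import numpy as np

def agrees_within_uncertainty(measured, calculated, rel_sigma=0.075):
    """True where the measured fluence lies within the assumed relative
    statistical uncertainty of the Monte Carlo result."""
    measured = np.asarray(measured, float)
    calculated = np.asarray(calculated, float)
    return np.abs(measured - calculated) <= rel_sigma * calculated

# Invented fluences (arbitrary units) at two depths: one agrees, one does not
ok = agrees_within_uncertainty([1.05, 1.20], [1.00, 1.00])
```

    A stricter benchmark would propagate the experimental uncertainty as well and test against the combined uncertainty of both values.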

  18. Ontology for Semantic Data Integration in the Domain of IT Benchmarking.

    Science.gov (United States)

    Pfaff, Matthias; Neubig, Stefan; Krcmar, Helmut

    2018-01-01

    A domain-specific ontology for IT benchmarking has been developed to bridge the gap between a systematic characterization of IT services and their data-based valuation. Since information is generally collected during a benchmark exercise using questionnaires on a broad range of topics, such as employee costs, software licensing costs, and quantities of hardware, it is commonly stored as natural language text; thus, this information is stored in an intrinsically unstructured form. Although these data form the basis for identifying potentials for IT cost reductions, neither a uniform description of any measured parameters nor the relationship between such parameters exists. Hence, this work proposes an ontology for the domain of IT benchmarking, available at https://w3id.org/bmontology. The design of this ontology is based on requirements mainly elicited from a domain analysis, which considers analyzing documents and interviews with representatives from Small- and Medium-Sized Enterprises and Information and Communications Technology companies over the last eight years. The development of the ontology and its main concepts is described in detail (i.e., the conceptualization of benchmarking events, questionnaires, IT services, indicators and their values) together with its alignment with the DOLCE-UltraLite foundational ontology.

  19. Energy saving in WWTP: Daily benchmarking under uncertainty and data availability limitations.

    Science.gov (United States)

    Torregrossa, D; Schutz, G; Cornelissen, A; Hernández-Sancho, F; Hansen, J

    2016-07-01

    Efficient management of Waste Water Treatment Plants (WWTPs) can produce significant environmental and economic benefits. Energy benchmarking can be used to compare WWTPs, identify targets and use these to improve their performance. Different authors have performed benchmark analyses on a monthly or yearly basis, but their approaches suffer from a time lag between an event, its detection, interpretation and potential actions. The availability of on-line measurement data at many WWTPs should theoretically enable a decrease in management response time through daily benchmarking. Unfortunately, this approach is often impossible because of limited data availability. This paper proposes a methodology to perform a daily benchmark analysis under database limitations. The methodology has been applied to the Energy Online System (EOS) developed in the framework of the project "INNERS" (INNovative Energy Recovery Strategies in the urban water cycle). EOS calculates a set of Key Performance Indicators (KPIs) for the evaluation of energy and process performance. In EOS, the energy KPIs take the pollutant load into consideration in order to enable comparison between different plants. For example, EOS does not analyse the energy consumption as such, but the energy consumption per unit of pollutant load. This approach enables the comparison of performance for plants with different loads, or for a single plant under different load conditions. The energy consumption is measured by on-line sensors, while the pollutant load is measured in the laboratory approximately every 14 days. Consequently, the unavailability of the water quality parameters is the limiting factor in calculating energy KPIs. In this paper, in order to overcome this limitation, the authors have developed a methodology to estimate the required parameters and manage the uncertainty in the estimation. By coupling the parameter estimation with an interval-based benchmark approach, the authors propose an effective, fast and reproducible
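
    The core of such an approach, filling in the sparse laboratory pollutant-load series at daily resolution so that a daily energy KPI and its uncertainty interval can be computed, can be sketched as follows. The linear interpolation, the interval construction and all numbers are illustrative assumptions, not the estimation method actually used in EOS:

```python
import numpy as np

def daily_energy_kpi(days, kwh_per_day, lab_days, lab_cod_kg):
    """kWh per kg COD, with the ~14-day laboratory COD series
    interpolated to daily resolution (illustrative estimator)."""
    cod_daily = np.interp(days, lab_days, lab_cod_kg)
    return kwh_per_day / cod_daily

def kpi_interval(kwh_per_day, cod_low, cod_high):
    """Interval for the KPI when the load is only known to lie in a range."""
    return kwh_per_day / cod_high, kwh_per_day / cod_low

days = np.arange(0, 15)
kwh = np.full(15, 300.0)                     # metered energy, kWh/day
kpi = daily_energy_kpi(days, kwh, [0, 14], [100.0, 200.0])
```

    Benchmarking on the resulting interval rather than a point estimate keeps the daily comparison honest about what the 14-day sampling actually supports.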

  20. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. .It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l