WorldWideScience

Sample records for functional benchmark results

  1. Analysis of Benchmark 2 results

    International Nuclear Information System (INIS)

    Bacha, F.; Lefievre, B.; Maillard, J.; Silva, J.

    1994-01-01

The code GEANT315 has been compared to different codes in two benchmarks. We analyze its performance through our results, especially in the thick-target case. In spite of gaps in nucleus-nucleus interaction theories at intermediate energies, benchmarks allow possible improvements of the physical models used in our codes. Thereafter, a scheme for a radioactive waste burning system is studied. (authors). 4 refs., 7 figs., 1 tab

  2. Benchmarking

    OpenAIRE

    Meylianti S., Brigita

    1999-01-01

Benchmarking has different meanings to different people. There are five types of benchmarking, namely internal benchmarking, competitive benchmarking, industry/functional benchmarking, process/generic benchmarking and collaborative benchmarking. Each type of benchmarking has its own advantages as well as disadvantages. Therefore it is important to know which kind of benchmarking is suitable for a specific application. This paper will discuss those five types of benchmarking in detail, includ...

  3. ZZ WPPR, Pu Recycling Benchmark Results

    International Nuclear Information System (INIS)

    Lutz, D.; Mattes, M.; Delpech, Marc; Juanola, Marc

    2002-01-01

Description of program or function: The NEA NSC Working Party on Physics of Plutonium Recycling has commissioned a series of benchmarks covering: - Plutonium recycling in pressurized-water reactors; - Void reactivity effect in pressurized-water reactors; - Fast plutonium-burner reactors: beginning of life; - Plutonium recycling in fast reactors; - Multiple recycling in advanced pressurized-water reactors. The results have been published (see references). ZZ-WPPR-1-A/B contains graphs and tables related to the PWR MOX pin cell benchmark, representing typical fuel for plutonium recycling, one case corresponding to a first cycle and the other to a fifth cycle. These computer-readable files contain the complete set of results, while the printed report contains only a subset. ZZ-WPPR-2-CYC1 contains the results from cycle 1 of the multiple recycling benchmarks

  4. Benchmark results in radiative transfer

    International Nuclear Information System (INIS)

    Garcia, R.D.M.; Siewert, C.E.

    1986-02-01

Several aspects of the F_N method are reported, and the method is used to solve accurately some benchmark problems in radiative transfer in the field of atmospheric physics. The method was modified to solve cases of pure scattering, and an improved process was developed for computing the radiation intensity. An algorithm for computing several quantities used in the F_N method was developed. An improved scheme to evaluate certain integrals relevant to the method is given, along with a two-term recursion relation that has proved useful for the numerical evaluation of matrix elements basic to the method. The methods used to solve the resulting linear algebraic equations are discussed, and the numerical results are evaluated. (M.C.K.)

  5. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own-price elasticity.
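The paper's own derivations are not reproduced in this record; as a minimal numerical illustration of one well-known benchmark case (quasilinear utility, where one good has zero income elasticity), the sketch below uses the hypothetical utility u(x, y) = 2·sqrt(x) + y, which is an assumption for illustration and not a functional form taken from the paper:

```python
# Quasilinear utility u(x, y) = 2*sqrt(x) + y: a textbook benchmark in which
# good x has zero income elasticity (its demand does not depend on income m).
def demand_x(px, m, py=1.0):
    # Maximize 2*sqrt(x) + y subject to px*x + py*y = m.
    # FOC: py/sqrt(x) = px  ->  x* = (py/px)**2, independent of m.
    return (py / px) ** 2

# Income elasticity via a finite difference: (dx/dm) * (m / x) = 0 here,
# because the demand for x is flat in income.
px, m, dm = 2.0, 10.0, 0.01
x0, x1 = demand_x(px, m), demand_x(px, m + dm)
income_elasticity = ((x1 - x0) / dm) * (m / x0)
print(income_elasticity)  # 0.0
```

The same finite-difference check applied to the price arguments would recover the other benchmark elasticities discussed in the paper.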

  6. The Benchmark Test Results of QNX RTOS

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jang Yeol; Lee, Young Jun; Cheon, Se Woo; Lee, Jang Soo; Kwon, Kee Choon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2010-10-15

A Real-Time Operating System (RTOS) is an Operating System (OS) intended for real-time applications. A benchmark is a point of reference by which something can be measured. QNX is an RTOS developed by QSSL (QNX Software Systems Ltd.) in Canada. ELMSYS is the brand name of a commercially available Personal Computer (PC) for applications such as the Cabinet Operator Module (COM) of the Digital Plant Protection System (DPPS) and the COM of the Digital Engineered Safety Features Actuation System (DESFAS). The ELMSYS PC hardware is being qualified by KTL (Korea Testing Lab.) for use as a Cabinet Operator Module (COM). The QNX RTOS is being dedicated by the Korea Atomic Energy Research Institute (KAERI). This paper describes the outline and benchmark test results on context switching, message passing, synchronization and deadline violation of the QNX RTOS on the ELMSYS PC platform

  7. The Benchmark Test Results of QNX RTOS

    International Nuclear Information System (INIS)

    Kim, Jang Yeol; Lee, Young Jun; Cheon, Se Woo; Lee, Jang Soo; Kwon, Kee Choon

    2010-01-01

A Real-Time Operating System (RTOS) is an Operating System (OS) intended for real-time applications. A benchmark is a point of reference by which something can be measured. QNX is an RTOS developed by QSSL (QNX Software Systems Ltd.) in Canada. ELMSYS is the brand name of a commercially available Personal Computer (PC) for applications such as the Cabinet Operator Module (COM) of the Digital Plant Protection System (DPPS) and the COM of the Digital Engineered Safety Features Actuation System (DESFAS). The ELMSYS PC hardware is being qualified by KTL (Korea Testing Lab.) for use as a Cabinet Operator Module (COM). The QNX RTOS is being dedicated by the Korea Atomic Energy Research Institute (KAERI). This paper describes the outline and benchmark test results on context switching, message passing, synchronization and deadline violation of the QNX RTOS on the ELMSYS PC platform

  8. MIPS bacterial genomes functional annotation benchmark dataset.

    Science.gov (United States)

Tetko, Igor V; Brauner, Barbara; Dunger-Kaltenbach, Irmtraud; Frishman, Goar; Montrone, Corinna; Fobo, Gisela; Ruepp, Andreas; Antonov, Alexey V; Surmeli, Dimitrij; Mewes, Hans-Werner

    2005-05-15

    Any development of new methods for automatic functional annotation of proteins according to their sequences requires high-quality data (as benchmark) as well as tedious preparatory work to generate sequence parameters required as input data for the machine learning methods. Different program settings and incompatible protocols make a comparison of the analyzed methods difficult. The MIPS Bacterial Functional Annotation Benchmark dataset (MIPS-BFAB) is a new, high-quality resource comprising four bacterial genomes manually annotated according to the MIPS functional catalogue (FunCat). These resources include precalculated sequence parameters, such as sequence similarity scores, InterPro domain composition and other parameters that could be used to develop and benchmark methods for functional annotation of bacterial protein sequences. These data are provided in XML format and can be used by scientists who are not necessarily experts in genome annotation. BFAB is available at http://mips.gsf.de/proj/bfab

  9. The Medical Library Association Benchmarking Network: results*

    Science.gov (United States)

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C.; Smith, Bernie Todd

    2006-01-01

Objective: This article presents some limited results from the Medical Library Association (MLA) Benchmarking Network survey conducted in 2002. Other uses of the data are also presented. Methods: After several years of development and testing, a Web-based survey opened for data input in December 2001. Three hundred eighty-five MLA members entered data on the size of their institutions and the activities of their libraries. The data from 344 hospital libraries were edited and selected for reporting in aggregate tables and on an interactive site in the Members-Only area of MLANET. The data represent a 16% to 23% return rate and have a 95% confidence level. Results: Specific questions can be answered using the reports. The data can be used to review internal processes, perform outcomes benchmarking, retest a hypothesis, refute previous survey findings, or develop library standards. The data can also be compared with current surveys or with past surveys to look for trends. Conclusions: The impact of this project on MLA will reach into the areas of research and advocacy. The data will be useful in the everyday work of small health sciences libraries as well as provide concrete data on the current practices of health sciences libraries. PMID:16636703

  10. Results of LWR core transient benchmarks

    International Nuclear Information System (INIS)

    Finnemann, H.; Bauer, H.; Galati, A.; Martinelli, R.

    1993-10-01

LWR core transient (LWRCT) benchmarks, based on well defined problems with a complete set of input data, are used to assess the discrepancies between three-dimensional space-time kinetics codes in transient calculations. The PWR problem chosen is the ejection of a control assembly from an initially critical core at hot zero power or at full power, each for three different geometrical configurations. The set of problems offers a variety of reactivity excursions which efficiently test the coupled neutronic/thermal-hydraulic models of the codes. The 63 sets of submitted solutions are analyzed by comparison with a nodal reference solution defined by using a finer spatial and temporal resolution than in standard calculations. The BWR problems considered are reactivity excursions caused by cold water injection and pressurization events. In the present paper, only the cold water injection event is discussed and evaluated in some detail. Lacking a reference solution, the evaluation of the 8 sets of BWR contributions relies on a synthetic comparative discussion. The results of this first phase of LWRCT benchmark calculations are quite satisfactory, though there remain some unresolved issues. It is therefore concluded that even more challenging problems can be successfully tackled in a suggested second test phase. (authors). 46 figs., 21 tabs., 3 refs

  11. The Medical Library Association Benchmarking Network: results.

    Science.gov (United States)

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C; Smith, Bernie Todd

    2006-04-01

This article presents some limited results from the Medical Library Association (MLA) Benchmarking Network survey conducted in 2002. Other uses of the data are also presented. After several years of development and testing, a Web-based survey opened for data input in December 2001. Three hundred eighty-five MLA members entered data on the size of their institutions and the activities of their libraries. The data from 344 hospital libraries were edited and selected for reporting in aggregate tables and on an interactive site in the Members-Only area of MLANET. The data represent a 16% to 23% return rate and have a 95% confidence level. Specific questions can be answered using the reports. The data can be used to review internal processes, perform outcomes benchmarking, retest a hypothesis, refute previous survey findings, or develop library standards. The data can also be compared with current surveys or with past surveys to look for trends. The impact of this project on MLA will reach into the areas of research and advocacy. The data will be useful in the everyday work of small health sciences libraries as well as provide concrete data on the current practices of health sciences libraries.

  12. Repeated Results Analysis for Middleware Regression Benchmarking

    Czech Academy of Sciences Publication Activity Database

    Bulej, Lubomír; Kalibera, T.; Tůma, P.

    2005-01-01

    Roč. 60, - (2005), s. 345-358 ISSN 0166-5316 R&D Projects: GA ČR GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords : middleware benchmarking * regression benchmarking * regression testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.756, year: 2005

  13. Reactor calculation benchmark PCA blind test results

    International Nuclear Information System (INIS)

    Kam, F.B.K.; Stallmann, F.W.

    1980-01-01

    Further improvement in calculational procedures or a combination of calculations and measurements is necessary to attain 10 to 15% (1 sigma) accuracy for neutron exposure parameters (flux greater than 0.1 MeV, flux greater than 1.0 MeV, and dpa). The calculational modeling of power reactors should be benchmarked in an actual LWR plant to provide final uncertainty estimates for end-of-life predictions and limitations for plant operations. 26 references, 14 figures, 6 tables

  14. Reactor calculation benchmark PCA blind test results

    Energy Technology Data Exchange (ETDEWEB)

    Kam, F.B.K.; Stallmann, F.W.

    1980-01-01

    Further improvement in calculational procedures or a combination of calculations and measurements is necessary to attain 10 to 15% (1 sigma) accuracy for neutron exposure parameters (flux greater than 0.1 MeV, flux greater than 1.0 MeV, and dpa). The calculational modeling of power reactors should be benchmarked in an actual LWR plant to provide final uncertainty estimates for end-of-life predictions and limitations for plant operations. 26 references, 14 figures, 6 tables.

  15. Evaluation of PWR and BWR pin cell benchmark results

    International Nuclear Information System (INIS)

    Pijlgroms, B.J.; Gruppelaar, H.; Janssen, A.J.; Hoogenboom, J.E.; Leege, P.F.A. de; Voet, J. van der; Verhagen, F.C.M.

    1991-12-01

Benchmark results of the Dutch PINK working group on the PWR and BWR pin cell calculational benchmark as defined by EPRI are presented and evaluated. The observed discrepancies are problem dependent: part of the results is satisfactory, while other results require further analysis. A brief overview is given of the different code packages used in this analysis. (author). 14 refs., 9 figs., 30 tabs

  16. Evaluation of PWR and BWR pin cell benchmark results

    Energy Technology Data Exchange (ETDEWEB)

    Pijlgroms, B.J.; Gruppelaar, H.; Janssen, A.J. (Netherlands Energy Research Foundation (ECN), Petten (Netherlands)); Hoogenboom, J.E.; Leege, P.F.A. de (Interuniversitair Reactor Inst., Delft (Netherlands)); Voet, J. van der (Gemeenschappelijke Kernenergiecentrale Nederland NV, Dodewaard (Netherlands)); Verhagen, F.C.M. (Keuring van Electrotechnische Materialen NV, Arnhem (Netherlands))

    1991-12-01

Benchmark results of the Dutch PINK working group on the PWR and BWR pin cell calculational benchmark as defined by EPRI are presented and evaluated. The observed discrepancies are problem dependent: part of the results is satisfactory, while other results require further analysis. A brief overview is given of the different code packages used in this analysis. (author). 14 refs., 9 figs., 30 tabs.

  17. KAERI results for BN600 full MOX benchmark (Phase 4)

    International Nuclear Information System (INIS)

    Lee, Kibog Lee

    2003-01-01

The purpose of this document is to report the results of KAERI's calculations for Phase 4 of the BN-600 full MOX fueled core benchmark analyses, according to the RCM report of the IAEA CRP Action on "Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects". The BN-600 full MOX core model is based on the specification in the document "Full MOX Model (Phase4.doc)". This document addresses the calculational methods employed in the benchmark analyses and the benchmark results obtained by KAERI

  18. Evaluation of PWR and BWR pin cell benchmark results

    Energy Technology Data Exchange (ETDEWEB)

Pijlgroms, B.J.; Gruppelaar, H.; Janssen, A.J. (Netherlands Energy Research Foundation (ECN), Petten (Netherlands)); Hoogenboom, J.E.; Leege, P.F.A. de (Interuniversitair Reactor Inst., Delft (Netherlands)); Voet, J. van der (Gemeenschappelijke Kernenergiecentrale Nederland NV, Dodewaard (Netherlands)); Verhagen, F.C.M. (Keuring van Electrotechnische Materialen NV, Arnhem (Netherlands))

    1991-12-01

Benchmark results of the Dutch PINK working group on the PWR and BWR pin cell calculational benchmark as defined by EPRI are presented and evaluated. The observed discrepancies are problem dependent: part of the results is satisfactory, while other results require further analysis. A brief overview is given of the different code packages used in this analysis. (author). 14 refs.; 9 figs.; 30 tabs.

  19. Actinides transmutation - a comparison of results for PWR benchmark

    International Nuclear Information System (INIS)

    Claro, Luiz H.

    2009-01-01

The physical aspects involved in the Partitioning and Transmutation (P and T) of minor actinides (MA) and fission products (FP) generated by PWR reactors are of great interest in the nuclear industry. Moreover, the reduction in the storage of radioactive wastes is related to the acceptability of nuclear electric power. Among the several concepts for partitioning and transmutation suggested in the literature, one involves PWR reactors burning fuel containing plutonium and minor actinides reprocessed from UO2 used in previous stages. In this work the results are presented of a benchmark calculation in P and T carried out with the WIMSD5B program, using its new cross-section library generated from ENDF/B-VII, together with a comparison against the results published in the literature from other calculations. For the comparison, the benchmark transmutation concept based on a typical PWR cell was used, and the analyzed results were k∞ and the atomic densities of the isotopes Np-239, Pu-241, Pu-242 and Am-242m as functions of burnup, considering a discharge burnup of 50 GWd/tHM. (author)

  20. ANL results for LMFR reactivity coefficients benchmark

    International Nuclear Information System (INIS)

    Hill, Robert

    2000-01-01

The fast reactor analysis methods developed at ANL were extensively tested in ZPR and ZPPR experiments and applied to the EBR-II and FFTF test reactors. The basic nuclear data libraries used were ENDF/B-V.2, with the ETOE-2 data processing code, and ENDF/B-VI. Multigroup constants were generated by the MC2-2 code. Neutron flux calculations were done with the DIF3D code, applying neutron diffusion theory and the finite difference method. The results obtained include basic parameters; fuel and structure regional Doppler coefficients; fuel geometry expansion coefficients; and kinetics parameters. In general, agreement between the Phase 1 and Phase 2 results was excellent

  1. Criticality Benchmark Results Using Various MCNP Data Libraries

    International Nuclear Information System (INIS)

    Frankle, Stephanie C.

    1999-01-01

A suite of 86 criticality benchmarks has recently been implemented in MCNP as part of the nuclear data validation effort. These benchmarks have been run using two sets of MCNP continuous-energy neutron data: ENDF/B-VI based data through Release 2 (ENDF60) and ENDF/B-V based data. New evaluations were completed for ENDF/B-VI for a number of the important nuclides such as the isotopes of H, Be, C, N, O, Fe, Ni, 235,238U, 237Np, and 239,240Pu. When examining the results of these calculations for the five major categories of 233U, intermediate-enriched 235U (IEU), highly enriched 235U (HEU), 239Pu, and mixed-metal assemblies, we find the following: (1) The new evaluations for 9Be, 12C, and 14N show no net effect on keff; (2) There is a consistent decrease in keff for all of the solution assemblies for ENDF/B-VI due to 1H and 16O, moving keff further from the benchmark value for uranium solutions and closer to the benchmark value for plutonium solutions; (3) keff decreased for the ENDF/B-VI Fe isotopic data, moving the calculated keff further from the benchmark value; (4) keff decreased for the ENDF/B-VI Ni isotopic data, moving the calculated keff closer to the benchmark value; (5) The W data remained unchanged and tended to calculate slightly higher than the benchmark values; (6) For metal uranium systems, the ENDF/B-VI data for 235U tend to decrease keff while the 238U data tend to increase keff; the net result depends on the energy spectrum and material specifications for the particular assembly; (7) For more intermediate-energy systems, the changes in the 235,238U evaluations tend to increase keff. For the mixed graphite and normal uranium-reflected assembly, a large increase in keff due to changes in the 238U evaluation moved the calculated keff much closer to the benchmark value; (8) There is little change in keff for the uranium solutions due to the new 235,238U evaluations; and (9) There is little change in keff
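As a hedged illustration of how such suite-wide results are typically summarized (the keff values below are invented placeholders, not numbers from this benchmark suite), one can tabulate the calculated-to-experimental ratio C/E and the bias in pcm for each case:

```python
# Hypothetical calculated (C) and benchmark (E) keff pairs -- illustrative
# placeholders only, not values from the actual 86-case suite.
suite = {
    "HEU-solution-case": (0.99850, 1.00000),
    "Pu-solution-case":  (1.00120, 1.00000),
    "U233-metal-case":   (0.99970, 1.00000),
}

def bias_pcm(calc, exp):
    """Bias of a calculated keff relative to the benchmark value, in pcm."""
    return (calc - exp) / exp * 1.0e5

for name, (c, e) in suite.items():
    print(f"{name}: C/E = {c / e:.5f}, bias = {bias_pcm(c, e):+.0f} pcm")
```

Statements such as "moving keff closer to the benchmark value" correspond to the absolute value of this bias shrinking between the two library runs.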

  2. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    Science.gov (United States)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  3. Analyzing the BBOB results by means of benchmarking concepts.

    Science.gov (United States)

    Mersmann, O; Preuss, M; Trautmann, H; Bischl, B; Weihs, C

    2015-01-01

We present methods to answer two basic questions that arise when benchmarking optimization algorithms. The first is: which algorithm is the "best" one? The second is: which algorithm should I use for my real-world problem? Both are connected and neither is easy to answer. We present a theoretical framework for designing and analyzing the raw data of such benchmark experiments. This represents a first step in answering the aforementioned questions. The 2009 and 2010 BBOB benchmark results are analyzed by means of this framework and we derive insight regarding the answers to the two questions. Furthermore, we discuss how to properly aggregate rankings from algorithm evaluations on individual problems into a consensus, its theoretical background and which common pitfalls should be avoided. Finally, we address the grouping of test problems into sets with similar optimizer rankings and investigate whether these are reflected by already proposed test problem characteristics, finding that this is not always the case.
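The paper's consensus machinery is more elaborate, but the basic idea of aggregating per-problem rankings into a single consensus ordering can be sketched with a simple Borda count (the algorithm names and rankings below are placeholders, not BBOB results):

```python
from collections import defaultdict

def consensus_ranking(rankings):
    """Aggregate per-problem rankings into a Borda-count consensus.

    rankings: list of lists, each an ordering of algorithm names, best first.
    Returns the algorithms sorted by total Borda score, best first."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for pos, algo in enumerate(ranking):
            scores[algo] += n - pos  # best place earns n points, worst earns 1
    return sorted(scores, key=scores.get, reverse=True)

# Placeholder per-problem rankings of three optimizers on three test problems.
per_problem = [
    ["CMA-ES", "PSO", "DE"],
    ["CMA-ES", "DE", "PSO"],
    ["DE", "CMA-ES", "PSO"],
]
print(consensus_ranking(per_problem))  # ['CMA-ES', 'DE', 'PSO']
```

One pitfall the paper warns about is exactly what this naive count ignores: ties and the magnitude of performance differences are lost when only ordinal positions are summed.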

  4. Results of the benchmark for blade structural models, part A

    DEFF Research Database (Denmark)

    Lekou, D.J.; Chortis, D.; Belen Fariñas, A.

    2013-01-01

A benchmark on structural design methods for blades was performed within the InnWind.Eu project under WP2 “Lightweight Rotor”, Task 2.2 “Lightweight structural design”. The present document describes the results of the comparison simulation runs that were performed by the partners involved within...... Task 2.2 of the InnWind.Eu project. The benchmark is based on the reference wind turbine and the reference blade provided by DTU [1]. "Structural Concept developers/modelers" of WP2 were provided with the necessary input for a comparison numerical simulation run, upon definition of the reference blade...

  5. Performance of Multi-chaotic PSO on a shifted benchmark functions set

    Energy Technology Data Exchange (ETDEWEB)

Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan [Tomas Bata University in Zlín, Faculty of Applied Informatics, Department of Informatics and Artificial Intelligence, nám. T.G. Masaryka 5555, 760 01 Zlín (Czech Republic)

    2015-03-10

In this paper the performance of the Multi-chaotic PSO algorithm is investigated using two shifted benchmark functions. The purpose of shifted benchmark functions is to simulate time-variant real-world problems. The results of the chaotic PSO are compared with the canonical version of the algorithm. It is concluded that using the multi-chaotic approach can lead to better results in the optimization of shifted functions.
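The abstract does not specify which benchmark functions were shifted; as a generic sketch, a shifted benchmark simply relocates the optimum away from the origin, which penalizes optimizers biased toward the centre of the search space (the sphere function and shift vector below are illustrative assumptions):

```python
import numpy as np

def shifted_sphere(x, shift):
    """Sphere benchmark with its optimum relocated to `shift` (not the origin)."""
    x = np.asarray(x, dtype=float)
    return float(np.sum((x - shift) ** 2))

# Illustrative shift vector; redrawing it between runs is one simple way to
# mimic the time-variant problems the abstract mentions.
shift = np.array([1.5, -2.0, 0.5])
print(shifted_sphere(shift, shift))            # 0.0 -- the shifted optimum
print(shifted_sphere([0.0, 0.0, 0.0], shift))  # 6.5 -- the origin is no longer optimal
```

A PSO variant would then be scored on how quickly its best particle drives this value toward zero for each new shift.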

  6. Performance of Multi-chaotic PSO on a shifted benchmark functions set

    International Nuclear Information System (INIS)

    Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan

    2015-01-01

In this paper the performance of the Multi-chaotic PSO algorithm is investigated using two shifted benchmark functions. The purpose of shifted benchmark functions is to simulate time-variant real-world problems. The results of the chaotic PSO are compared with the canonical version of the algorithm. It is concluded that using the multi-chaotic approach can lead to better results in the optimization of shifted functions

  7. [Benchmarking and other functions of ROM: back to basics].

    Science.gov (United States)

    Barendregt, M

    2015-01-01

Since 2011, outcome data in Dutch mental health care have been collected on a national scale. This has led to confusion about the position of benchmarking in the system known as routine outcome monitoring (rom). To provide insight into the various objectives and uses of aggregated outcome data, a qualitative review was performed and the findings were analysed. Benchmarking is a strategy for finding best practices and for improving efficacy, and it belongs to the domain of quality management. Benchmarking involves comparing outcome data by means of instrumentation and is relatively tolerant with regard to the validity of the data. Although benchmarking is a function of rom, it must be differentiated from the other functions of rom. Clinical management, public accountability, research, payment for performance and information for patients are all functions of rom which require different ways of data feedback and which make different demands on the validity of the underlying data. Benchmarking is often wrongly regarded as simply a synonym for 'comparing institutions'. It is, however, a method which includes many more factors; it can be used to improve quality, takes a more flexible approach to the validity of outcome data, and is less concerned than other rom functions with funding and the amount of information given to patients. Benchmarking can make good use of currently available outcome data.

  8. Compilation of benchmark results for fusion related Nuclear Data

    International Nuclear Information System (INIS)

    Maekawa, Fujio; Wada, Masayuki; Oyama, Yukio; Ichihara, Chihiro; Makita, Yo; Takahashi, Akito

    1998-11-01

This report compiles results of benchmark tests for validation of evaluated nuclear data to be used in nuclear designs of fusion reactors. Parts of the results were obtained under activities of the Fusion Neutronics Integral Test Working Group organized by members of both the Japan Nuclear Data Committee and the Reactor Physics Committee. The following three benchmark experiments were used for the tests: (i) the leakage neutron spectrum measurements from slab assemblies at the D-T neutron source at FNS/JAERI, (ii) the in-situ neutron and gamma-ray measurement experiments (so-called clean benchmark experiments), also at FNS, and (iii) the pulsed sphere experiments for leakage neutron and gamma-ray spectra at the D-T neutron source facility of Osaka University, OKTAVIAN. The evaluated nuclear data tested were JENDL-3.2, the JENDL Fusion File, FENDL/E-1.0 and newly selected data for FENDL/E-2.0. Comparisons of benchmark calculations with the experiments for twenty-one elements, i.e., Li, Be, C, N, O, F, Al, Si, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zr, Nb, Mo, W and Pb, are summarized. (author). 65 refs

  9. Benchmark density functional theory calculations for nanoscale conductance

    DEFF Research Database (Denmark)

    Strange, Mikkel; Bækgaard, Iben Sig Buur; Thygesen, Kristian Sommer

    2008-01-01

    We present a set of benchmark calculations for the Kohn-Sham elastic transmission function of five representative single-molecule junctions. The transmission functions are calculated using two different density functional theory methods, namely an ultrasoft pseudopotential plane-wave code...

  10. Benchmarking

    OpenAIRE

    Beretta Sergio; Dossi Andrea; Grove Hugh

    2000-01-01

Due to their particular nature, benchmarking methodologies tend to exceed the boundaries of management techniques and to enter the territory of managerial culture. A culture that is also destined to break into the accounting area, not only strongly supporting the possibility of fixing targets and of measuring and comparing performance (an aspect that is already innovative and worthy of attention), but also questioning one of the principles (or taboos) of the accounting or...

  11. Results from the IAEA benchmark of spallation models

    International Nuclear Information System (INIS)

    Leray, S.; David, J.C.; Khandaker, M.; Mank, G.; Mengoni, A.; Otsuka, N.; Filges, D.; Gallmeier, F.; Konobeyev, A.; Michel, R.

    2011-01-01

Spallation reactions play an important role in a wide domain of applications. In the simulation codes used in this field, the nuclear interaction cross-sections and characteristics are computed by spallation models. The International Atomic Energy Agency (IAEA) has recently organised a benchmark of the spallation models used, or that could be used in the future, in high-energy transport codes. The objectives were, first, to assess the prediction capabilities of the different spallation models for the different mass and energy regions and the different exit channels and, second, to understand the reasons for the success or deficiency of the models. Results of the benchmark concerning both the analysis of the prediction capabilities of the models and the first conclusions on the physics of spallation models are presented. (authors)

  12. Benchmarking NNWSI flow and transport codes: COVE 1 results

    International Nuclear Information System (INIS)

    Hayden, N.K.

    1985-06-01

    The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs

  13. Parton distribution functions and benchmark cross sections at NNLO

    Energy Technology Data Exchange (ETDEWEB)

    Alekhin, S. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institute for High Energy Physics (IHEP), Protvino (Russian Federation); Bluemlein, J.; Moch, S. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2012-02-15

    We present a determination of parton distribution functions (ABM11) and the strong coupling constant {alpha}{sub s} at next-to-leading order and next-to-next-to-leading order (NNLO) in QCD based on world data for deep-inelastic scattering and fixed-target data for the Drell-Yan process. The analysis is performed in the fixed-flavor number scheme for n{sub f}=3,4,5 and uses the MS-scheme for {alpha}{sub s} and the heavy-quark masses. At NNLO we obtain the value {alpha}{sub s}(MZ)=0.1134{+-}0.0011. The fit results are used to compute benchmark cross sections at hadron colliders to NNLO accuracy and to compare to data from the LHC. (orig.)

  14. Systems reliability Benchmark exercise part 1-Description and results

    International Nuclear Information System (INIS)

    Amendola, A.

    1986-01-01

    The report describes the aims, rules and results of the Systems Reliability Benchmark Exercise, which was performed in order to assess methods and procedures for the reliability analysis of complex systems and involved a large number of European organizations active in NPP safety evaluation. The exercise included both qualitative and quantitative methods and was structured in such a way that the effects of uncertainties in modelling and in data on the overall spread could be separated. Part I describes the way in which the RBE was performed, its main results and conclusions

  15. JNC results of BN-600 benchmark calculation (phase 3)

    International Nuclear Information System (INIS)

    Ishikawa, M.

    2002-01-01

    The present work presents JNC's results for Phase 3 of the BN-600 core benchmark problem, which addresses burnup and heterogeneity effects. The analytical method applied consisted of: the JENDL-3.2 nuclear data library; group constants (70 groups, ABBN-type self-shielding transport factors); a heterogeneous cell model for fuel and control rods; a basic diffusion calculation (CITATION code); and transport theory and mesh-size corrections (NSHEX code, based on the SN transport nodal method developed by JNC). Burnup and heterogeneity calculation results are presented, obtained by applying both the diffusion and the transport approach for the beginning and end of cycle

  16. Benchmarking Density Functionals for Chemical Bonds of Gold

    DEFF Research Database (Denmark)

    Kepp, Kasper Planeta

    2017-01-01

    Gold plays a major role in nanochemistry, catalysis, and electrochemistry. Accordingly, hundreds of studies apply density functionals to study chemical bonding with gold, yet there is no systematic attempt to assess the accuracy of these methods applied to gold. This paper reports a benchmark aga...

  17. Experiment vs simulation RT WFNDEC 2014 benchmark: CIVA results

    International Nuclear Information System (INIS)

    Tisseur, D.; Costin, M.; Rattoni, B.; Vienne, C.; Vabre, A.; Cattiaux, G.; Sollier, T.

    2015-01-01

    The French Atomic Energy Commission and Alternative Energies (CEA) has developed for years the CIVA software dedicated to simulation of NDE techniques such as Radiographic Testing (RT). RT modelling is achieved in CIVA using combination of a determinist approach based on ray tracing for transmission beam simulation and a Monte Carlo model for the scattered beam computation. Furthermore, CIVA includes various detectors models, in particular common x-ray films and a photostimulable phosphor plates. This communication presents the results obtained with the configurations proposed in the World Federation of NDEC 2014 RT modelling benchmark with the RT models implemented in the CIVA software

  19. JNC results of BN-600 benchmark calculation (phase 4)

    International Nuclear Information System (INIS)

    Ishikawa, Makoto

    2003-01-01

    The present work presents the results of JNC, Japan, for Phase 4 of the BN-600 core benchmark problem (Hex-Z fully MOX-fuelled core model) organized by the IAEA. The benchmark specification is based on the RCM report of the IAEA CRP on 'Updated Codes and Methods to Reduce the Calculational Uncertainties of LMFR Reactivity Effects, Action 3.12' (Calculations for BN-600 fully fuelled MOX core for subsequent transient analyses). The JENDL-3.2 nuclear data library was used for calculating 70-group ABBN-type group constants. Two cell models were applied for the fuel assembly and control rod calculations: a homogeneous and a heterogeneous (cylindrical supercell) model. The basic diffusion calculation used a three-dimensional Hex-Z model with 18 groups (CITATION code). Transport calculations were 18-group, three-dimensional (NSHEX code), based on the Sn-transport nodal method developed at JNC. The thermal power generated per fission was based on Sher's data corrected on the basis of the ENDF/B-IV data library. Calculation results are presented in tables for intercomparison

  20. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    textabstractBenchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  1. Benchmark Calculations on Halden IFA-650 LOCA Test Results

    International Nuclear Information System (INIS)

    Ek, Mirkka; Kekkonen, Laura; Kelppe, Seppo; Stengaard, J.O.; Josek, Radomir; Wiesenack, Wolfgang; Aounallah, Yacine; Wallin, Hannu; Grandjean, Claude; Herb, Joachim; Lerchl, Georg; Trambauer, Klaus; Sonnenburg, Heinz-Guenther; Nakajima, Tetsuo; Spykman, Gerold; Struzik, Christine

    2010-01-01

    through several blow-downs and heat-ups and reached peak clad temperatures of more than 1000 °C. In the second run, where the rod was sufficiently pre-pressurised, ballooning and burst were obtained. The first benchmark consisted of three rounds of code calculations related to IFA-650.3: 1. Pre-test calculations: Participants were provided with information regarding the setup of the Halden LOCA test facility, data from the commissioning runs, and information about the test pin and power conditions to be applied in the execution of the test. 2. Post-test calculations I: In addition to the information from the first round, participants were provided with the in-pile results from the test. 3. Post-test calculations II, unified thermal-hydraulic boundary conditions: Calculations were repeated using a cladding temperature distribution calculated with ATHLET-CD at GRS. Since the test, when executed, did not produce the expected ballooning and fuel relocation, it was decided to continue with a second benchmark using tests 650.4 and 650.5, this time as post-test calculations. The fourth test of the series, IFA-650.4, conducted in April 2006, attracted particular attention in the international nuclear community. The fuel used in the experiment had a high burnup, 92 MWd/kgU, and a low pre-test hydrogen content of about 50 ppm. The cladding burst at about 790 °C caused a marked temperature increase at the lower end of the segment and a decrease at the upper end, indicating that fuel relocation had occurred. Subsequent gamma scanning showed that approximately 19 cm (40%) of the fuel stack was missing from the upper part of the rod. PIE at the IFE-Kjeller hot cells corroborated this evidence of substantial fuel relocation. This report presents the results of the codes which participated in the various benchmarks. The two main parts, on benchmarks I and II, each start with a brief description of the most important experimental data. Then, the code calculation results follow

  2. Boiling water reactor turbine trip (TT) benchmark. Volume II: Summary Results of Exercise 1

    International Nuclear Information System (INIS)

    Akdeniz, Bedirhan; Ivanov, Kostadin N.; Olson, Andy M.

    2005-06-01

    The OECD Nuclear Energy Agency (NEA) completed, under US Nuclear Regulatory Commission (NRC) sponsorship, a PWR main steam line break (MSLB) benchmark against coupled system three-dimensional (3-D) neutron kinetics and thermal-hydraulic codes. Another OECD/NRC coupled-code benchmark was recently completed for a BWR turbine trip (TT) transient and is the object of the present report. Turbine trip transients in a BWR are pressurisation events in which the coupling between core space-dependent neutronic phenomena and system dynamics plays an important role. The data made available from actual experiments carried out at the Peach Bottom 2 plant make the present benchmark particularly valuable. While defining and coordinating the BWR TT benchmark, a systematic approach and level methodology not only allowed for a consistent and comprehensive validation process, but also contributed to the study of key parameters of pressurisation transients. The benchmark consists of three separate exercises, two initial states and five transient scenarios. The BWR TT benchmark will be published in four volumes as NEA reports. CD-ROMs will also be prepared, including the four reports and the transient boundary conditions, decay heat values as a function of time, cross-section libraries and supplementary tables and graphs not published in the paper version. BWR TT Benchmark - Volume I: Final Specifications was issued in 2001 [NEA/NSC/DOC(2001)]. The benchmark team [Pennsylvania State University (PSU) in co-operation with Exelon Nuclear and the NEA] has been responsible for coordinating benchmark activities, answering participant questions and assisting participants in developing their models, as well as analysing submitted solutions and providing reports summarising the results for each phase. The benchmark team has also been involved in the technical aspects of the benchmark, including sensitivity studies for the different exercises. Volume II summarises the results for Exercise 1 of the

  3. Benchmarking Evaluation Results for Prototype Extravehicular Activity Gloves

    Science.gov (United States)

    Aitchison, Lindsay; McFarland, Shane

    2012-01-01

    The Space Suit Assembly (SSA) Development Team at NASA Johnson Space Center has invested heavily in the advancement of rear-entry planetary exploration suit design but largely deferred development of extravehicular activity (EVA) glove designs, and accepted the risk of using the current flight gloves, Phase VI, for unique mission scenarios outside the Space Shuttle and International Space Station (ISS) Program realm of experience. However, as design reference missions mature, the risks of using heritage hardware have highlighted the need for developing robust new glove technologies. To address the technology gap, the NASA Game-Changing Technology group provided start-up funding for the High Performance EVA Glove (HPEG) Project in the spring of 2012. The overarching goal of the HPEG Project is to develop a robust glove design that increases human performance during EVA and creates a pathway for future implementation of emergent technologies, with specific aims of increasing pressurized mobility to 60% of barehanded capability, increasing durability by 100%, and decreasing the potential of gloves to cause injury during use. The HPEG Project focused initial efforts on identifying potential new technologies and benchmarking the performance of current state-of-the-art gloves to identify trends in design and fit, and to establish standards and metrics against which emerging technologies can be assessed at both the component and assembly levels. The first of the benchmarking tests evaluated the quantitative mobility performance and subjective fit of four prototype gloves developed by Flagsuit LLC, Final Frontier Designs, LLC Dover, and David Clark Company as compared to the Phase VI. All of the companies were asked to design and fabricate gloves to the same set of NASA-provided hand measurements (which corresponded to a single size of Phase VI glove) and to focus their efforts on improving mobility in the metacarpal phalangeal and carpometacarpal joints. Four test

  4. Benchmarking Outpatient Rehabilitation Clinics Using Functional Status Outcomes.

    Science.gov (United States)

    Gozalo, Pedro L; Resnik, Linda J; Silver, Benjamin

    2016-04-01

    To utilize functional status (FS) outcomes to benchmark outpatient therapy clinics. Outpatient therapy data from clinics using Focus on Therapeutic Outcomes (FOTO) assessments. Retrospective analysis of 538 clinics, involving 2,040 therapists and 90,392 patients admitted July 2006-June 2008. FS at discharge was modeled using hierarchical regression methods with patients nested within therapists within clinics. Separate models were estimated for all patients, for those with lumbar, and for those with shoulder impairments. All models risk-adjusted for intake FS, age, gender, onset, surgery count, functional comorbidity index, fear-avoidance level, and payer type. Inverse probability weighting adjusted for censoring. Functional status was captured using computer adaptive testing at intake and at discharge. Clinic and therapist effects explained 11.6 percent of variation in FS. Clinics ranked in the lowest quartile had significantly different outcomes than those in the highest quartile (p < .01). Clinics ranked similarly in lumbar and shoulder impairments (correlation = 0.54), but some clinics ranked in the highest quintile for one condition and in the lowest for the other. Benchmarking models based on validated FS measures clearly separated high-quality from low-quality clinics, and they could be used to inform value-based-payment policies. © Health Research and Educational Trust.
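
    The ranking approach above, predicting each patient's discharge FS and scoring clinics on their mean residual, can be sketched in miniature. The snippet below is an illustrative pure-Python reduction (a pooled OLS fit on intake FS only, with no hierarchical shrinkage, censoring weights, or the other risk adjusters named in the abstract); `rank_clinics` and the record layout are hypothetical names, not part of the FOTO system.

```python
from collections import defaultdict
from statistics import mean

def rank_clinics(records):
    """Rank clinics by risk-adjusted functional-status (FS) gain.

    records: iterable of (clinic_id, intake_fs, discharge_fs) tuples.
    A pooled least-squares line predicts discharge FS from intake FS;
    a clinic's effect is the mean residual of its patients.
    """
    records = list(records)
    xs = [r[1] for r in records]
    ys = [r[2] for r in records]
    mx, my = mean(xs), mean(ys)
    # slope b and intercept a of the pooled OLS line y = a + b*x
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    residuals = defaultdict(list)
    for clinic, intake, discharge in records:
        residuals[clinic].append(discharge - (a + b * intake))
    effects = {c: mean(r) for c, r in residuals.items()}
    # highest mean residual = best risk-adjusted outcomes
    return sorted(effects, key=effects.get, reverse=True)
```

    A real benchmarking model would add the abstract's other covariates and nest patients within therapists within clinics, but the residual-based ranking idea is the same.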

  5. Results of the event sequence reliability benchmark exercise

    International Nuclear Information System (INIS)

    Silvestri, E.

    1990-01-01

    The Event Sequence Reliability Benchmark Exercise is the fourth of a series of benchmark exercises on reliability and risk assessment, with specific reference to nuclear power plant applications, and is the logical continuation of the previous benchmark exercises on System Analysis, Common Cause Failure and Human Factors. The reference plant is the Nuclear Power Plant at Grohnde, Federal Republic of Germany, a 1300 MW PWR plant of KWU design. The specific objective of the Exercise is to model, quantify and analyze the event sequences initiated by the occurrence of a loss of offsite power that involve the steam generator feed. The general aim is to develop a segment of a risk assessment, which ought to include all the specific aspects and models of quantification, such as common cause failure, Human Factors and System Analysis, developed in the previous reliability benchmark exercises, with the addition of the specific topics of dependences between homologous components belonging to different systems featuring in a given event sequence and of uncertainty quantification, culminating in an overall assessment of the state of the art in risk assessment and the relative influence of quantification problems in a general risk assessment framework. The Exercise has been carried out in two phases, both requiring modelling and quantification, with the second phase adopting more restrictive rules and fixing certain common data, as emerged necessary from the first phase. Fourteen teams have participated in the Exercise, mostly from EEC countries, with one from Sweden and one from the USA. (author)

  6. featsel: A framework for benchmarking of feature selection algorithms and cost functions

    OpenAIRE

    Marcelo S. Reis; Gustavo Estrela; Carlos Eduardo Ferreira; Junior Barrera

    2017-01-01

    In this paper, we introduce featsel, a framework for benchmarking of feature selection algorithms and cost functions. This framework allows the user to deal with the search space as a Boolean lattice and has its core coded in C++ for computational efficiency purposes. Moreover, featsel includes Perl scripts to add new algorithms and/or cost functions, generate random instances, plot graphs and organize results into tables. Besides, this framework already comes with dozens of algorithms and co...
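
    As a sketch of what "treating the search space as a Boolean lattice" means, the following pure-Python exhaustive walk evaluates a user-supplied cost function on every subset of feature indices. It is illustrative only: featsel itself is implemented in C++ and ships algorithms far smarter than this brute-force scan, and `exhaustive_search` is a hypothetical name, not part of featsel's interface.

```python
from itertools import combinations

def exhaustive_search(n_features, cost):
    """Minimise `cost` over the Boolean lattice of all 2**n_features
    subsets of feature indices; returns (best_subset, best_cost).

    cost: callable mapping a frozenset of indices to a float.
    """
    best_subset, best_cost = frozenset(), cost(frozenset())
    for k in range(1, n_features + 1):
        for subset in combinations(range(n_features), k):
            c = cost(frozenset(subset))
            if c < best_cost:
                best_subset, best_cost = frozenset(subset), c
    return best_subset, best_cost
```

    The exponential cost of the full lattice is exactly why frameworks like featsel also provide heuristic search algorithms alongside exhaustive ones.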

  7. Pericles and Attila results for the C5G7 MOX benchmark problems

    International Nuclear Information System (INIS)

    Wareing, T.A.; McGhee, J.M.

    2002-01-01

    Recently the Nuclear Energy Agency has published a new benchmark entitled 'C5G7 MOX Benchmark'. This benchmark tests the ability of current transport codes to treat reactor core problems without spatial homogenization. The benchmark includes both a two- and a three-dimensional problem. We have calculated results for these benchmark problems with our Pericles and Attila codes. Pericles is a one-, two-, and three-dimensional unstructured-grid discrete-ordinates code and was used for the two-dimensional benchmark problem. Attila is a three-dimensional unstructured tetrahedral-mesh discrete-ordinates code and was used for the three-dimensional problem. Both codes use discontinuous finite element spatial differencing and diffusion synthetic acceleration (DSA) for accelerating the inner iterations.

  8. Results of the Monte Carlo 'simple case' benchmark exercise

    International Nuclear Information System (INIS)

    2003-11-01

    A new 'simple case' benchmark intercomparison exercise was launched, intended to study the importance of the fundamental nuclear data constants, physics treatments and geometry model approximations employed by Monte Carlo codes in common use. The exercise was also directed at determining the level of agreement that can be expected between measured and calculated quantities using current state-of-the-art modelling codes and techniques. To this end, measurements and Monte Carlo calculations of the total (or gross) neutron count rates have been performed using a simple moderated 3He cylindrical proportional counter array or 'slab monitor' counting geometry, a deliberately simple geometry having been selected for this exercise

  9. Does Your Terrestrial Model Capture Key Arctic-Boreal Relationships?: Functional Benchmarks in the ABoVE Model Benchmarking System

    Science.gov (United States)

    Stofferahn, E.; Fisher, J. B.; Hayes, D. J.; Schwalm, C. R.; Huntzinger, D. N.; Hantson, W.

    2017-12-01

    The Arctic-Boreal Region (ABR) is a major source of uncertainties for terrestrial biosphere model (TBM) simulations. These uncertainties are precipitated by a lack of observational data from the region, affecting the parameterizations of cold environment processes in the models. Addressing these uncertainties requires a coordinated effort of data collection and integration of the following key indicators of the ABR ecosystem: disturbance, vegetation / ecosystem structure and function, carbon pools and biogeochemistry, permafrost, and hydrology. We are continuing to develop the model-data integration framework for NASA's Arctic Boreal Vulnerability Experiment (ABoVE), wherein data collection is driven by matching observations and model outputs to the ABoVE indicators via the ABoVE Grid and Projection. The data are used as reference datasets for a benchmarking system which evaluates TBM performance with respect to ABR processes. The benchmarking system utilizes two types of performance metrics to identify model strengths and weaknesses: standard metrics, based on the International Land Model Benchmarking (ILaMB) system, which relate a single observed variable to a single model output variable, and functional benchmarks, wherein the relationship of one variable to one or more variables (e.g. the dependence of vegetation structure on snow cover, the dependence of active layer thickness (ALT) on air temperature and snow cover) is ascertained in both observations and model outputs. This in turn provides guidance to model development teams for reducing uncertainties in TBM simulations of the ABR.
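
    A functional benchmark of the kind described, which compares a relationship between variables rather than a single variable, can be sketched as a slope comparison: fit the same regression (e.g. active layer thickness on air temperature) to observations and to model output, then score the mismatch. The function names and the relative-error scoring below are illustrative assumptions, not the ABoVE system's actual metric definitions.

```python
def slope(xs, ys):
    """Least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def functional_benchmark(obs_x, obs_y, mod_x, mod_y):
    """Relative error of the modelled y-vs-x slope against the
    observed slope: 0 means the model reproduces the observed
    functional relationship exactly."""
    s_obs = slope(obs_x, obs_y)
    s_mod = slope(mod_x, mod_y)
    return abs(s_mod - s_obs) / abs(s_obs)
```

    A model can score well on a standard single-variable metric (e.g. mean ALT) while failing a functional benchmark like this one, which is why the two metric types are complementary.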

  10. Development of computer code SIMPSEX for simulation of FBR fuel reprocessing flowsheets: II. additional benchmarking results

    International Nuclear Information System (INIS)

    Shekhar Kumar; Koganti, S.B.

    2003-07-01

    Benchmarking and application of the computer code SIMPSEX for high-plutonium FBR flowsheets was reported in an earlier report (IGC-234). Improvements and recompilation of the code (Version 4.01, March 2003) required re-validation against the existing benchmarks as well as additional benchmark flowsheets. Improvements in the high-Pu region (aqueous Pu > 30 g/L) yielded better results in the 75% Pu flowsheet benchmark. Below 30 g/L aqueous Pu concentration, results were identical to those from the earlier version (SIMPSEX Version 3, compiled in 1999). In addition, 13 published flowsheets were taken as additional benchmarks. Eleven of these flowsheets cover a wide range of feed concentrations, and a few of them are β-γ active runs with FBR fuels having a wide distribution of burnup and Pu ratios. A published total partitioning flowsheet using externally generated U(IV) was also simulated using SIMPSEX. SIMPSEX predictions were compared with the listed predictions from the conventional SEPHIS, PUMA, PUNE and PUBG codes, and were found to be comparable to, and in some cases better than, the results from those codes. In addition, recently reported UREX demo results, along with AMUSE simulations, are compared with SIMPSEX predictions. The results of benchmarking SIMPSEX against these 14 benchmark flowsheets are discussed in this report. (author)

  11. Non-grey benchmark results for two temperature non-equilibrium radiative transfer

    International Nuclear Information System (INIS)

    Su, B.; Olson, G.L.

    1999-01-01

    Benchmark solutions to time-dependent radiative transfer problems involving non-equilibrium coupling to the material temperature field are crucial for validating time-dependent radiation transport codes. Previous efforts on generating analytical solutions to non-equilibrium radiative transfer problems were all restricted to the one-group grey model. In this paper, a non-grey model, namely the picket-fence model, is considered for a two temperature non-equilibrium radiative transfer problem in an infinite medium. The analytical solutions, as functions of space and time, are constructed in the form of infinite integrals for both the diffusion description and transport description. These expressions are evaluated numerically and the benchmark results are generated. The asymptotic solutions for large and small times are also derived in terms of elementary functions and are compared with the exact results. Comparisons are given between the transport and diffusion solutions and between the grey and non-grey solutions. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  12. Fast burner reactor benchmark results from the NEA working party on physics of plutonium recycle

    International Nuclear Information System (INIS)

    Hill, R.N.; Wade, D.C.; Palmiotti, G.

    1995-01-01

    As part of a program proposed by the OECD/NEA Working Party on Physics of Plutonium Recycling (WPPR) to evaluate different scenarios for the use of plutonium, fast reactor physics benchmarks were developed; fuel cycle scenarios using either PUREX/TRUEX (oxide fuel) or pyrometallurgical (metal fuel) separation technologies were specified. These benchmarks were designed to evaluate the nuclear performance and radiotoxicity impact of a transuranic-burning fast reactor system. International benchmark results are summarized in this paper, and key conclusions are highlighted

  13. Performance of exchange-correlation functionals in density functional theory calculations for liquid metal: A benchmark test for sodium

    Science.gov (United States)

    Han, Jeong-Hwan; Oda, Takuji

    2018-04-01

    The performance of exchange-correlation functionals in density-functional theory (DFT) calculations for liquid metals has not been sufficiently examined. In the present study, benchmark tests of the Perdew-Burke-Ernzerhof (PBE), Armiento-Mattsson 2005 (AM05), PBE re-parameterized for solids, and local density approximation (LDA) functionals are conducted for liquid sodium. The pair correlation function, equilibrium atomic volume, bulk modulus, and relative enthalpy are evaluated at 600 K and 1000 K. Compared with the available experimental data, the errors range from -11.2% to 0.0% for the atomic volume, from -5.2% to 22.0% for the bulk modulus, and from -3.5% to 2.5% for the relative enthalpy, depending on the DFT functional. The generalized gradient approximation functionals are superior to the LDA functional, and the PBE and AM05 functionals exhibit the best performance. In addition, we assess whether the error tendency in liquid simulations is comparable to that in solid simulations; the comparison suggests that the atomic volume and relative enthalpy performances are comparable between solid and liquid states but that the bulk modulus performance is not. These benchmark test results indicate that the results of liquid simulations depend significantly on the exchange-correlation functional and that the DFT functional performance in solid simulations can be used to roughly estimate the performance in liquid simulations.

  14. Analysis result for OECD benchmark on thermal fatigue problem

    International Nuclear Information System (INIS)

    Kamaya, Masayuki; Nakamura, Akira; Fujii, Yuzou

    2005-01-01

    The main objective of this analysis is to understand the crack growth behavior under three-dimensional (3D) thermal fatigue by conducting 3D crack initiation and propagation analyses. The possibility of crack propagation through the wall thickness of the pipe, and the accuracy of the prediction of crack initiation and propagation, are of major interest. In this report, in order to estimate the heat transfer coefficients and evaluate the thermal stress, a conventional finite element analysis (FEA) is conducted. Then, the crack driving force is evaluated using the finite element alternating method (FEAM), which can derive the stress intensity factor (SIF) under 3D mechanical loading based on finite element analysis without generating a mesh for the cracked body. Through these two realistic 3D numerical analyses, the crack initiation and propagation behavior has been predicted. The thermal fatigue crack initiation and propagation behavior were numerically analyzed. The conventional FEA was conducted in order to estimate the heat transfer coefficients and evaluate the thermal stress. Then, the FEAM was conducted to evaluate the SIFs of single surface cracks and interacting multiple cracks, and crack growth was evaluated. The results are summarized as follows: 1. The heat transfer coefficients were estimated as H_air = 40 W/m²K and H_water = 5000 W/m²K. This allows simulation of the change in temperature with time at the crack initiation points obtained in the experiment. 2. The maximum stress occurred along the line of symmetry, and the maximum Mises equivalent stress was 572 MPa. 3. By taking the effect of mean stress into account according to the modified Goodman diagram, the equivalent stress range and the number of cycles to crack initiation were estimated as 1093 MPa and 3.8×10⁴, respectively, with the tensile strength assumed to be 600 MPa. 4. The evaluated SIFs show that longitudinal cracks can penetrate the wall of the pipe
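
    Item 3 folds the mean stress into a fully reversed equivalent amplitude via the modified Goodman diagram. A minimal sketch of that correction, using the textbook relation sigma_ar = sigma_a / (1 - sigma_m / sigma_u); note that the numeric values in the usage note below are hypothetical, not the benchmark's actual stresses.

```python
def goodman_equivalent_amplitude(sigma_a, sigma_m, sigma_u):
    """Fully reversed equivalent stress amplitude from the modified
    Goodman relation: sigma_ar = sigma_a / (1 - sigma_m / sigma_u).

    sigma_a -- stress amplitude [MPa]
    sigma_m -- mean stress [MPa], tensile positive
    sigma_u -- ultimate tensile strength [MPa]
    """
    if sigma_m >= sigma_u:
        raise ValueError("mean stress must lie below the tensile strength")
    return sigma_a / (1.0 - sigma_m / sigma_u)
```

    For example, with an assumed amplitude of 400 MPa, an assumed mean stress of 160 MPa, and the 600 MPa tensile strength quoted above, the equivalent amplitude is 400 / (1 - 160/600) ≈ 545 MPa.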

  15. Benchmark of Space Charge Simulations and Comparison with Experimental Results for High Intensity, Low Energy Accelerators

    CERN Document Server

    Cousineau, Sarah M

    2005-01-01

    Space charge effects are a major contributor to beam halo and emittance growth leading to beam loss in high intensity, low energy accelerators. As future accelerators strive towards unprecedented levels of beam intensity and beam loss control, a more comprehensive understanding of space charge effects is required. A wealth of simulation tools has been developed for modeling beams in linacs and rings, and with the growing availability of high-speed computing systems, computationally expensive problems that were inconceivable a decade ago are now being handled with relative ease. This has opened the field for realistic simulations of space charge effects, including detailed benchmarks with experimental data. A great deal of effort is being focused in this direction, and several recent benchmark studies have produced remarkably successful results. This paper reviews the achievements in space charge benchmarking in the last few years, and discusses the challenges that remain.

  16. Results of the ISPRS benchmark on urban object detection and 3D building reconstruction

    NARCIS (Netherlands)

    Rottensteiner, F.; Sohn, G.; Gerke, M.; Wegner, J.D.; Breitkopf, U.; Jung, J.

    2014-01-01

    For more than two decades, many efforts have been made to develop methods for extracting urban objects from data acquired by airborne sensors. In order to make the results of such algorithms more comparable, benchmarking data sets are of paramount importance. Such a data set, consisting of airborne

  17. Results of the GABLS3 diurnal-cycle benchmark for wind energy applications

    DEFF Research Database (Denmark)

    Rodrigo, J. Sanz; Allaerts, D.; Avila, M.

    2017-01-01

    errors are used to quantify model performance. The results of the benchmark are used to discuss input uncertainties from mesoscale modelling, different meso-micro coupling strategies (online vs offline) and consistency between RANS and LES codes when dealing with boundary-layer mean flow quantities. Overall, all the microscale simulations produce a consistent coupling with mesoscale forcings.

  18. Benchmarking density functional tight binding models for barrier heights and reaction energetics of organic molecules.

    Science.gov (United States)

    Gruden, Maja; Andjeklović, Ljubica; Jissy, Akkarapattiakal Kuriappan; Stepanović, Stepan; Zlatar, Matija; Cui, Qiang; Elstner, Marcus

    2017-09-30

    Density Functional Tight Binding (DFTB) models are two to three orders of magnitude faster than ab initio and Density Functional Theory (DFT) methods and therefore are particularly attractive in applications to large molecules and condensed phase systems. To establish the applicability of DFTB models to general chemical reactions, we conduct benchmark calculations for barrier heights and reaction energetics of organic molecules using existing databases and several new ones compiled in this study. Structures for the transition states and stable species have been fully optimized at the DFTB level, making it possible to characterize the reliability of DFTB models in a more thorough fashion compared to conducting single point energy calculations as done in previous benchmark studies. The encouraging results for the diverse sets of reactions studied here suggest that DFTB models, especially the most recent third-order version (DFTB3/3OB augmented with dispersion correction), in most cases provide satisfactory description of organic chemical reactions with accuracy almost comparable to popular DFT methods with large basis sets, although larger errors are also seen for certain cases. Therefore, DFTB models can be effective for mechanistic analysis (e.g., transition state search) of large (bio)molecules, especially when coupled with single point energy calculations at higher levels of theory. © 2017 Wiley Periodicals, Inc.
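
    Benchmark studies of this kind ultimately reduce to error statistics of computed barrier heights against a reference database. A minimal, generic sketch follows; the function name and the MAE/RMSE choice are assumptions, and the paper's exact error measures may differ.

```python
from math import sqrt

def error_stats(computed, reference):
    """Mean absolute error (MAE) and root-mean-square error (RMSE)
    of computed values against reference values (same units)."""
    diffs = [c - r for c, r in zip(computed, reference)]
    mae = sum(abs(d) for d in diffs) / len(diffs)
    rmse = sqrt(sum(d * d for d in diffs) / len(diffs))
    return mae, rmse
```

    RMSE weights outliers more heavily than MAE, so reporting both (as most benchmark papers do) exposes the "larger errors for certain cases" mentioned in the abstract.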

  19. The Monte Carlo performance benchmark test - AIMS, specifications and first results

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Faculty of Applied Sciences, Delft University of Technology (Netherlands); Martin, William R., E-mail: wrm@umich.edu [Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI (United States); Petrovic, Bojan, E-mail: Bojan.Petrovic@gatech.edu [Nuclear and Radiological Engineering, Georgia Institute of Technology, Atlanta, GA (United States)

    2011-07-01

    The Monte Carlo performance benchmark for detailed power density calculation in a full-size reactor core is organized under the auspices of the OECD NEA Data Bank. It aims at monitoring, over a range of years, the increase in performance, measured in terms of standard deviation and computer time, of Monte Carlo calculation of the power density in small volumes. A short description of the reactor geometry and composition is given. One of the unique features of the benchmark exercise is the possibility for participants to upload results at a web site of the NEA Data Bank, which enables online analysis of results and graphical display of how near we are to the goal of doing a detailed power distribution calculation with acceptable statistical uncertainty in an acceptable computing time. First results are discussed, which show that 10 to 100 billion histories must be simulated to reach a standard deviation of a few percent in the estimated power of most of the requested fuel zones. Even when using a large supercomputer, a considerable speedup is still needed to reach the target of 1 hour of computer time. An outlook is given of what to expect from this benchmark exercise over the years. Possible extensions of the benchmark for specific issues relevant in current Monte Carlo calculations for nuclear reactors are also discussed. (author)
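
    The scaling behind these history counts is that the standard deviation of a Monte Carlo estimate falls as 1/√N with the number of histories N. A small sketch with illustrative numbers (not results from the benchmark) shows why reducing the uncertainty by a factor of ten costs a factor of one hundred in histories:

```python
# Sketch: 1/sqrt(N) scaling of Monte Carlo statistical uncertainty.
# All numbers are illustrative, not results from the benchmark.

def histories_for_target(n_pilot, sigma_pilot, sigma_target):
    """Histories needed to reach sigma_target, given that a pilot run of
    n_pilot histories produced relative std dev sigma_pilot and that
    sigma scales as 1 / sqrt(N)."""
    return n_pilot * (sigma_pilot / sigma_target) ** 2

# Hypothetical pilot: 1e9 histories give a 10% std dev in a small fuel zone;
# reaching 1% then requires 100x as many histories.
n_needed = histories_for_target(1e9, 0.10, 0.01)
print(f"histories needed: {n_needed:.2e}")
```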

  20. The Monte Carlo performance benchmark test - AIMS, specifications and first results

    International Nuclear Information System (INIS)

    Hoogenboom, J. Eduard; Martin, William R.; Petrovic, Bojan

    2011-01-01

    The Monte Carlo performance benchmark for detailed power density calculation in a full-size reactor core is organized under the auspices of the OECD NEA Data Bank. It aims at monitoring, over a range of years, the increase in performance, measured in terms of standard deviation and computer time, of Monte Carlo calculation of the power density in small volumes. A short description of the reactor geometry and composition is given. One of the unique features of the benchmark exercise is the possibility for participants to upload results at a web site of the NEA Data Bank, which enables online analysis of results and graphical display of how near we are to the goal of doing a detailed power distribution calculation with acceptable statistical uncertainty in an acceptable computing time. First results are discussed, which show that 10 to 100 billion histories must be simulated to reach a standard deviation of a few percent in the estimated power of most of the requested fuel zones. Even when using a large supercomputer, a considerable speedup is still needed to reach the target of 1 hour of computer time. An outlook is given of what to expect from this benchmark exercise over the years. Possible extensions of the benchmark for specific issues relevant in current Monte Carlo calculations for nuclear reactors are also discussed. (author)

  1. Effects of benchmarking on the quality of type 2 diabetes care: results of the OPTIMISE (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment) study in Greece

    Science.gov (United States)

    Tsimihodimos, Vasilis; Kostapanos, Michael S.; Moulis, Alexandros; Nikas, Nikos; Elisaf, Moses S.

    2015-01-01

    Objectives: To investigate the effect of benchmarking on the quality of type 2 diabetes (T2DM) care in Greece. Methods: The OPTIMISE (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment) study [ClinicalTrials.gov identifier: NCT00681850] was an international multicenter, prospective cohort study. It included physicians randomized 3:1 to either receive benchmarking for glycated hemoglobin (HbA1c), systolic blood pressure (SBP) and low-density lipoprotein cholesterol (LDL-C) treatment targets (benchmarking group) or not (control group). The proportions of patients achieving the targets of the above-mentioned parameters were compared between groups after 12 months of treatment. Also, the proportions of patients achieving those targets at 12 months were compared with baseline in the benchmarking group. Results: In the Greek region, the OPTIMISE study included 797 adults with T2DM (570 in the benchmarking group). At month 12 the proportion of patients within the predefined targets for SBP and LDL-C was greater in the benchmarking compared with the control group (50.6 versus 35.8%, and 45.3 versus 36.1%, respectively). However, these differences were not statistically significant. No difference between groups was noted in the percentage of patients achieving the predefined target for HbA1c. At month 12 the increase in the percentage of patients achieving all three targets was greater in the benchmarking (5.9–15.0%) than in the control group (2.7–8.1%). In the benchmarking group more patients were on target regarding SBP (50.6% versus 29.8%), LDL-C (45.3% versus 31.3%) and HbA1c (63.8% versus 51.2%) at 12 months compared with baseline (p Benchmarking may comprise a promising tool for improving the quality of T2DM care. Nevertheless, target achievement rates of each, and of all three, quality indicators were suboptimal, indicating there are still unmet needs in the management of T2DM. PMID:26445642

  2. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom; Javier Ortensi; Sonat Sen; Hans Hammer

    2013-09-01

    The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III results.

  3. Benchmarking LES with wall-functions and RANS for fatigue problems in thermal–hydraulics systems

    Energy Technology Data Exchange (ETDEWEB)

    Tunstall, R., E-mail: ryan.tunstall@manchester.ac.uk [School of MACE, The University of Manchester, Manchester M13 9PL (United Kingdom); Laurence, D.; Prosser, R. [School of MACE, The University of Manchester, Manchester M13 9PL (United Kingdom); Skillen, A. [Scientific Computing Department, STFC Daresbury Laboratory, Warrington WA4 4AD (United Kingdom)

    2016-11-15

    Highlights: • We benchmark LES with blended wall-functions and low-Re RANS for a pipe bend and T-Junction. • Blended wall-laws allow the first cell from the wall to be placed anywhere in the boundary layer. • In both cases LES predictions improve as the first cell wall spacing is reduced. • Near-wall temperature fluctuations in the T-Junction are overpredicted by wall-modelled LES. • The EBRSM outperforms other RANS models for the pipe bend. - Abstract: In assessing whether nuclear plant components such as T-Junctions are likely to suffer thermal fatigue problems in service, CFD techniques need to provide accurate predictions for wall temperature fluctuations. Though it has been established that this is within the capabilities of wall-resolved LES, its high computational cost has prevented widespread usage in industry. In the present paper the suitability of LES with blended wall-functions, that allow the first cell to be placed in any part of the boundary layer, is assessed. Numerical results for the flows through a 90° pipe bend and a T-Junction are compared against experimental data. Both test cases contain areas where equilibrium laws are violated in practice. It is shown that reducing the first cell wall spacing improves agreement with experimental data by limiting the extent from the wall in which the solution is constrained to an equilibrium law. The LES with wall-function approach consistently overpredicts the near-wall temperature fluctuations in the T-Junction, suggesting that it can be considered as a conservative approach. We also benchmark a range of low-Re RANS models. EBRSM predictions for the 90° pipe bend are in significantly better agreement with experimental data than those from the other models. There are discrepancies from all RANS models in the case of the T-Junction.
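
    The practical quantity behind "first cell wall spacing" is the non-dimensional distance y+. As an illustrative sketch (not from the paper), the physical spacing for a target y+ can be estimated from a skin-friction correlation; here the Blasius correlation for smooth pipe flow is assumed, and all flow values are hypothetical:

```python
import math

# Illustrative sketch: first-cell wall-normal spacing for a target y+ in
# smooth pipe flow, assuming the Blasius correlation cf = 0.079 Re^(-1/4).
# Flow conditions below are invented, not taken from the benchmark cases.

def first_cell_height(y_plus, U, D, nu):
    Re = U * D / nu                    # bulk Reynolds number
    cf = 0.079 * Re ** -0.25           # Blasius skin-friction correlation
    u_tau = U * math.sqrt(cf / 2.0)    # friction velocity
    return y_plus * nu / u_tau         # y = y+ * nu / u_tau

# Hypothetical water flow: U = 1 m/s, D = 0.05 m, nu = 1e-6 m^2/s
h_resolved = first_cell_height(1.0, 1.0, 0.05, 1e-6)    # wall-resolved LES
h_modelled = first_cell_height(30.0, 1.0, 0.05, 1e-6)   # wall-function LES
print(f"y+ = 1:  {h_resolved * 1e6:.1f} micron")
print(f"y+ = 30: {h_modelled * 1e6:.0f} micron")
```

    The factor-of-30 difference in allowable first-cell height is what makes wall-function LES so much cheaper than wall-resolved LES.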

  4. Comparing the Robustness of Evolutionary Algorithms on the Basis of Benchmark Functions

    Directory of Open Access Journals (Sweden)

    DENIZ ULKER, E.

    2013-05-01

    In real-world optimization problems, even though solution quality is of great importance, the robustness of the solution is also an important aspect. This paper investigates how sensitive optimization algorithms are to variations of their control parameters and to the random initialization of the solution set for fixed control parameters. Three well-known evolutionary algorithms are compared: the Particle Swarm Optimization (PSO) algorithm, the Differential Evolution (DE) algorithm and the Harmony Search (HS) algorithm. Various benchmark functions with different characteristics are used for the evaluation of these algorithms. The experimental results show that the solution quality of the algorithms is not directly related to their robustness. In particular, an algorithm that is highly robust can have low solution quality, and an algorithm with high solution quality can be quite sensitive to parameter variations.
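
    Robustness in the sense used here can be measured by running the same optimizer on a benchmark function from many random initializations and examining the spread of the final objective values. A minimal sketch, with a simple (1+1)-style random search standing in for PSO/DE/HS and an illustrative sphere function:

```python
import random
import statistics

# Sketch: robustness as the spread of final objective values over
# independent random initializations. The optimizer here is a deliberately
# simple (1+1)-style random search, not PSO/DE/HS themselves.

def sphere(x):
    """Classic separable benchmark function, minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def random_search(f, dim, lo, hi, iters, seed):
    rng = random.Random(seed)
    best = [rng.uniform(lo, hi) for _ in range(dim)]
    best_val = f(best)
    for _ in range(iters):
        # Gaussian perturbation of the incumbent, clipped to the bounds
        cand = [min(hi, max(lo, b + rng.gauss(0.0, 0.5))) for b in best]
        val = f(cand)
        if val < best_val:
            best, best_val = cand, val
    return best_val

finals = [random_search(sphere, 5, -5.0, 5.0, 2000, seed) for seed in range(10)]
print(f"mean = {statistics.mean(finals):.4f}, "
      f"stdev = {statistics.stdev(finals):.4f}")
```

    A small standard deviation across seeds indicates a robust algorithm; the mean alone says nothing about that spread, which is the paper's point.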

  5. Final results of the fifth three-dimensional dynamic Atomic Energy Research benchmark problem calculations

    International Nuclear Information System (INIS)

    Hadek, J.

    1999-01-01

    The paper gives a brief survey of the fifth three-dimensional dynamic Atomic Energy Research benchmark calculation results obtained with the code DYN3D/ATHLET at NRI Rez. This benchmark was defined at the seventh Atomic Energy Research Symposium (Hoernitz near Zittau, 1997). Its initiating event is a symmetrical break of the main steam header at the end of the first fuel cycle, under hot shutdown conditions with one control rod group stuck out of the core. The calculations were performed with the externally coupled codes ATHLET Mod.1.1 Cycle C and DYN3DH1.1/M3. The standard WWER-440/213 input deck of the ATHLET code was adapted for benchmark purposes and for coupling with the code DYN3D. The first part of the paper contains brief characteristics of the NPP input deck and the reactor core model. The second part shows the time dependencies of important global and local parameters. In comparison with the results published at the eighth Atomic Energy Research Symposium (Bystrice nad Pernstejnem, 1998), the results published in this paper are based on improved ATHLET descriptions of the control and safety systems. (Author)

  6. OECD/NEA Burnup Credit Calculational Criticality Benchmark Phase I-B Results

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, M.D.

    1993-01-01

    Burnup credit is an ongoing technical concern for many countries that operate commercial nuclear power reactors. In a multinational cooperative effort to resolve burnup credit issues, a Burnup Credit Working Group has been formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development. This working group has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide, and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in their estimates of the spent fuel concentrations of most actinides. All methods agree to within 11% of the average for all fission products studied. Furthermore, most deviations are less than 10%, and many are less than 5%. The exceptions are Sm-149, Sm-151, and Gd-155.
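
    The intercomparison metric in such exercises is typically the percent deviation of each participant's prediction from the participant average. A sketch with invented isotopic concentrations (not data from the Phase I-B report):

```python
# Sketch: percent deviation of each submitted result from the participant
# average, the kind of intercomparison reported for isotopic predictions.
# The concentrations below are invented, not data from the report.

def deviation_from_mean(values):
    avg = sum(values) / len(values)
    return [100.0 * (v - avg) / avg for v in values]

# Hypothetical Pu-239 concentrations (kg/MTU) from five code systems
submissions = [5.61, 5.48, 5.72, 5.55, 5.64]
devs = deviation_from_mean(submissions)
print(["%+.2f%%" % d for d in devs])
print("all within 10% of the average:", all(abs(d) < 10.0 for d in devs))
```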

  7. OECD/NEA burnup credit calculational criticality benchmark Phase I-B results

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, M.D.; Parks, C.V. [Oak Ridge National Lab., TN (United States); Brady, M.C. [Sandia National Labs., Las Vegas, NV (United States)

    1996-06-01

    In most countries, criticality analysis of LWR fuel stored in racks and casks has assumed that the fuel is fresh with the maximum allowable initial enrichment. This assumption has led to the design of widely spaced and/or highly poisoned storage and transport arrays. If credit is assumed for fuel burnup, initial enrichment limitations can be raised in existing systems, and more compact and economical arrays can be designed. Such reliance on the reduced reactivity of spent fuel for criticality control is referred to as burnup credit. The Burnup Credit Working Group, formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development, has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in the ability to estimate the spent fuel concentrations of most actinides. All methods agree within 11% about the average for all fission products studied. Most deviations are less than 10%, and many are less than 5%. The exceptions are Sm 149, Sm 151, and Gd 155.

  8. OECD/NEA Burnup Credit Calculational Criticality Benchmark Phase I-B Results

    International Nuclear Information System (INIS)

    DeHart, M.D.

    1993-01-01

    Burnup credit is an ongoing technical concern for many countries that operate commercial nuclear power reactors. In a multinational cooperative effort to resolve burnup credit issues, a Burnup Credit Working Group has been formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development. This working group has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide, and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in their estimates of the spent fuel concentrations of most actinides. All methods agree to within 11% of the average for all fission products studied. Furthermore, most deviations are less than 10%, and many are less than 5%. The exceptions are Sm-149, Sm-151, and Gd-155.

  9. OECD/NEA burnup credit calculational criticality benchmark Phase I-B results

    International Nuclear Information System (INIS)

    DeHart, M.D.; Parks, C.V.; Brady, M.C.

    1996-06-01

    In most countries, criticality analysis of LWR fuel stored in racks and casks has assumed that the fuel is fresh with the maximum allowable initial enrichment. This assumption has led to the design of widely spaced and/or highly poisoned storage and transport arrays. If credit is assumed for fuel burnup, initial enrichment limitations can be raised in existing systems, and more compact and economical arrays can be designed. Such reliance on the reduced reactivity of spent fuel for criticality control is referred to as burnup credit. The Burnup Credit Working Group, formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development, has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in the ability to estimate the spent fuel concentrations of most actinides. All methods agree within 11% about the average for all fission products studied. Most deviations are less than 10%, and many are less than 5%. The exceptions are Sm 149, Sm 151, and Gd 155

  10. International Benchmark on Pressurised Water Reactor Sub-channel and Bundle Tests. Volume II: Benchmark Results of Phase I: Void Distribution

    International Nuclear Information System (INIS)

    Rubin, Adam; Avramova, Maria; Velazquez-Lozada, Alexander

    2016-03-01

    This report summarises the first phase of the Nuclear Energy Agency (NEA) and the US Nuclear Regulatory Commission Benchmark based on NUPEC PWR Sub-channel and Bundle Tests (PSBT), which was intended to provide data for the verification of void distribution models in participants' codes. This phase was composed of four exercises: Exercise 1, a steady-state single sub-channel benchmark; Exercise 2, a steady-state rod bundle benchmark; Exercise 3, a transient rod bundle benchmark; and Exercise 4, a pressure drop benchmark. The experimental data provided to the participants of this benchmark are from a series of void measurement tests using full-size mock-ups for both Boiling Water Reactors (BWRs) and Pressurised Water Reactors (PWRs). These tests were performed from 1987 to 1995 by the Nuclear Power Engineering Corporation (NUPEC) in Japan and made available by the Japan Nuclear Energy Safety Organisation (JNES) for the purposes of this benchmark, which was organised by Pennsylvania State University. Twenty-one institutions from nine countries participated in this benchmark. Seventeen different computer codes were used in Exercises 1, 2, 3 and 4, among them porous media, sub-channel, systems thermal-hydraulic and Computational Fluid Dynamics (CFD) codes. It was observed that the codes tended to overpredict the thermal equilibrium quality at lower elevations and underpredict it at higher elevations. There was also a tendency to overpredict void fraction at lower elevations and underpredict it at higher elevations for the bundle test cases. The overprediction of void fraction at low elevations is likely caused by the x-ray densitometer measurement method used. Under sub-cooled boiling conditions, the voids accumulate at heated surfaces (and are therefore not seen in the centre of the sub-channel, where the measurements are being taken), so the experimentally-determined void fractions will be lower than the actual void fractions. Some of the best

  11. 3-D extension C5G7 MOX benchmark results using PARTISN

    Energy Technology Data Exchange (ETDEWEB)

    Dahl, J.A. [Los Alamos National Laboratory, CCS-4 Transport Methods Group, Los Alamos, NM (United States)

    2005-07-01

    We have participated in the Expert Group on 3-D Radiation Transport Benchmarks' proposed 3-dimensional extension C5G7 MOX problems using the discrete ordinates transport code PARTISN. The computational mesh was created using the FRAC-IN-THE-BOX code, which produces a volume fraction Cartesian mesh from combinatorial geometry descriptions. k-eff eigenvalues, maximum pin powers, and average fuel assembly powers are reported and compared to a benchmark quality Monte Carlo solution. We also present a two-dimensional mesh convergence study examining the effects of using volume fractions to approximate the water-pin cell interface. It appears that the control rod pin cell must be meshed twice as fine as a fuel pin cell in order to achieve the same spatial error when using the volume fraction method to define water channel-pin cell interfaces. It is noted that the previous PARTISN results provided to the OECD/NEA Expert Group on 3-dimensional Radiation Benchmarks contained a cross section error, and therefore should be disregarded.

  12. 3-D extension C5G7 MOX benchmark results using PARTISN

    International Nuclear Information System (INIS)

    Dahl, J.A.

    2005-01-01

    We have participated in the Expert Group on 3-D Radiation Transport Benchmarks' proposed 3-dimensional extension C5G7 MOX problems using the discrete ordinates transport code PARTISN. The computational mesh was created using the FRAC-IN-THE-BOX code, which produces a volume fraction Cartesian mesh from combinatorial geometry descriptions. k-eff eigenvalues, maximum pin powers, and average fuel assembly powers are reported and compared to a benchmark quality Monte Carlo solution. We also present a two-dimensional mesh convergence study examining the effects of using volume fractions to approximate the water-pin cell interface. It appears that the control rod pin cell must be meshed twice as fine as a fuel pin cell in order to achieve the same spatial error when using the volume fraction method to define water channel-pin cell interfaces. It is noted that the previous PARTISN results provided to the OECD/NEA Expert Group on 3-dimensional Radiation Benchmarks contained a cross section error, and therefore should be disregarded.

  13. The reactive transport benchmark proposed by GdR MoMaS: presentation and first results

    Energy Technology Data Exchange (ETDEWEB)

    Carrayrou, J. [Institut de Mecanique des Fluides et des Solides, UMR ULP-CNRS 7507, 67 - Strasbourg (France); Lagneau, V. [Ecole des Mines de Paris, Centre de Geosciences, 77 - Fontainebleau (France)

    2007-07-01

    We present here the current context of reactive transport modelling and its major numerical challenges. The GdR MoMaS proposes a benchmark on reactive transport. We present this benchmark and some results obtained for it with two reactive transport codes, HYTEC and SPECY. (authors)

  14. The reactive transport benchmark proposed by GdR MoMaS: presentation and first results

    International Nuclear Information System (INIS)

    Carrayrou, J.; Lagneau, V.

    2007-01-01

    We present here the current context of reactive transport modelling and its major numerical challenges. The GdR MoMaS proposes a benchmark on reactive transport. We present this benchmark and some results obtained for it with two reactive transport codes, HYTEC and SPECY. (authors)

  15. Benchmarking FeCr empirical potentials against density functional theory data

    International Nuclear Information System (INIS)

    Klaver, T P C; Bonny, G; Terentyev, D; Olsson, P

    2010-01-01

    Three semi-empirical force field FeCr potentials, two within the formalism of the two-band model and one within the formalism of the concentration dependent model, have been benchmarked against a wide variety of density functional theory (DFT) structures. The benchmarking allows an assessment of how reliable empirical potential results are in different areas relevant to radiation damage modelling. The DFT data consist of defect-free structures, structures with single interstitials and structures with small di- and tri-interstitial clusters. All three potentials reproduce the general trend of the heat of formation (h.o.f.) quite well. The most important shortcomings of the original two-band model potential are the low or even negative h.o.f. for Cr-rich structures and the lack of a strong repulsion when moving two solute Cr atoms from being second-nearest neighbours to nearest neighbours. The newer two-band model potential partly solves the first problem. The most important shortcoming in the concentration dependent model potential is the magnitude of the Cr–Cr repulsion, being too strong at short distances and mostly absent at longer distances. Both two-band model potentials do reproduce long-range Cr–Cr repulsion. For interstitials the two-band model potentials reproduce a number of Cr–interstitial binding energies surprisingly well, in contrast to the concentration dependent model potential. For Cr interacting with clusters, the result can sometimes be directly extrapolated from Cr interacting with single interstitials, both according to DFT and the three empirical potentials

  16. A note on bound constraints handling for the IEEE CEC'05 benchmark function suite.

    Science.gov (United States)

    Liao, Tianjun; Molina, Daniel; de Oca, Marco A Montes; Stützle, Thomas

    2014-01-01

    The benchmark functions and some of the algorithms proposed for the special session on real parameter optimization of the 2005 IEEE Congress on Evolutionary Computation (CEC'05) have played and still play an important role in the assessment of the state of the art in continuous optimization. In this article, we show that if bound constraints are not enforced for the final reported solutions, state-of-the-art algorithms produce infeasible best candidate solutions for the majority of functions of the IEEE CEC'05 benchmark function suite. This occurs even though the optima of the CEC'05 functions are within the specified bounds. This phenomenon has important implications on algorithm comparisons, and therefore on algorithm designs. This article's goal is to draw the attention of the community to the fact that some authors might have drawn wrong conclusions from experiments using the CEC'05 problems.
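
    The issue raised in the article is easy to check mechanically: verify that a reported best solution lies inside the benchmark's box constraints, and clip it back if not. A minimal sketch (the bounds and candidate values are illustrative, not from the CEC'05 suite):

```python
# Sketch: feasibility check and clipping repair for box-constrained
# benchmark problems. Bounds and the candidate solution are illustrative.

def check_and_clip(solution, lower, upper):
    """Return (is_feasible, clipped_copy) for a reported solution."""
    feasible = all(lo <= x <= hi for x, lo, hi in zip(solution, lower, upper))
    clipped = [min(hi, max(lo, x)) for x, lo, hi in zip(solution, lower, upper)]
    return feasible, clipped

# A candidate that drifted outside a CEC'05-style box [-5, 5]^3
x = [4.2, -5.7, 6.1]
feasible, repaired = check_and_clip(x, [-5.0] * 3, [5.0] * 3)
print(feasible)   # False: the reported "best" is infeasible
print(repaired)   # [4.2, -5.0, 5.0]
```

    The article's point is that this check should be applied to final reported solutions, not only during the search.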

  17. Validation of the WIMSD4M cross-section generation code with benchmark results

    International Nuclear Information System (INIS)

    Deen, J.R.; Woodruff, W.L.; Leal, L.E.

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D 2 O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented

  18. Validation of the WIMSD4M cross-section generation code with benchmark results

    Energy Technology Data Exchange (ETDEWEB)

    Deen, J.R.; Woodruff, W.L. [Argonne National Lab., IL (United States); Leal, L.E. [Oak Ridge National Lab., TN (United States)

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D{sub 2}O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  19. Jet Substructure at the Tevatron and LHC: New results, new tools, new benchmarks

    CERN Document Server

    Altheimer, A; Asquith, L; Brooijmans, G; Butterworth, J; Campanelli, M; Chapleau, B; Cholakian, A E; Chou, J P; Dasgupta, M; Davison, A; Dolen, J; Ellis, S D; Essig, R; Fan, J J; Field, R; Fregoso, A; Gallicchio, J; Gershtein, Y; Gomes, A; Haas, A; Halkiadakis, E; Halyo, V; Hoeche, S; Hook, A; Hornig, A; Huang, P; Izaguirre, E; Jankowiak, M; Kribs, G; Krohn, D; Larkoski, A J; Lath, A; Lee, C; Lee, S J; Loch, P; Maksimovic, P; Martinez, M; Miller, D W; Plehn, T; Prokofiev, K; Rahmat, R; Rappoccio, S; Safonov, A; Salam, G P; Schumann, S; Schwartz, M D; Schwartzman, A; Seymour, M; Shao, J; Sinervo, P; Son, M; Soper, D E; Spannowsky, M; Stewart, I W; Strassler, M; Strauss, E; Takeuchi, M; Thaler, J; Thomas, S; Tweedie, B; Vasquez Sierra, R; Vermilion, C K; Villaplana, M; Vos, M; Wacker, J; Walker, D; Walsh, J R; Wang, L-T; Wilbur, S; Yavin, I; Zhu, W

    2012-01-01

    In this report we review recent theoretical progress and the latest experimental results in jet substructure from the Tevatron and the LHC. We review the status of and outlook for calculation and simulation tools for studying jet substructure. Following up on the report of the Boost 2010 workshop, we present a new set of benchmark comparisons of substructure techniques, focusing on the set of variables and grooming methods that are collectively known as "top taggers". To facilitate further exploration, we have attempted to collect, harmonise, and publish software implementations of these techniques.

  20. Benchmarks for electronically excited states: Time-dependent density functional theory and density functional theory based multireference configuration interaction

    DEFF Research Database (Denmark)

    Silva-Junior, Mario R.; Schreiber, Marko; Sauer, Stephan P. A.

    2008-01-01

    Time-dependent density functional theory (TD-DFT) and DFT-based multireference configuration interaction (DFT/MRCI) calculations are reported for a recently proposed benchmark set of 28 medium-sized organic molecules. Vertical excitation energies, oscillator strengths, and excited-state dipole...

  1. The VENUS-7 benchmarks. Results from state-of-the-art transport codes and nuclear data

    International Nuclear Information System (INIS)

    Zwermann, Winfried; Pautz, Andreas; Timm, Wolf

    2010-01-01

For the validation of both nuclear data and computational methods, comparisons with experimental data are necessary. Most advantageous are assemblies where not only the multiplication factors or critical parameters were measured, but also additional quantities like reactivity differences or pin-wise fission rate distributions have been assessed. Currently there is a comprehensive activity to evaluate such measurements and incorporate them in the International Handbook of Evaluated Reactor Physics Benchmark Experiments. A large number of such experiments was performed at the VENUS zero power reactor at SCK/CEN in Belgium in the sixties and seventies. The VENUS-7 series was specified as an international benchmark within the OECD/NEA Working Party on Scientific Issues of Reactor Systems (WPRS), and results obtained with various codes and nuclear data evaluations were summarized. In the present paper, results of high-accuracy transport codes with full spatial resolution with up-to-date nuclear data libraries from the JEFF and ENDF/B evaluations are presented. The comparisons of the results, both code-to-code and with the measured data, are augmented by uncertainty and sensitivity analyses with respect to nuclear data uncertainties. For the multiplication factors, these are performed with the TSUNAMI-3D code from the SCALE system. In addition, uncertainties in the reactivity differences are analyzed with the TSAR code which is available from the current SCALE-6 version. (orig.)

  2. Accelerator driven systems. ADS benchmark calculations. Results of stage 2. Radiotoxic waste transmutation

    Energy Technology Data Exchange (ETDEWEB)

Freudenreich, W.E.; Gruppelaar, H.

    1998-12-01

This report contains the results of calculations made at ECN-Petten of a benchmark to study the neutronic potential of a modular fast spectrum ADS (Accelerator-Driven System) for radiotoxic waste transmutation. The study is focused on the incineration of TRans-Uranium elements (TRU), Minor Actinides (MA) and Long-Lived Fission Products (LLFP), in this case {sup 99}Tc. The benchmark exercise is made in the framework of an IAEA Co-ordinated Research Programme. A simplified description of an ADS, restricted to the reactor part, with TRU or MA fuel (k{sub eff}=0.96) has been analysed. All spectrum calculations have been performed with the Monte Carlo code MCNP-4A. The burnup calculations have been performed with the code FISPACT coupled to MCNP-4A by means of our OCTOPUS system. The cross sections are based upon JEF-2.2 for transport calculations and supplemented with EAF-4 data for inventory calculations. The determined quantities are: core dimensions, fuel inventories, system power, sensitivity on external source spectrum and waste transmutation rates. The main conclusions are: The MA-burner requires only a small accelerator current increase during burnup, in contrast to the TRU-burner. The {sup 99}Tc-burner has a large initial loading; a more effective design may be possible. 5 refs.

  3. Accelerator driven systems. ADS benchmark calculations. Results of stage 2. Radiotoxic waste transmutation

    International Nuclear Information System (INIS)

    Freudenreich, W.E.; Gruppelaar, H.

    1998-12-01

This report contains the results of calculations made at ECN-Petten of a benchmark to study the neutronic potential of a modular fast spectrum ADS (Accelerator-Driven System) for radiotoxic waste transmutation. The study is focused on the incineration of TRans-Uranium elements (TRU), Minor Actinides (MA) and Long-Lived Fission Products (LLFP), in this case 99Tc. The benchmark exercise is made in the framework of an IAEA Co-ordinated Research Programme. A simplified description of an ADS, restricted to the reactor part, with TRU or MA fuel (k_eff = 0.96) has been analysed. All spectrum calculations have been performed with the Monte Carlo code MCNP-4A. The burnup calculations have been performed with the code FISPACT coupled to MCNP-4A by means of our OCTOPUS system. The cross sections are based upon JEF-2.2 for transport calculations and supplemented with EAF-4 data for inventory calculations. The determined quantities are: core dimensions, fuel inventories, system power, sensitivity on external source spectrum and waste transmutation rates. The main conclusions are: The MA-burner requires only a small accelerator current increase during burnup, in contrast to the TRU-burner. The 99Tc-burner has a large initial loading; a more effective design may be possible. 5 refs
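The contrast the abstract draws between the MA-burner (small current increase during burnup) and the TRU-burner follows from the standard source-multiplication relation for a subcritical core: at constant fission power, the required external source strength, and hence the accelerator beam current, scales as (1 - k_eff)/k_eff. A minimal sketch with illustrative reactivity values only (the benchmark's actual burnup-dependent k_eff is not reproduced here):

```python
# Relative external-source (i.e. beam current) demand to hold fission power
# constant in a subcritical core: S is proportional to (1 - k_eff) / k_eff.
# The k_eff values below are illustrative placeholders.

def relative_source(k_eff, k_ref=0.96):
    """Source strength needed at k_eff, relative to the reference state k_ref."""
    return ((1.0 - k_eff) / k_eff) / ((1.0 - k_ref) / k_ref)

for k in (0.96, 0.95, 0.94):
    print(f"k_eff = {k:.2f}: relative beam current = {relative_source(k):.3f}")
```

A drop of one percent in k_eff from the 0.96 reference already demands roughly a quarter more beam current, which is why a fuel whose reactivity swing during burnup is small (the MA-burner) needs almost no current adjustment.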

  4. Analysis of the European results on the HTTR's core physics benchmarks

    International Nuclear Information System (INIS)

    Raepsaet, X.; Damian, F.; Ohlig, U.A.; Brockmann, H.J.; Haas, J.B.M. de; Wallerboss, E.M.

    2002-01-01

Within the framework of the European contract HTR-N1, calculations are performed on the benchmark problems of the HTTR's start-up core physics experiments, initially proposed by the IAEA in a Co-ordinated Research Programme. Three European partners, the FZJ in Germany, NRG and IRI in the Netherlands, and CEA in France, have joined this work package with the aim of validating their calculational methods. Pre-test and post-test calculational results, obtained by the partners, are compared with each other and with the experiment. Parts of the discrepancies between experiment and pre-test predictions are analysed and tackled by different treatments. In the case of the Monte Carlo code TRIPOLI4, used by CEA, the discrepancy between measurement and calculation at the first criticality is reduced to Δk/k∼0.85% when considering the revised data of the HTTR benchmark. In the case of the diffusion codes, this discrepancy is reduced to Δk/k∼0.8% (FZJ) and 2.7 or 1.8% (CEA). (author)

  5. Comparison of the results of the fifth dynamic AER benchmark-a benchmark for coupled thermohydraulic system/three-dimensional hexagonal kinetic core models

    International Nuclear Information System (INIS)

    Kliem, S.

    1998-01-01

The fifth dynamic benchmark was defined at the seventh AER Symposium, held in Hoernitz, Germany, in 1997. It is the first benchmark for coupled thermohydraulic system/three-dimensional hexagonal neutron kinetic core models. In this benchmark the interaction between the components of a WWER-440 NPP and the reactor core has been investigated. The initiating event is a symmetrical break of the main steam header at the end of the first fuel cycle under hot shutdown conditions, with one control rod group stuck. This break causes an overcooling of the primary circuit. During this overcooling the scram reactivity is compensated and the scrammed reactor becomes recritical. The calculation was continued until the highly borated water from the high-pressure injection system terminated the power excursion. Each participant used their own best-estimate nuclear cross-section data; only the initial subcriticality at the beginning of the transient was given. Solutions were received from the Kurchatov Institute, Russia, with the code BIPR8/ATHLET; VTT Energy, Finland, with HEXTRAN/SMABRE; NRI Rez, Czech Republic, with DYN3D/ATHLET; KFKI Budapest, Hungary, with KIKO3D/ATHLET; and FZR, Germany, with DYN3D/ATHLET. In this paper the results are compared. Besides the comparison of global results, the behaviour of several thermohydraulic and neutron kinetic parameters is presented to discuss the revealed differences between the solutions. (Authors)

  6. Comparison of typical inelastic analysis predictions with benchmark problem experimental results

    International Nuclear Information System (INIS)

    Clinard, J.A.; Corum, J.M.; Sartory, W.K.

    1975-01-01

    The results of exemplary inelastic analyses are presented for a series of experimental benchmark problems. Consistent analytical procedures and constitutive relations were used in each of the analyses, and published material behavior data were used in all cases. Two finite-element inelastic computer programs were employed. These programs implement the analysis procedures and constitutive equations for Type 304 stainless steel that are currently used in many analyses of elevated-temperature nuclear reactor system components. The analysis procedures and constitutive relations are briefly discussed, and representative analytical results are presented and compared to the test data. The results that are presented demonstrate the feasibility of performing inelastic analyses, and they are indicative of the general level of agreement that the analyst might expect when using conventional inelastic analysis procedures. (U.S.)

  7. Comparison of typical inelastic analysis predictions with benchmark problem experimental results

    International Nuclear Information System (INIS)

    Clinard, J.A.; Corum, J.M.; Sartory, W.K.

    1975-01-01

    The results of exemplary inelastic analyses for experimental benchmark problems on reactor components are presented. Consistent analytical procedures and constitutive relations were used in each of the analyses, and the material behavior data presented in the Appendix were used in all cases. Two finite-element inelastic computer programs were employed. These programs implement the analysis procedures and constitutive equations for type 304 stainless steel that are currently used in many analyses of elevated-temperature nuclear reactor system components. The analysis procedures and constitutive relations are briefly discussed, and representative analytical results are presented and compared to the test data. The results that are presented demonstrate the feasibility of performing inelastic analyses for the types of problems discussed, and they are indicative of the general level of agreement that the analyst might expect when using conventional inelastic analysis procedures. (U.S.)

  8. Survey of the results of a two- and three-dimensional kinetics benchmark problem typical for a thermal reactor

    International Nuclear Information System (INIS)

    Werner, W.

    1975-01-01

In 1973, NEACRP and CSNI posed a number of kinetics benchmark problems intended to be solved by different groups. Comparison of the submitted results should lead to estimates of the accuracy and efficiency of the employed codes. This was felt to be of great value since the codes involved are becoming more and more important in the field of reactor safety. In this paper the results of the 2D and 3D benchmark problems for a BWR are presented. The specification of the problem is included in the appendix of this survey. For the 2D benchmark problem, 5 contributions were obtained, while for the 3D benchmark problem 2 contributions were submitted. (orig./RW) [de

  9. Comparative analysis of exercise 2 results of the OECD WWER-1000 MSLB benchmark

    International Nuclear Information System (INIS)

    Kolev, N.; Petrov, N.; Royer, E.; Ivanov, B.; Ivanov, K.

    2006-01-01

In the framework of a joint effort between OECD/NEA, US DOE and CEA France, a coupled three-dimensional (3D) thermal-hydraulic/neutron kinetics benchmark for WWER-1000 was defined. Phase 2 of this benchmark is labeled W1000CT-2 and consists of the calculation of a vessel mixing experiment and main steam line break (MSLB) transients. The reference plant is Kozloduy-6 in Bulgaria. Plant data are available for code validation, consisting of one experiment of pump start-up (W1000CT-1) and one experiment of steam generator isolation (W1000CT-2). The validated codes can be used to calculate asymmetric MSLB transients involving similar mixing patterns. This paper summarizes a comparison of the available results for W1000CT-2 Exercise 2, devoted to core-vessel calculation with imposed MSLB vessel boundary conditions. Because of the recent re-calculation of the cross-section libraries, only core physics results from the PARCS and CRONOS codes could be compared. The comparison is code-to-code (including the BIPR7A/TVS-M library) and code vs. plant measured data in a steady state close to the MSLB initial state. The results provide a test of the cross-section libraries and show good agreement between plant measured and computed data. The comparison of full vessel calculations was made from the point of view of vessel mixing, considering mainly the coarse-mesh features of the flow. The FZR and INRNE results from multi-1D calculations with different mixing models are similar, while the FZK calculations with a coarse-3D vessel model show deviations from the others. These deviations seem to be due to an error in the use of a boundary condition after flow reversal (Authors)

  10. Mixed-oxide (MOX) fuel performance benchmark. Summary of the results for the PRIMO MOX rod BD8

    International Nuclear Information System (INIS)

Ott, L.J.; Sartori, E.; Costa, A.; Sobolev, V.; Lee, B-H.; Alekseev, P.N.; Shestopalov, A.A.; Mikityuk, K.O.; Fomichenko, P.A.; Shatrova, L.P.; Medvedev, A.V.; Bogatyr, S.M.; Khvostov, G.A.; Kuznetsov, V.I.; Stoenescu, R.; Chatwin, C.P.

    2009-01-01

The OECD/NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, nuclear fuel performance, and fuel cycle issues related to the disposition of weapons-grade plutonium as MOX fuel. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close cooperation with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A major part of these activities includes benchmark studies. This report describes the results of the PRIMO rod BD8 benchmark exercise, the second benchmark on MOX fuel behaviour carried out by the Task Force on Reactor-based Plutonium Disposition (TFRPD). The corresponding PRIMO experimental data have been released, compiled and reviewed for the International Fuel Performance Experiments (IFPE) database. The observed ranges (as noted in the text) in the predicted thermal and FGR responses are reasonable given the variety and combination of thermal conductivity and FGR models employed by the benchmark participants with their respective fuel performance codes.

  11. Benchmarking density-functional-theory calculations of rotational g tensors and magnetizabilities using accurate coupled-cluster calculations.

    Science.gov (United States)

    Lutnaes, Ola B; Teale, Andrew M; Helgaker, Trygve; Tozer, David J; Ruud, Kenneth; Gauss, Jürgen

    2009-10-14

    An accurate set of benchmark rotational g tensors and magnetizabilities are calculated using coupled-cluster singles-doubles (CCSD) theory and coupled-cluster single-doubles-perturbative-triples [CCSD(T)] theory, in a variety of basis sets consisting of (rotational) London atomic orbitals. The accuracy of the results obtained is established for the rotational g tensors by careful comparison with experimental data, taking into account zero-point vibrational corrections. After an analysis of the basis sets employed, extrapolation techniques are used to provide estimates of the basis-set-limit quantities, thereby establishing an accurate benchmark data set. The utility of the data set is demonstrated by examining a wide variety of density functionals for the calculation of these properties. None of the density-functional methods are competitive with the CCSD or CCSD(T) methods. The need for a careful consideration of vibrational effects is clearly illustrated. Finally, the pure coupled-cluster results are compared with the results of density-functional calculations constrained to give the same electronic density. The importance of current dependence in exchange-correlation functionals is discussed in light of this comparison.
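The extrapolation step mentioned in the abstract is commonly done with the two-point inverse-cube formula of Helgaker and co-workers for correlation energies. Whether the authors applied exactly this scheme to the rotational g tensors and magnetizabilities is not stated, so the following is only a generic sketch of the technique:

```python
def cbs_two_point(e_x, e_y, x, y):
    """Two-point X**-3 extrapolation to the complete-basis-set (CBS) limit,
    assuming the model E(X) = E_CBS + A / X**3 for cardinal numbers x < y
    (e.g. x = 3 for cc-pVTZ, y = 4 for cc-pVQZ)."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Synthetic check: data generated exactly from the assumed model.
e_cbs, a = -1.0, 0.5
e3 = e_cbs + a / 3**3
e4 = e_cbs + a / 4**3
print(cbs_two_point(e3, e4, 3, 4))  # recovers -1.0 exactly
```

With real data the model only holds approximately, so the extrapolated value carries a residual error; the abstract's "careful comparison with experimental data" is what establishes the basis-set-limit estimates as benchmark quality.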

  12. The OECD/NEA/NSC PBMR400 MW coupled neutronics thermal hydraulics transient benchmark - Steady-state results and status

    International Nuclear Information System (INIS)

    Reitsma, F.; Han, J.; Ivanov, K.; Sartori, E.

    2008-01-01

The PBMR is a High-Temperature Gas-cooled Reactor (HTGR) concept developed to be built in South Africa. The analysis tools used for core neutronic design and core safety analysis need to be verified and validated. Since only a few pebble-bed HTR experimental facilities or plant data are available, code-to-code comparisons are an essential part of the V and V plans. As part of this plan the PBMR 400 MW design and a representative set of transient cases is defined as an OECD benchmark. The scope of the benchmark is to establish a series of well-defined multi-dimensional computational benchmark problems with a common given set of cross-sections, to compare methods and tools in coupled neutronics and thermal hydraulics analysis with a specific focus on transient events. The OECD benchmark includes steady-state and transient cases. Although the focus of the benchmark is on the modelling of the transient behaviour of the PBMR core, it was also necessary to define some steady-state cases to ensure consistency between the different approaches before results of transient cases could be compared. This paper describes the status of the benchmark project and shows the results for the three steady-state exercises, defined as a standalone neutronics calculation, a standalone thermal-hydraulic core calculation, and a coupled neutronics/thermal-hydraulic simulation. (authors)

  13. Detection of sodium/water reaction in a steam generator: Results of a 1995 benchmark test

    International Nuclear Information System (INIS)

    Oriol, L.

    1997-01-01

The CEA analysis of the 1995 benchmark test focused on the location of the injections. Two techniques were tested: the pulse-timing technique and the time-domain delay-and-sum beamforming technique. The two methods gave coherent locations of the injector, even if there was a difference of 25% of the SGU height between the vertical locations. Prior to that analysis, the RMS values of the signals were calculated in different frequency bands. The results obtained in the 200-1000 Hz band were used to make a rough estimate of the beginnings of the injections, in order to determine the parts of the records on which the location signal processing could be carried out. (author). 2 refs, 8 figs, 2 tabs
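The band-limited RMS computation described above can be sketched with an FFT mask; this is a generic reconstruction of the technique, since the abstract does not detail the CEA filtering chain:

```python
import numpy as np

def band_rms(signal, fs, f_lo=200.0, f_hi=1000.0):
    """RMS of `signal` restricted to the [f_lo, f_hi] Hz band, via an FFT
    mask. Band edges follow the 200-1000 Hz band cited in the abstract."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0   # zero out-of-band bins
    band = np.fft.irfft(spec, n=len(signal))      # back to the time domain
    return float(np.sqrt(np.mean(band ** 2)))

fs = 8192.0
t = np.arange(8192) / fs
print(band_rms(np.sin(2 * np.pi * 500.0 * t), fs))  # in band: ~0.707
print(band_rms(np.sin(2 * np.pi * 50.0 * t), fs))   # out of band: ~0
```

Tracking this quantity over sliding windows of an acoustic record gives a simple onset detector: a sudden rise in the in-band RMS flags the start of an injection, which is how the abstract says the processing intervals were selected.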

  14. A comparison of recent results from HONDO III with the JSME nuclear shipping cask benchmark calculations

    International Nuclear Information System (INIS)

    Key, S.W.

    1985-01-01

    The results of two calculations related to the impact response of spent nuclear fuel shipping casks are compared to the benchmark results reported in a recent study by the Japan Society of Mechanical Engineers Subcommittee on Structural Analysis of Nuclear Shipping Casks. Two idealized impacts are considered. The first calculation utilizes a right circular cylinder of lead subjected to a 9.0 m free fall onto a rigid target, while the second calculation utilizes a stainless steel clad cylinder of lead subjected to the same impact conditions. For the first problem, four calculations from graphical results presented in the original study have been singled out for comparison with HONDO III. The results from DYNA3D, STEALTH, PISCES, and ABAQUS are reproduced. In the second problem, the results from four separate computer programs in the original study, ABAQUS, ANSYS, MARC, and PISCES, are used and compared with HONDO III. The current version of HONDO III contains a fully automated implementation of the explicit-explicit partitioning procedure for the central difference method time integration which results in a reduction of computational effort by a factor in excess of 5. The results reported here further support the conclusion of the original study that the explicit time integration schemes with automated time incrementation are effective and efficient techniques for computing the transient dynamic response of nuclear fuel shipping casks subject to impact loading. (orig.)

  15. WLUP benchmarks

    International Nuclear Information System (INIS)

    Leszczynski, Francisco

    2002-01-01

The IAEA-WIMS Library Update Project (WLUP) is in its final stage. The final library will be released in 2002. It is the result of research and development carried out by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. The organization, naming conventions, contents and documentation of the WLUP benchmarks are presented, together with an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE for the analysis and plotting of results is described, and some examples are given. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)

  16. HTR-PROTEUS benchmark calculations. Pt. 1. Unit cell results LEUPRO-1 and LEUPRO-2

    International Nuclear Information System (INIS)

    Hogenbirk, A.; Stad, R.C.L. van der; Janssen, A.J.; Klippel, H.T.; Kuijper, J.C.

    1995-09-01

In the framework of the IAEA Co-ordinated Research Programme (CRP) on 'Validation of Safety Related Physics Calculations for Low-Enriched (LEU) HTGRs', calculational benchmarks are performed on the basis of LEU-HTR pebble-bed critical experiments carried out in the PROTEUS facility at PSI, Switzerland. Of special interest is the treatment of the double heterogeneity of the fuel and the spherical fuel elements of these pebble-bed core configurations. Also of interest is the proper calculation of safety-related physics parameters like the effect of water ingress and control rod worth. This document describes the ECN results of the LEUPRO-1 and LEUPRO-2 unit-cell calculations performed with the codes WIMS-E, SCALE-4 and MCNP4A. Results for the LEUPRO-1 unit cell with 20% water ingress in the void are also reported, for both the singly and the doubly heterogeneous case. Emphasis is put on the intercomparison of the results obtained with the deterministic codes WIMS-E and SCALE-4 and the Monte Carlo code MCNP4A. The LEUPRO whole-core calculations will be reported later. (orig.)

  17. Results of the reliability benchmark exercise and the future CEC-JRC program

    International Nuclear Information System (INIS)

    Amendola, A.

    1985-01-01

As a contribution towards identifying problem areas and assessing probabilistic safety assessment (PSA) methods and procedures of analysis, JRC has organized a wide-ranging Benchmark Exercise on systems reliability. This has been executed by ten different teams involving seventeen organizations from nine European countries. The exercise has been based on a real case (the Auxiliary Feedwater System of the EDF Paluel PWR 1300 MWe Unit), starting from analysis of technical specifications, logical and topological layout, and operational procedures. The terms of reference included both qualitative and quantitative analyses. The subdivision of the exercise into different phases and the rules adopted allowed assessment of the different components of the spread of the overall results. It appeared that modelling uncertainties may overwhelm data uncertainties, and major efforts must be spent in order to improve the consistency and completeness of qualitative analysis. After the successful completion of the first exercise, the CEC-JRC program has planned separate exercises on the analysis of dependent failures and human factors before approaching the evaluation of a complete accident sequence

  18. Criticality benchmark results for the ENDF60 library with MCNP trademark

    International Nuclear Information System (INIS)

    Keen, N.D.; Frankle, S.C.; MacFarlane, R.E.

    1995-01-01

The continuous-energy neutron data library ENDF60, for use with the Monte Carlo N-Particle radiation transport code MCNP4A, was released in the fall of 1994. The ENDF60 library comprises 124 nuclide data files based on the ENDF/B-VI (B-VI) evaluations through Release 2. Fifty-two percent of these B-VI evaluations are translations from ENDF/B-V (B-V). The remaining forty-eight percent are new evaluations, which have sometimes changed significantly. Among these changes are greatly increased use of isotopic evaluations, more extensive resonance-parameter evaluations, and energy-angle correlated distributions for secondary particles. In particular, the upper energy limit for the resolved resonance region of 235U, 238U and 239Pu has been extended from 0.082, 4.0, and 0.301 keV to 2.25, 10.0, and 2.5 keV, respectively. As regulatory oversight has advanced and performing critical experiments has become more difficult, there has been an increased reliance on computational methods. For the criticality safety community, the performance of the combined transport code and data library is of interest. The purpose of this abstract is to provide benchmarking results to aid users in determining the best data library for their application.
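Criticality benchmark results of the kind the abstract provides are conventionally reported as calculated-over-experimental (C/E) ratios of k_eff and as deviations in pcm. A minimal sketch with a made-up k_eff value, not a result from the ENDF60 study:

```python
def c_over_e(k_calc, k_exp=1.0):
    """C/E ratio and deviation in pcm (1 pcm = 1e-5 in k) for one benchmark
    case; criticals have k_exp = 1.0 by definition."""
    return k_calc / k_exp, (k_calc - k_exp) * 1e5

ratio, pcm = c_over_e(0.99750)  # placeholder value for illustration
print(f"C/E = {ratio:.5f}, deviation = {pcm:+.0f} pcm")
```

Aggregating such deviations over a suite of benchmarks (mean bias, spread) is what lets a criticality safety analyst judge which library performs best for a given class of systems.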

  19. Application of PHEBUS results to benchmarking of nuclear plant safety codes

    International Nuclear Information System (INIS)

    Birchley, J.; Cripps, R.; Guentay, S.; Hosemann, J.P.

    2001-01-01

The PHEBUS Fission Product project comprises six nuclear reactor severe accident simulations, using prototypic core materials and representative geometry and boundary conditions for the coolant loop and containment. The data thus produced are being used to benchmark the computer tools used for nuclear plant accident analysis, in order to reduce the excessive conservatism typical of estimates of the radiological source term. A set of calculations has been carried out to simulate the results of experiment PHEBUS FPT-1 through each of its main stages, using computer models and methods analogous to those currently employed at PSI for assessments of Swiss nuclear plants. Good agreement for the core degradation and containment behaviour builds confidence in the models, while some open questions remain concerning aspects of the release of fission products from the fuel, their transport and chemical speciation. Of potentially great importance to the reduction in source term estimates is the formation of the non-volatile species silver iodide. Current investigations are focused on the uncertainty concerning fission product behaviour and the stability of silver iodide under irradiation. (author)

  20. VENUS-2 MOX Core Benchmark: Results of ORNL Calculations Using HELIOS-1.4

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, RJ

    2001-02-02

    The Task Force on Reactor-Based Plutonium Disposition, now an Expert Group, was set up through the Organization for Economic Cooperation and Development/Nuclear Energy Agency to facilitate technical assessments of burning weapons-grade plutonium mixed-oxide (MOX) fuel in U.S. pressurized-water reactors and Russian VVER nuclear reactors. More than ten countries participated to advance the work of the Task Force in a major initiative, which was a blind benchmark study to compare code benchmark calculations against experimental data for the VENUS-2 MOX core at SCK-CEN in Mol, Belgium. At the Oak Ridge National Laboratory, the HELIOS-1.4 code was used to perform a comprehensive study of pin-cell and core calculations for the VENUS-2 benchmark.

  1. A Heterogeneous Medium Analytical Benchmark

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1999-01-01

A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional subcritical heterogeneous plane-parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels of coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy).
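The iteration on quadrature order can be illustrated generically: double the Gauss-Legendre order until two successive results agree to a tolerance. This is only an analogue of the compound procedure named in the abstract, not its implementation:

```python
import numpy as np

def converge_quadrature(f, a, b, tol=1e-10, n0=4, n_max=1024):
    """Gauss-Legendre integration of f over [a, b], doubling the order
    until two successive results agree to within `tol`."""
    prev = None
    n = n0
    while n <= n_max:
        x, w = np.polynomial.legendre.leggauss(n)
        # map nodes and weights from [-1, 1] to [a, b]
        val = 0.5 * (b - a) * np.sum(w * f(0.5 * (b - a) * x + 0.5 * (a + b)))
        if prev is not None and abs(val - prev) < tol:
            return val, n
        prev, n = val, 2 * n
    raise RuntimeError("quadrature order did not converge")

val, order = converge_quadrature(np.exp, 0.0, 1.0)
print(val, order)  # integral of exp on [0, 1], i.e. ~ e - 1
```

In the benchmark proper the same idea is applied twice over, to both the angular quadrature order and the surface-source iteration, which is what justifies quoting 4 to 5 converged digits.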

  2. BWR stability analysis: methodology of the stability analysis and results of PSI for the NEA/NCR benchmark task

    International Nuclear Information System (INIS)

    Hennig, D.; Nechvatal, L.

    1996-09-01

    The report describes the PSI stability analysis methodology and the validation of this methodology based on the international OECD/NEA BWR stability benchmark task. In the frame of this work, the stability properties of some operation points of the NPP Ringhals 1 have been analysed and compared with the experimental results. (author) figs., tabs., 45 refs
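Stability properties in such benchmark comparisons are commonly summarized by the decay ratio, the amplitude ratio of two successive peaks of the oscillatory power response. A minimal sketch of that estimate (illustrative only, not the PSI methodology):

```python
import numpy as np

def decay_ratio(signal):
    # Estimate the decay ratio as the amplitude ratio of the first two
    # successive interior maxima of an oscillatory response
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]]
    if len(peaks) < 2:
        raise ValueError("need at least two peaks")
    return signal[peaks[1]] / signal[peaks[0]]

# Synthetic damped oscillation: successive peaks are one period T = 1 s apart,
# so the decay ratio should equal exp(sigma * T)
t = np.linspace(0.0, 10.0, 5001)
sigma, omega = -0.3, 2.0 * np.pi
y = np.exp(sigma * t) * np.cos(omega * t)
dr = decay_ratio(y)  # ~ exp(-0.3)
```

A decay ratio below 1 indicates a damped (stable) operating point; values approaching 1 signal the onset of instability.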

  3. Characterization of the dynamic friction of woven fabrics: Experimental methods and benchmark results

    NARCIS (Netherlands)

    Sachs, Ulrich; Akkerman, Remko; Fetfatsidis, K.; Vidal-Sallé, E.; Schumacher, J.; Ziegmann, G.; Allaoui, S.; Hivet, G.; Maron, B.; Vanclooster, K.; Lomov, S.V.

    2014-01-01

    A benchmark exercise was conducted to compare various friction test set-ups with respect to the measured coefficients of friction. The friction was determined between Twintex®PP, a fabric of commingled yarns of glass and polypropylene filaments, and a metal surface. The same material was supplied to

  4. The OECD/NEA/NSC PBMR 400 MW coupled neutronics thermal hydraulics transient benchmark: transient results - 290

    International Nuclear Information System (INIS)

    Strydom, G.; Reitsma, F.; Ngeleka, P.T.; Ivanov, K.N.

    2010-01-01

    The PBMR is a High-Temperature Gas-cooled Reactor (HTGR) concept developed to be built in South Africa. The analysis tools used for core neutronic design and core safety analysis need to be verified and validated, and code-to-code comparisons are an essential part of the V and V plans. As part of this plan the PBMR 400 MWth design and a representative set of transient exercises are defined as an OECD benchmark. The scope of the benchmark is to establish a series of well defined multi-dimensional computational benchmark problems with a common given set of cross sections, to compare methods and tools in coupled neutronics and thermal hydraulics analysis with a specific focus on transient events. This paper describes the current status of the benchmark project and shows the results for the six transient exercises, consisting of three Loss of Cooling Accidents, two Control Rod Withdrawal transients, a power load-follow transient, and a Helium over-cooling Accident. The participants' results are compared using a statistical method and possible areas of future code improvement are identified. (authors)

  5. Strong-coupling expansion for the momentum distribution of the Bose-Hubbard model with benchmarking against exact numerical results

    International Nuclear Information System (INIS)

    Freericks, J. K.; Krishnamurthy, H. R.; Kato, Yasuyuki; Kawashima, Naoki; Trivedi, Nandini

    2009-01-01

    A strong-coupling expansion for the Green's functions, self-energies, and correlation functions of the Bose-Hubbard model is developed. We illustrate the general formalism, which includes all possible (normal-phase) inhomogeneous effects, such as disorder or a trap potential, as well as effects of thermal excitations. The expansion is then employed to calculate the momentum distribution of the bosons in the Mott phase for an infinite homogeneous periodic system at zero temperature through third order in the hopping. By using scaling theory for the critical behavior at zero momentum and at the critical value of the hopping for the Mott insulator-to-superfluid transition along with a generalization of the random-phase-approximation-like form for the momentum distribution, we are able to extrapolate the series to infinite order and produce very accurate quantitative results for the momentum distribution in a simple functional form for one, two, and three dimensions. The accuracy is better in higher dimensions and is on the order of a few percent relative error everywhere except close to the critical value of the hopping divided by the on-site repulsion. In addition, we find simple phenomenological expressions for the Mott-phase lobes in two and three dimensions which are much more accurate than the truncated strong-coupling expansions and any other analytic approximation we are aware of. The strong-coupling expansions and scaling-theory results are benchmarked against numerically exact quantum Monte Carlo simulations in two and three dimensions and against density-matrix renormalization-group calculations in one dimension. These analytic expressions will be useful for quick comparison of experimental results to theory and in many cases can bypass the need for expensive numerical simulations.

  6. JNC results of BFS-62-3A benchmark calculation (CRP: Phase 5)

    International Nuclear Information System (INIS)

    Ishikawa, M.

    2004-01-01

    The present work gives the results of JNC, Japan, for Phase 5 of the IAEA CRP benchmark problem (BFS-62-3A critical experiment). The analytical method of JNC is based on the nuclear data library JENDL-3.2 and the group constant set JFS-3-J3.2R (a 70-group, ABBN-type self-shielding factor table based on JENDL-3.2), with current-weighted multigroup transport cross-sections as effective cross-sections. Three cell models for the BFS as-built tubes and pellets were used: (Case 1) a homogeneous model based on the IPPE definition; (Case 2) homogeneous atomic densities equivalent to JNC's heterogeneous calculation, used only to cross-check the adjusted correction factors; (Case 3) a heterogeneous model based on JNC's evaluation, a one-dimensional plate-stretch model with Tone's background cross-section method (CASUP code). The basic diffusion calculation was done in 18 groups with a three-dimensional Hex-Z model (CITATION code), using isotropic diffusion coefficients (Cases 1 and 2) and Benoist's anisotropic diffusion coefficients (Case 3). For sodium void reactivity, exact perturbation theory was applied to both the basic and the correction calculations: an ultra-fine energy group correction (approx. 100,000 group constants below 50 keV) and ABBN-type 175-group constants with shielding factors above 50 keV. A transport theory and mesh size correction was applied in 18 groups for the three-dimensional Hex-Z model (MINIHEX code, based on the S4-P0 transport method and developed by JNC). The effective delayed neutron fraction in the reactivity scale was fixed at 0.00623 per the IPPE evaluation. The analytical results for the criticality values and the sodium void reactivity coefficient obtained by JNC are presented. JNC cross-checked the homogeneous model and the adjusted correction factors submitted by IPPE and confirmed that they are consistent. The JNC standard system showed quite satisfactory analytical results for the criticality and the sodium void reactivity of the BFS-62-3A experiment. 
JNC also calculated the cross-section sensitivity coefficients of BFS

  7. VENUS-2 MOX Core Benchmark: Results of ORNL Calculations Using HELIOS-1.4 - Revised Report

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, RJ

    2001-06-01

    The Task Force on Reactor-Based Plutonium Disposition (TFRPD) was formed by the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) to study reactor physics, fuel performance, and fuel cycle issues related to the disposition of weapons-grade (WG) plutonium as mixed-oxide (MOX) reactor fuel. To advance the goals of the TFRPD, 10 countries and 12 institutions participated in a major TFRPD activity: a blind benchmark study to compare code calculations to experimental data for the VENUS-2 MOX core at SCK-CEN in Mol, Belgium. At Oak Ridge National Laboratory, the HELIOS-1.4 code system was used to perform the comprehensive study of pin-cell and MOX core calculations for the VENUS-2 MOX core benchmark study.

  8. Evaluation of neutron thermalization parameters and benchmark reactor calculations using a synthetic scattering function for molecular gases

    International Nuclear Information System (INIS)

    Gillete, V.H.; Patino, N.E.; Granada, J.E.; Mayer, R.E.

    1988-01-01

    Using a synthetic scattering function which describes the interaction of neutrons with molecular gases, we provide analytical expressions for the zero- and first-order scattering kernels, σ0(E0→E) and σ1(E0→E), and the total cross section σ0(E0). Based on these quantities, we have performed calculations of thermalization parameters and transport coefficients for H2O, D2O, C6H6 and (CH2)n at room temperature. Comparison of these values with available experimental data and other calculations is satisfactory. We also generated nuclear data libraries for H2O with 47 thermal groups at 300 K and performed some benchmark calculations (235U, 239Pu, PWR cell and typical APWR cell); the resulting reactivities are compared with experimental data and ENDF/B-IV calculations. (author)

  9. Results of neutronic benchmark analysis for a high temperature reactor of the GT-MHR type - HTR2008-58107

    International Nuclear Information System (INIS)

    Boyarinov, V. F.; Bryzgalov, V. I.; Davidenko, V. D.; Fomichenko, P. A.; Glushkov, E. S.; Gomin, E. A.; Gurevich, M. I.; Kodochigov, N. G.; Marova, E. V.; Mitenkova, E. F.; Novikov, N. V.; Osipov, S. L.; Sukharev, Y. P.; Tsibulsky, V. F.; Yudkevich, M. S.

    2008-01-01

    The paper presents a description of the benchmark cases, the achieved results, and an analysis of possible reasons for the differences between calculation results obtained by various neutronic codes. A comparative analysis is presented of the benchmark results obtained with reference and design codes by Russian specialists (WIMS-D, JAR-HTGR, UNK, MCU, MCNP5-MONTEBURNS1.0-ORIGEN2.0), by French specialists (APOLLO2, TRIPOLI4 codes), and by Korean specialists (HELIOS, MASTER, MCNP5 codes). The analysis of possible reasons for deviations was carried out with the aim of decreasing the uncertainties in calculated characteristics. This additional investigation was conducted with the use of 2D models of a fuel assembly cell and a reactor plane section. (authors)

  10. OECD/NEA expert group on uncertainty analysis for criticality safety assessment: Results of benchmark on sensitivity calculation (phase III)

    Energy Technology Data Exchange (ETDEWEB)

    Ivanova, T.; Laville, C. [Institut de Radioprotection et de Surete Nucleaire IRSN, BP 17, 92262 Fontenay aux Roses (France); Dyrda, J. [Atomic Weapons Establishment AWE, Aldermaston, Reading, RG7 4PR (United Kingdom); Mennerdahl, D. [E Mennerdahl Systems EMS, Starvaegen 12, 18357 Taeby (Sweden); Golovko, Y.; Raskach, K.; Tsiboulia, A. [Inst. for Physics and Power Engineering IPPE, 1, Bondarenko sq., 249033 Obninsk (Russian Federation); Lee, G. S.; Woo, S. W. [Korea Inst. of Nuclear Safety KINS, 62 Gwahak-ro, Yuseong-gu, Daejeon 305-338 (Korea, Republic of); Bidaud, A.; Sabouri, P. [Laboratoire de Physique Subatomique et de Cosmologie LPSC, CNRS-IN2P3/UJF/INPG, Grenoble (France); Patel, A. [U.S. Nuclear Regulatory Commission (NRC), Washington, DC 20555-0001 (United States); Bledsoe, K.; Rearden, B. [Oak Ridge National Laboratory ORNL, M.S. 6170, P.O. Box 2008, Oak Ridge, TN 37831 (United States); Gulliford, J.; Michel-Sendis, F. [OECD/NEA, 12, Bd des Iles, 92130 Issy-les-Moulineaux (France)

    2012-07-01

    The sensitivities of the k_eff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods. (authors)
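The quantity compared in this benchmark, the relative sensitivity S = (σ/k)(∂k/∂σ), can be estimated by direct perturbation of the cross section. A minimal central-difference sketch with a hypothetical solver `k_of_sigma` (the toy one-group model below is illustrative, not any participant's code):

```python
def sensitivity(k_of_sigma, sigma, rel_step=0.01):
    # Central-difference estimate of S = (sigma/k) * dk/dsigma,
    # using a hypothetical eigenvalue solver k_of_sigma
    h = rel_step * sigma
    k_plus, k_minus = k_of_sigma(sigma + h), k_of_sigma(sigma - h)
    k0 = k_of_sigma(sigma)
    return (sigma / k0) * (k_plus - k_minus) / (2.0 * h)

# Toy one-group infinite-medium model: k = nu*sigma_f / sigma_a (illustrative only);
# since k is proportional to 1/sigma_a, the analytic sensitivity is exactly -1
nu_sigma_f = 0.0070
k_toy = lambda sigma_a: nu_sigma_f / sigma_a
S = sensitivity(k_toy, sigma=0.0065)  # ~ -1
```

Production tools such as TSUNAMI compute these coefficients with adjoint-based perturbation theory rather than repeated direct solves, but the direct estimate is a convenient cross-check.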

  11. Benchmarking Density Functional Theory Approaches for the Description of Symmetry-Breaking in Long Polymethine Dyes

    KAUST Repository

    Gieseking, Rebecca L.

    2016-04-25

    Long polymethines are well-known experimentally to symmetry-break, which dramatically modifies their linear and nonlinear optical properties. Computational modeling could be very useful to provide insight into the symmetry-breaking process, which is not readily available experimentally; however, accurately predicting the crossover point from symmetric to symmetry-broken structures has proven challenging. Here, we benchmark the accuracy of several DFT approaches relative to CCSD(T) geometries. In particular, we compare analogous hybrid and long-range corrected (LRC) functionals to clearly show the influence of the functional exchange term. Although both hybrid and LRC functionals can be tuned to reproduce the CCSD(T) geometries, the LRC functionals are better performing at reproducing the geometry evolution with chain length and provide a finite upper limit for the gas-phase crossover point; these methods also provide good agreement with the experimental crossover points for more complex polymethines in polar solvents. Using an approach based on LRC functionals, a reduction in the crossover length is found with increasing medium dielectric constant, which is related to localization of the excess charge on the end groups. Symmetry-breaking is associated with the appearance of an imaginary frequency of b2 symmetry involving a large change in the degree of bond-length alternation. Examination of the IR spectra shows that short, isolated streptocyanines have a mode at ~1200 cm-1 involving a large change in bond-length alternation; as the polymethine length or the medium dielectric increases, the frequency of this mode decreases before becoming imaginary at the crossover point.
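The degree of bond-length alternation (BLA) that diagnoses symmetry-breaking can be computed directly from a backbone geometry as a sign-alternating average of consecutive bond-length differences. A minimal sketch (the bond lengths below are hypothetical, for illustration only):

```python
def bond_length_alternation(bond_lengths):
    # BLA: sign-alternating mean of consecutive bond-length differences along
    # the backbone; zero for a perfectly bond-equalized (symmetric) chain
    diffs = [bond_lengths[i + 1] - bond_lengths[i]
             for i in range(len(bond_lengths) - 1)]
    return sum(d * (-1) ** i for i, d in enumerate(diffs)) / len(diffs)

symmetric = [1.40, 1.40, 1.40, 1.40]        # cyanine-like chain, BLA = 0
broken = [1.36, 1.44, 1.36, 1.44, 1.36]     # alternating chain, |BLA| = 0.08 Angstrom
```

A BLA near zero corresponds to the symmetric cyanine limit; a growing |BLA| past the crossover point signals the symmetry-broken, polyene-like structure.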

  12. Current status and results of the PBMR -Pebble Box- benchmark within the framework of the IAEA CRP5 - 341

    International Nuclear Information System (INIS)

    Reitsma, F.; Tyobeka, B.

    2010-01-01

    The verification and validation of computer codes used in the analysis of high temperature gas cooled pebble bed reactor systems has not been an easy goal to achieve. A limited amount of tests and operating reactor measurements are available. Code-to-code comparisons for realistic pebble bed reactor designs often exhibit differences that are difficult to explain and are often blamed on the complexity of the core models or the variety of analysis methods and cross section data sets employed. For this reason, within the framework of the IAEA CRP5, the 'Pebble Box' benchmark was formulated as a simple way to compare various treatments of neutronics phenomena. The problem comprises six test cases which were designed to investigate the treatments and effects of leakage and heterogeneity. This paper presents the preliminary results of the benchmark exercise as received during the CRP and suggests possible future steps towards the resolution of discrepancies between the results. Although few participants took part in the benchmarking exercise, the results presented here show that there is still a need for further evaluation and in-depth understanding in order to build the confidence that all the different methods, codes and cross-section data sets have the capability to handle the various neutronics effects for such systems. (authors)

  13. The calculational VVER burnup Credit Benchmark No.3 results with the ENDF/B-VI rev.5 (1999)

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez Gual, Maritza [Centro de Tecnologia Nuclear, La Habana (Cuba). E-mail: mrgual@ctn.isctn.edu.cu

    2000-07-01

    The purpose of this paper is to present the results of the CB3 phase of the VVER calculational benchmark obtained with the recently evaluated nuclear data library ENDF/B-VI Rev. 5 (1999). These results are compared with those obtained by the other participants in the calculations (Czech Republic, Finland, Hungary, Slovakia, Spain and the United Kingdom). The CB3 phase of the VVER calculational benchmark is similar to Phase II-A of the OECD/NEA/INSC BUC Working Group benchmark for PWR. The cases without a burnup profile (BP) were performed with the WIMS/D-4 code. The rest of the cases were carried out with the DOTIII discrete ordinates code. The neutron library used was ENDF/B-VI Rev. 5 (1999). WIMS/D-4 (69 groups) is used to collapse cross sections from ENDF/B-VI Rev. 5 (1999) to a 36-group working library for 2-D calculations. This work also comprises the results of CB1 (also obtained with ENDF/B-VI Rev. 5 (1999)) and of CB3 for the cases with a burnup of 30 MWd/TU and cooling times of 1 and 5 years, and for the case with a burnup of 40 MWd/TU and a cooling time of 1 year. (author)
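The group collapse mentioned above (69 fine groups condensed to a 36-group working library) is a flux-weighted average, σ_G = Σ_g σ_g φ_g / Σ_g φ_g over the fine groups g belonging to coarse group G. A minimal sketch with made-up numbers (a 4-to-2 collapse, not the actual WIMS/D-4 procedure):

```python
import numpy as np

def collapse(sigma_fine, flux_fine, group_map):
    # Flux-weighted condensation: sigma_G = sum_g(sigma_g * phi_g) / sum_g(phi_g)
    # over the fine groups g assigned to coarse group G by group_map
    n_coarse = max(group_map) + 1
    sigma_coarse = np.zeros(n_coarse)
    for G in range(n_coarse):
        g = [i for i, m in enumerate(group_map) if m == G]
        sigma_coarse[G] = np.dot(sigma_fine[g], flux_fine[g]) / np.sum(flux_fine[g])
    return sigma_coarse

# Illustrative 4-group data collapsed to 2 groups (numbers are made up)
sig = np.array([10.0, 8.0, 2.0, 1.0])   # fine-group cross sections
phi = np.array([1.0, 3.0, 2.0, 2.0])    # fine-group flux weights
coarse = collapse(sig, phi, [0, 0, 1, 1])   # -> [8.5, 1.5]
```

This weighting preserves the fine-group reaction rate Σ σ_g φ_g within each coarse group, which is the defining requirement of the condensation.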

  14. Monitoring the referral system through benchmarking in rural Niger: an evaluation of the functional relation between health centres and the district hospital

    Directory of Open Access Journals (Sweden)

    Miyé Hamidou

    2006-04-01

    Abstract. Background: The main objective of this study is to establish a benchmark for referral rates in rural Niger so as to allow interpretation of routine referral data to assess the performance of the referral system in Niger. Methods: Strict and controlled application of existing clinical decision trees in a sample of rural health centres allowed the estimation of the corresponding need for, and characteristics of, curative referrals in rural Niger. Compliance with referral was monitored as well. Need was matched against actual referrals in 11 rural districts. The referral patterns were registered so as to get an idea of the types of pathology referred. Results: The referral rate benchmark was set at 2.5% of patients consulting at the health centre for curative reasons. Niger's rural districts have a referral rate of less than half this benchmark. Acceptability of referrals is low for the population and adds to the deficiencies of the referral system in Niger. Mortality because of under-referral is highest among young children. Conclusion: Referral patterns show that the present programme approach to delivering health care leaves a large amount of unmet need for which only comprehensive first- and second-line health services can provide a proper answer. On the other hand, the benchmark suggests that well-functioning health centres can take care of the vast majority of problems patients present with.
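The monitoring comparison described above is simple arithmetic: a district's referral rate, expressed as a percentage of curative consultations, is held against the 2.5% benchmark. A sketch with hypothetical district numbers (not the study's data):

```python
def referral_rate(referrals, curative_consultations):
    # Referral rate as a percentage of curative consultations at the health centre
    return 100.0 * referrals / curative_consultations

BENCHMARK = 2.5  # percent of curative consultations, per the study

# Hypothetical district: 120 referrals out of 11,000 curative consultations
rate = referral_rate(120, 11000)          # ~1.09 %
under_referring = rate < BENCHMARK / 2    # the study found rates below half the benchmark
```

Routine district data can thus be screened automatically: any district whose rate falls well below the benchmark warrants investigation of referral compliance and acceptability.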

  15. Comparing the Floating Point Systems, Inc. AP-190L to representative scientific computers: some benchmark results

    International Nuclear Information System (INIS)

    Brengle, T.A.; Maron, N.

    1980-01-01

    Results are presented of comparative timing tests made by running a typical FORTRAN physics simulation code on the following machines: DEC PDP-10 with KI processor; DEC PDP-10, KI processor, and FPS AP-190L; CDC 7600; and CRAY-1. Factors such as DMA overhead, code size for the AP-190L, and the relative utilization of floating point functional units for the different machines are discussed. 1 table
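Machine comparisons of this kind reduce to measuring wall-clock times for the same code on each machine and expressing them as ratios against a reference. A minimal harness sketch (the runtimes below are invented placeholders, not the paper's measured results):

```python
import time

def time_kernel(fn, repeats=5):
    # Best-of-N wall-clock timing to reduce scheduling noise
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - t0)
    return best

def speedups(timings, reference):
    # Express each machine's runtime as a speedup relative to a reference entry
    return {name: timings[reference] / t for name, t in timings.items()}

# Time a toy kernel on the current machine
t_local = time_kernel(lambda: sum(x * x for x in range(100000)))

# Hypothetical runtimes (seconds) for the same job on several machines
timings = {"PDP-10/KI": 420.0, "PDP-10+AP-190L": 95.0, "CDC 7600": 30.0, "CRAY-1": 8.0}
rel = speedups(timings, reference="PDP-10/KI")   # e.g. CRAY-1 -> 52.5x
```

Taking the best of several repeats, rather than the mean, is a common way to approximate the noise-free runtime of a deterministic kernel.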

  16. Computational Benchmarking for Ultrafast Electron Dynamics: Wave Function Methods vs Density Functional Theory.

    Science.gov (United States)

    Oliveira, Micael J T; Mignolet, Benoit; Kus, Tomasz; Papadopoulos, Theodoros A; Remacle, F; Verstraete, Matthieu J

    2015-05-12

    Attosecond electron dynamics in small- and medium-sized molecules, induced by an ultrashort strong optical pulse, is studied computationally for a frozen nuclear geometry. The importance of exchange and correlation effects on the nonequilibrium electron dynamics induced by the interaction of the molecule with the strong optical pulse is analyzed by comparing the solution of the time-dependent Schrödinger equation based on the correlated field-free stationary electronic states computed with the equation-of-motion coupled cluster singles and doubles and the complete active space multi-configurational self-consistent field methodologies on one hand, and various functionals in real-time time-dependent density functional theory (TDDFT) on the other. We aim to evaluate the performance of the latter approach, which is very widely used for nonlinear absorption processes and whose computational cost has a more favorable scaling with the system size. We focus on LiH as a toy model for a nontrivial molecule and show that our conclusions carry over to larger molecules, exemplified by ABCU (C10H19N). The molecules are probed with IR and UV pulses whose intensities are not strong enough to significantly ionize the system. By comparing the evolution of the time-dependent field-free electronic dipole moment, as well as its Fourier power spectrum, we show that TDDFT performs qualitatively well in most cases. Contrary to previous studies, we find almost no changes in the TDDFT excitation energies when excited states are populated. Transitions between states of different symmetries are induced using pulses polarized in different directions. We observe that the performance of TDDFT does not depend on the symmetry of the states involved in the transition.

  17. Benchmarking of London Dispersion-Accounting Density Functional Theory Methods on Very Large Molecular Complexes.

    Science.gov (United States)

    Risthaus, Tobias; Grimme, Stefan

    2013-03-12

    A new test set (S12L) containing 12 supramolecular noncovalently bound complexes is presented and used to evaluate seven different methods to account for dispersion in DFT (DFT-D3, DFT-D2, DFT-NL, XDM, dDsC, TS-vdW, M06-L) at different basis set levels against experimental, back-corrected reference energies. This allows conclusions about the performance of each method in an explorative research setting on "real-life" problems. Most DFT methods show satisfactory performance but, due to the large size of the complexes, almost always require an explicit correction for the nonadditive Axilrod-Teller-Muto three-body dispersion interaction to get accurate results. The necessity of using a method capable of accounting for dispersion is clearly demonstrated in that the two-body dispersion contributions are on the order of 20-150% of the total interaction energy. MP2 and some variants thereof are shown to be insufficient for this while a few tested D3-corrected semiempirical MO methods perform reasonably well. Overall, we suggest the use of this benchmark set as a "sanity check" against overfitting to too small molecular cases.
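The nonadditive Axilrod-Teller-Muto (ATM) three-body term highlighted above has the closed form E = C9 (1 + 3 cos a cos b cos c) / (r_ab r_bc r_ca)^3, with a, b, c the interior angles of the triangle formed by the three atoms. A minimal sketch (unit C9 coefficient and unit distances, illustrative only):

```python
def atm_energy(c9, r12, r13, r23):
    # Axilrod-Teller-Muto triple-dipole dispersion energy for one atom triple:
    # E = C9 * (1 + 3*cos(a)*cos(b)*cos(c)) / (r12*r13*r23)**3,
    # with the angle cosines obtained from the law of cosines
    cos_a = (r12**2 + r13**2 - r23**2) / (2 * r12 * r13)
    cos_b = (r12**2 + r23**2 - r13**2) / (2 * r12 * r23)
    cos_c = (r13**2 + r23**2 - r12**2) / (2 * r13 * r23)
    return c9 * (1.0 + 3.0 * cos_a * cos_b * cos_c) / (r12 * r13 * r23) ** 3

# Equilateral triple: each angle is 60 deg, so the angular factor is 1 + 3/8 = 11/8
e = atm_energy(c9=1.0, r12=1.0, r13=1.0, r23=1.0)  # = 1.375
```

In a full correction the term is summed over all atom triples with damping at short range; the r^-9 decay explains why it only becomes significant for large complexes like those in S12L.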

  18. Comparative analysis of results between CASMO, MCNP and Serpent for a suite of Benchmark problems on BWR reactors

    International Nuclear Information System (INIS)

    Xolocostli M, J. V.; Vargas E, S.; Gomez T, A. M.; Reyes F, M. del C.; Del Valle G, E.

    2014-10-01

    In this paper, a comparison is made between the CASMO-4, MCNP6 and Serpent codes in analyzing the suite of benchmark problems for BWR-type reactors. The benchmark problem consists of two different geometries: a fuel pin cell and a BWR-type assembly. To facilitate the study of reactor physics, the nuclear characteristics of the fuel pin are provided in detail, such as the burnup dependence, the reactivity of selected nuclides, etc. With respect to the fuel assembly, the presented results concern the infinite multiplication factor for different burnup steps and different void conditions. The analysis of this set of benchmark problems provides comprehensive test problems for the next generation of BWR fuels with high extended burnup. It is important to note that the purpose of this comparison is to validate the methodologies used in modeling different operating conditions, as would be the case for other BWR assemblies. The results will lie within a range with some uncertainty, regardless of the code that is used. The Escuela Superior de Fisica y Matematicas of the Instituto Politecnico Nacional (IPN, Mexico) has accumulated some experience in using Serpent, due to the potential of this code over commercial codes such as CASMO and MCNP. The results obtained for the infinite multiplication factor are encouraging and motivate continuing these studies with the generation of the cross sections (XS) of a core, so that in a next step a respective nuclear data library can be constructed for use by the codes developed as part of the Mexican nuclear reactor analysis platform AZTLAN. (Author)

  19. RELAP5-3D Results for Phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom

    2012-06-01

    The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. As part of the verification and validation activities, steady state results have been obtained for Exercise 2 of Phase I of the newly-defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3 using a two-group cross-section library and the RELAP5-3D model developed for Exercise 2.

  20. Benchmarking Parameter-free AMaLGaM on Functions With and Without Noise

    NARCIS (Netherlands)

    P.A.N. Bosman (Peter); J. Grahl; D. Thierens (Dirk)

    2013-01-01

    We describe a parameter-free estimation-of-distribution algorithm (EDA) called the adapted maximum-likelihood Gaussian model iterated density-estimation evolutionary algorithm (AMaLGaM-IDEA, or AMaLGaM for short) for numerical optimization. AMaLGaM is benchmarked within the 2009 black

  1. Analyses and results of the OECD/NEA WPNCS EGUNF benchmark phase II. Technical report

    Energy Technology Data Exchange (ETDEWEB)

    Hannstein, Volker; Sommer, Fabian

    2017-05-15

    The report summarizes the studies performed and the results obtained in the frame of the Phase II benchmarks of the Expert Group on Used Nuclear Fuel (EGUNF) of the Working Party on Nuclear Criticality Safety (WPNCS) of the Nuclear Energy Agency (NEA) of the Organization for Economic Co-operation and Development (OECD). The studies specified within the benchmarks were carried out in full. The scope of the benchmarks was the comparison, across several computer codes and cross-section libraries used by different international working groups and institutions, of a generic BWR fuel element with gadolinium-containing fuel rods. The computational model used allows an evaluation of the accuracy of fuel rod inventory calculations and of their respective influence on BWR burnup credit calculations.

  2. Benchmarking Reactor Systems Studies by Comparison of EU and Japanese System Code Results for Different DEMO Concepts

    Energy Technology Data Exchange (ETDEWEB)

    Kemp, R.; Ward, D.J., E-mail: richard.kemp@ccfe.ac.uk [EURATOM/CCFE Association, Culham Centre for Fusion Energy, Abingdon (United Kingdom); Nakamura, M.; Tobita, K. [Japan Atomic Energy Agency, Rokkasho (Japan); Federici, G. [EFDA Garching, Max Plank Institut fur Plasmaphysik, Garching (Germany)

    2012-09-15

    Full text: Recent systems studies work within the Broader Approach framework has focussed on benchmarking the EU systems code PROCESS against the Japanese code TPC for conceptual DEMO designs. This paper describes benchmarking work for a conservative, pulsed DEMO and an advanced, steady-state, high-bootstrap-fraction DEMO. The resulting former machine is an R_0 = 10 m, a = 2.5 m, β_N < 2.0 device with no enhancement in energy confinement over IPB98. The latter machine is smaller (R_0 = 8 m, a = 2.7 m), with β_N = 3.0, enhanced confinement, and high bootstrap fraction f_BS = 0.8. These options were chosen to test the codes across a wide range of parameter space. While generally in good agreement, some of the code outputs differ. In particular, differences have been identified in the impurity radiation models and flux swing calculations. The global effects of these differences are described and approaches to identifying the best models, including future experiments, are discussed. Results of varying some of the assumptions underlying the modelling are also presented, demonstrating the sensitivity of the solutions to technological limitations and providing guidance for where further research could be focussed. (author)

  3. The InterFrost benchmark of Thermo-Hydraulic codes for cold regions hydrology - first inter-comparison results

    Science.gov (United States)

    Grenier, Christophe; Roux, Nicolas; Anbergen, Hauke; Collier, Nathaniel; Costard, Francois; Ferry, Michel; Frampton, Andrew; Frederick, Jennifer; Holmen, Johan; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Orgogozo, Laurent; Rivière, Agnès; Rühaak, Wolfram; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik

    2015-04-01

    The impacts of climate change in boreal regions have received considerable attention recently due to the warming trends that have been experienced in recent decades and are expected to intensify in the future. Large portions of these regions, corresponding to permafrost areas, are covered by water bodies (lakes, rivers) that interact with the surrounding permafrost. For example, the thermal state of the surrounding soil influences the energy and water budget of the surface water bodies. Also, these water bodies generate taliks (unfrozen zones below) that disturb the thermal regime of the permafrost and may play a key role in the context of climate change. Recent field studies and modeling exercises indicate that a fully coupled 2D or 3D Thermo-Hydraulic (TH) approach is required to understand and model the past and future evolution of landscapes, rivers, lakes and associated groundwater systems in a changing climate. However, there is presently a paucity of 3D numerical studies of permafrost thaw and associated hydrological changes, which can be partly attributed to the difficulty of verifying multi-dimensional results produced by numerical models. Numerical approaches can only be validated against analytical solutions for a purely thermal 1D equation with phase change (e.g. Neumann, Lunardini). When it comes to the coupled TH system (coupling two highly non-linear equations), the only possible approach is to compare the results from different codes on provided test cases and/or to have controlled experiments for validation. Such inter-code comparisons can propel discussions on how to improve code performance. A benchmark exercise was initiated in 2014 with a kick-off meeting in Paris in November. Participants from the USA, Canada, Germany, Sweden and France convened, representing altogether 13 simulation codes. The benchmark exercises consist of several test cases inspired by the existing literature (e.g. McKenzie et al., 2007) as well as new ones. They

  4. Radcalc for windows benchmark study: A comparison of software results with Rocky Flats hydrogen gas generation data

    International Nuclear Information System (INIS)

    MCFADDEN, J.G.

    1999-01-01

    Radcalc for Windows Version 2.01 is a user-friendly software program developed by Waste Management Federal Services, Inc., Northwest Operations for the U.S. Department of Energy (McFadden et al. 1998). It is used for transportation and packaging applications in the shipment of radioactive waste materials. Among its applications are the classification of waste per U.S. Department of Transportation regulations, the calculation of decay heat and daughter products, and the calculation of the radiolytic production of hydrogen gas. The Radcalc program has been extensively tested and validated (Green et al. 1995, McFadden et al. 1998) by comparison of each Radcalc algorithm to hand calculations. An opportunity to benchmark Radcalc hydrogen gas generation calculations against experimental data arose when the Rocky Flats Environmental Technology Site (RFETS) Residue Stabilization Program collected hydrogen gas generation data to determine compliance with requirements for shipment of waste in the TRUPACT-II (Schierloh 1998). The residue/waste drums tested at RFETS contain contaminated, solid, inorganic materials in polyethylene bags. The contamination is predominantly due to plutonium and americium isotopes. The information provided by Schierloh (1998) of RFETS includes decay heat, hydrogen gas generation rates, calculated Geff values, and waste material type, making the experimental data ideal for benchmarking Radcalc. The following sections discuss the RFETS data and the Radcalc cases modeled with the data. Results are tabulated and also provided graphically
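The quantities the abstract names (decay heat and an effective G-value) are exactly what the textbook radiolysis relation needs. The sketch below is an assumption-laden illustration of that relation, not Radcalc's internal algorithm: G is defined as molecules produced per 100 eV of absorbed energy, and the absorbed fraction of the decay energy is a user-supplied assumption.

```python
# Hypothetical sketch (not Radcalc's implementation) of radiolytic H2
# generation from decay heat and an effective G-value.
EV_TO_J = 1.602176634e-19   # joules per electronvolt
AVOGADRO = 6.02214076e23    # molecules per mole

def h2_generation_rate(decay_heat_w, g_eff, absorbed_fraction=1.0):
    """Return the H2 generation rate in mol/s.

    decay_heat_w      -- decay heat deposited in the waste matrix [W]
    g_eff             -- effective G-value [molecules H2 per 100 eV absorbed]
    absorbed_fraction -- assumed fraction of the decay energy absorbed by the
                         hydrogenous material (1.0 = all of it)
    """
    ev_per_second = decay_heat_w * absorbed_fraction / EV_TO_J
    molecules_per_second = ev_per_second * g_eff / 100.0
    return molecules_per_second / AVOGADRO
```

With 1 W of fully absorbed decay heat and Geff = 1, this yields roughly 1e-7 mol H2 per second; the RFETS comparison in the paper works with measured rates and back-calculated Geff values rather than this idealized formula.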

  5. RESULTS OF ANALYSIS OF BENCHMARKING METHODS OF INNOVATION SYSTEMS ASSESSMENT IN ACCORDANCE WITH AIMS OF SUSTAINABLE DEVELOPMENT OF SOCIETY

    Directory of Open Access Journals (Sweden)

    A. Vylegzhanina

    2016-01-01

    Full Text Available In this work, we present the results of a comparative analysis of international innovation-system rating indexes with respect to their compliance with the purposes of sustainable development. The purpose of this research is to define requirements for benchmarking methods that assess national or regional innovation systems, and to compare those methods based on the assumption that an innovation system should be aligned with the sustainable development concept. Analysis of the goal sets and concepts underlying the observed international composite innovation indexes, together with a comparison of their metrics and calculation techniques, allowed us to reveal the opportunities and limitations of using these methods within the sustainable development framework. We formulated targets of innovation development on the basis of the innovation priorities of sustainable socio-economic development. By comparing the indexes against these targets, we identified two methods of assessing innovation systems that are most closely connected with the goals of sustainable development. Nevertheless, no benchmarking method today sufficiently meets the need to assess innovation systems in compliance with the sustainable development concept. We suggest practical directions for developing methods that assess innovation systems in compliance with the goals of sustainable societal development.

  6. Benchmarks for GADRAS performance validation

    International Nuclear Information System (INIS)

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L. Jr.

    2009-01-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  7. Optical rotation calculated with time-dependent density functional theory: the OR45 benchmark.

    Science.gov (United States)

    Srebro, Monika; Govind, Niranjan; de Jong, Wibe A; Autschbach, Jochen

    2011-10-13

    Time-dependent density functional theory (TDDFT) computations are performed for 42 organic molecules and three transition metal complexes, with experimental molar optical rotations ranging from 2 to 2 × 10^4 deg cm^2 dmol^-1. The performances of the global hybrid functionals B3LYP, PBE0, and BHLYP, and of the range-separated functionals CAM-B3LYP and LC-PBE0 (the latter being fully long-range corrected), are investigated. The performance of different basis sets is studied. When compared to liquid-phase experimental data, the range-separated functionals do not, on average, perform better than B3LYP and PBE0. Median relative deviations between calculations and experiment range from 25 to 29%. A basis set recently proposed for optical rotation calculations (LPol-ds) on average does not give improved results compared to aug-cc-pVDZ in TDDFT calculations with B3LYP. Individual cases are discussed in some detail, among them norbornenone, for which the LC-PBE0 functional produced an optical rotation that is close to available data from coupled-cluster calculations, but significantly smaller in magnitude than the liquid-phase experimental value. Range-separated functionals and BHLYP perform well for helicenes and helicene derivatives. Metal complexes pose a challenge to first-principles calculations of optical rotation.

  8. OECD/NEA source convergence benchmark program: overview and summary of results

    International Nuclear Information System (INIS)

    Blomquist, Roger; Nouri, Ali; Armishaw, Malcolm; Jacquet, Olivier; Naito, Yoshitaka; Miyoshi, Yoshinori; Yamamoto, Toshihiro

    2003-01-01

    This paper describes the work of the OECD Nuclear Energy Agency Expert Group on Source Convergence in Criticality Safety Analysis. A set of test problems is presented, some computational results are given, and the effects of source convergence difficulties are described

  9. OECD/NEA source convergence benchmark program. Overview and summary of results

    International Nuclear Information System (INIS)

    Blomquist, Roger; Nouri, Ali; Armishaw, Malcolm; Jacquet, Olivier; Naito, Yoshitaka; Miyoshi, Yoshinori; Yamamoto, Toshihiro

    2003-01-01

    This paper describes the work of the OECD Nuclear Energy Agency Expert Group on Source Convergence in Criticality Safety Analysis. A set of test problems is presented, some computational results are given, and the effects of source convergence difficulties are described. (author)

  10. Benchmarking Glucose Results through Automation: The 2009 Remote Automated Laboratory System Report

    Science.gov (United States)

    Anderson, Marcy; Zito, Denise; Kongable, Gail

    2010-01-01

    Background Hyperglycemia in the adult inpatient population remains a topic of intense study in U.S. hospitals. Most hospitals have established glycemic control programs but are unable to determine their impact. The 2009 Remote Automated Laboratory System (RALS) Report provides trends in glycemic control over 4 years to 576 U.S. hospitals to support their effort to manage inpatient hyperglycemia. Methods A proprietary software application feeds de-identified patient point-of-care blood glucose (POC-BG) data from the Medical Automation Systems RALS-Plus data management system to a central server. Analyses include the number of tests and the mean and median BG results for intensive care unit (ICU), non-ICU, and each hospital compared to the aggregate of the other hospitals. Results More than 175 million BG results were extracted from 2006–2009; 25% were from the ICU. Mean range of BG results for all inpatients in 2006, 2007, 2008, and 2009 was 142.2–201.9, 145.6–201.2, 140.6–205.7, and 140.7–202.4 mg/dl, respectively. The range for ICU patients was 128–226.5, 119.5–219.8, 121.6–226.0, and 121.1–217 mg/dl, respectively. The range for non-ICU patients was 143.4–195.5, 148.6–199.8, 145.2–201.9, and 140.7–203.6 mg/dl, respectively. Hyperglycemia rates of >180 mg/dl in 2008 and 2009 were examined, as were hypoglycemia rates. Automated POC-BG data management software can assist in this effort. PMID:21129348
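The summary statistics described in the Methods section (test counts, mean/median BG, and rates above a hyperglycemia threshold) are straightforward to compute. The following is an illustrative sketch only, not the RALS-Plus software; the 180 mg/dl threshold comes from the abstract, and the function name and data layout are assumptions.

```python
from statistics import mean, median

def summarize_bg(results_mg_dl, hyper_threshold=180.0):
    """Summarize a list of POC-BG results (mg/dl) the way the report describes:
    returns (n tests, mean, median, fraction above the hyperglycemia threshold)."""
    n = len(results_mg_dl)
    hyper = sum(1 for r in results_mg_dl if r > hyper_threshold)
    return n, mean(results_mg_dl), median(results_mg_dl), hyper / n
```

In practice such summaries would be computed separately for ICU and non-ICU populations, per hospital, so each facility can be benchmarked against the aggregate of the others.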

  11. Results of a survey on accident and safety analysis codes, benchmarks, verification and validation methods

    International Nuclear Information System (INIS)

    Lee, A.G.; Wilkin, G.B.

    1996-03-01

    During the 'Workshop on R and D needs' at the 3rd Meeting of the International Group on Research Reactors (IGORR-III), the participants agreed that it would be useful to compile a survey of the computer codes and nuclear data libraries used in accident and safety analyses for research reactors and the methods various organizations use to verify and validate their codes and libraries. Five organizations, Atomic Energy of Canada Limited (AECL, Canada), China Institute of Atomic Energy (CIAE, People's Republic of China), Japan Atomic Energy Research Institute (JAERI, Japan), Oak Ridge National Laboratory (ORNL, USA), and Siemens (Germany) responded to the survey. The results of the survey are compiled in this report. (author) 36 refs., 3 tabs

  12. Weak-field asymptotic theory of tunneling ionization: benchmark analytical results for two-electron atoms

    International Nuclear Information System (INIS)

    Trinh, Vinh H; Morishita, Toru; Tolstikhin, Oleg I

    2015-01-01

    The recently developed many-electron weak-field asymptotic theory of tunneling ionization of atoms and molecules in an external static electric field (Tolstikhin et al 2014, Phys. Rev. A 89, 013421) is extended to the first-order terms in the asymptotic expansion in field. To highlight the results, here we present a simple analytical formula giving the rate of tunneling ionization of two-electron atoms H − and He. Comparison with fully-correlated ab initio calculations available for these systems shows that the first-order theory works quantitatively in a wide range of fields up to the onset of over-the-barrier ionization and hence is expected to find numerous applications in strong-field physics. (fast track communication)

  13. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    Full Text Available The term benchmarking is encountered in the implementation of total quality management (TQM), termed holistic quality management in Indonesian, because benchmarking is a tool for finding ideas and learning from other libraries. Benchmarking is a systematic and continuous process of measuring and comparing an organization's business processes in order to obtain information that can help the organization improve its performance.

  14. Five- and six-electron harmonium atoms: Highly accurate electronic properties and their application to benchmarking of approximate 1-matrix functionals

    Science.gov (United States)

    Cioslowski, Jerzy; Strasburger, Krzysztof

    2018-04-01

    Electronic properties of several states of the five- and six-electron harmonium atoms are obtained from large-scale calculations employing explicitly correlated basis functions. The high accuracy of the computed energies (including their components), natural spinorbitals, and their occupation numbers makes them suitable for testing, calibration, and benchmarking of approximate formalisms of quantum chemistry and solid state physics. In the case of the five-electron species, the availability of the new data for a wide range of the confinement strengths ω allows for confirmation and generalization of the previously reached conclusions concerning the performance of the presently known approximations for the electron-electron repulsion energy in terms of the 1-matrix that are at the heart of density matrix functional theory (DMFT). On the other hand, the properties of the three low-lying states of the six-electron harmonium atom, computed at ω = 500 and ω = 1000, uncover deficiencies of the 1-matrix functionals not revealed by previous studies. In general, the previously published assessment that the present implementations of DMFT are of poor accuracy is found to hold. Extending the present work to harmonically confined systems with even more electrons is most likely counterproductive, as the steep increase in computational cost required to maintain sufficient accuracy of the calculated properties is not expected to be matched by the benefits of additional information gathered from the resulting benchmarks.

  15. Benchmarking MELCOR 1.8.2 for ITER Against Recent EVITA Results

    Energy Technology Data Exchange (ETDEWEB)

    Merrill, Brad J

    2007-11-01

    A version of MELCOR 1.8.2 modified for use in ITER Preliminary Safety Report analyses was validated against recent data from the EVITA facility located in Cadarache, France. EVITA Test Series 7 was used for this study to verify MELCOR's ability to predict the pressures, temperatures, cryoplate ice mass, and vacuum vessel (VV) condensate mass for test conditions in EVITA that include injections of steam, nitrogen, and water into the EVITA VV after the walls had been heated to 165 ºC and the cryoplate had been cooled to -193 ºC. In general, the ability of MELCOR to predict the VV pressure and wall temperatures for the steam-only and water-only injection tests was very good. Predicted ice layer masses were larger than reported for the EVITA cryoplate, in particular for the steam-only injection tests (~40% too high), and the predicted condensate masses were less than measured in EVITA. Both of these discrepancies can be explained by ice porosity. The modified MELCOR 1.8.2 overpredicts the EVITA VV pressure for the co-injection tests (e.g., steam plus nitrogen, or water plus nitrogen injections) by almost a factor of two. Based on parametric runs that were made by increasing the predicted cryoplate condensation rate, it is believed that this pressure overprediction is a result of an underpredicted cryoplate condensation rate. The particulars of this study are documented in this report, as well as conclusions about the impact this study has regarding the use of this version of MELCOR for consequence analyses for ITER safety reports.

  16. Benchmarking MELCOR 1.8.2 for ITER Against Recent EVITA Results

    International Nuclear Information System (INIS)

    Merrill, Brad J.

    2007-01-01

    A version of MELCOR 1.8.2 modified for use in ITER Preliminary Safety Report analyses was validated against recent data from the EVITA facility located in Cadarache, France. EVITA Test Series 7 was used for this study to verify MELCOR's ability to predict the pressures, temperatures, cryoplate ice mass, and vacuum vessel (VV) condensate mass for test conditions in EVITA that include injections of steam, nitrogen, and water into the EVITA VV after the walls had been heated to 165 C and the cryoplate had been cooled to -193 C. In general, the ability of MELCOR to predict the VV pressure and wall temperatures for the steam-only and water-only injection tests was very good. Predicted ice layer masses were larger than reported for the EVITA cryoplate, in particular for the steam-only injection tests (∼40% too high), and the predicted condensate masses were less than measured in EVITA. Both of these discrepancies can be explained by ice porosity. The modified MELCOR 1.8.2 overpredicts the EVITA VV pressure for the co-injection tests (e.g., steam plus nitrogen, or water plus nitrogen injections) by almost a factor of two. Based on parametric runs that were made by increasing the predicted cryoplate condensation rate, it is believed that this pressure overprediction is a result of an underpredicted cryoplate condensation rate. The particulars of this study are documented in this report, as well as conclusions about the impact this study has regarding the use of this version of MELCOR for consequence analyses for ITER safety reports

  17. Calculations of the IAEA-CRP-6 Benchmark Cases by Using the ABAQUS FE Model for a Comparison with the COPA Results

    International Nuclear Information System (INIS)

    Cho, Moon-Sung; Kim, Y. M.; Lee, Y. W.; Jeong, K. C.; Kim, Y. K.; Oh, S. C.

    2006-01-01

    The fundamental design for a gas-cooled reactor relies on an understanding of the behavior of a coated particle fuel. KAERI, which has been carrying out the Korean VHTR (Very High Temperature modular gas cooled Reactor) Project since 2004, is developing a fuel performance analysis code for a VHTR named COPA (COated Particle fuel Analysis). COPA predicts temperatures, stresses, a fission gas release and failure probabilities of a coated particle fuel in normal operating conditions. Validation of COPA in the process of its development is realized partly by participating in the benchmark section of the international CRP-6 program led by IAEA which provides comprehensive benchmark problems and analysis results obtained from the CRP-6 member countries. Apart from the validation effort through the CRP-6, a validation of COPA was attempted by comparing its benchmark results with the visco-elastic solutions obtained from the ABAQUS code calculations for the same CRP-6 TRISO coated particle benchmark problems involving creep, swelling, and pressure. The study shows the calculation results of the IAEA-CRP-6 benchmark cases 5 through 7 by using the ABAQUS FE model for a comparison with the COPA results

  18. Accelerating and benchmarking operating system functions in a “soft” system

    Directory of Open Access Journals (Sweden)

    Péter Molnár

    2015-06-01

    Full Text Available Today's computing technology provokes serious debates about whether operating system functions are implemented in the best possible way. The suggestions range from accelerating only certain functions, through providing complete real-time operating systems as coprocessors, to using hardware- and software-implemented threads simultaneously in the operating system. The performance gain in such systems depends on many factors, so its quantification is not a simple task at all. In addition to the subtleties of operating systems, the hardware accelerators in modern processors may considerably affect the results of such measurements. Reconfigurable systems offer a platform where even end users can carry out reliable and accurate measurements. The paper presents a hardware acceleration idea for speeding up a simple OS service, its verification setup and the measurement results.

  19. Solutions of the Two-Dimensional Hubbard Model: Benchmarks and Results from a Wide Range of Numerical Algorithms

    Directory of Open Access Journals (Sweden)

    2015-12-01

    Full Text Available Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.
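One of the few exactly solvable corners of this model, sometimes used as a sanity check before applying the heavy machinery listed above, is the two-site, two-electron Hubbard dimer. The sketch below is such a check (an assumption of this note, not one of the paper's algorithms): in the Sz = 0 singlet sector the Hamiltonian reduces to the 2x2 matrix [[U, -2t], [-2t, 0]] in the basis of the symmetric doubly-occupied combination and the spin singlet, giving the well-known ground-state energy (U - sqrt(U^2 + 16 t^2)) / 2.

```python
import math

def two_site_hubbard_ground(t, U):
    """Ground-state energy and per-site double occupancy of the two-site,
    two-electron Hubbard model, from the exact 2x2 singlet-sector matrix
    H = [[U, -2t], [-2t, 0]]."""
    e0 = 0.5 * (U - math.sqrt(U * U + 16.0 * t * t))
    # Eigenvector (a, b) with H v = e0 v:  U*a - 2t*b = e0*a  =>  b = (U - e0)*a/(2t)
    a = 1.0
    b = (U - e0) * a / (2.0 * t)
    norm = math.hypot(a, b)
    a, b = a / norm, b / norm
    # a^2 is the weight on doubly occupied configurations; divide by 2 sites
    docc = a * a / 2.0
    return e0, docc
```

At U = 0 this reproduces the non-interacting limits e0 = -2t and double occupancy 1/4, and for large U the energy approaches the Heisenberg value -4t^2/U, which is a useful cross-check for any of the many-body methods benchmarked in the paper when run on a trivially small lattice.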

  20. Benchmarking the cad-based attila discrete ordinates code with experimental data of fusion experiments and to the results of MCNP code in simulating ITER

    International Nuclear Information System (INIS)

    Youssef, M. Z.

    2007-01-01

    Attila is a newly developed finite element code based on Sn neutron, gamma, and charged particle transport in 3-D geometry in which unstructured tetrahedral meshes are generated to describe complex geometry that is based on CAD input (Solid Works, Pro/Engineer, etc). In the present work we benchmark its calculation accuracy by comparing its predictions to the measured data inside two experimental mock-ups bombarded with 14 MeV neutrons. The results are also compared to those based on MCNP calculations. The experimental mock-ups simulate parts of the International Thermonuclear Experimental Reactor (ITER) in-vessel components, namely: (1) the Tungsten mockup configuration (54.3 cm x 46.8 cm x 45 cm), and (2) the ITER shielding blanket followed by the SCM region (simulated by alternating layers of SS316 and copper). In the latter configuration, a high aspect ratio rectangular streaming channel was introduced (to simulate streaming paths between ITER blanket modules) which ends with a rectangular cavity. The experiments on these two fusion-oriented integral experiments were performed at the Fusion Neutron Generator (FNG) facility, Frascati, Italy. In addition, the nuclear performance of the ITER MCNP 'Benchmark' CAD model has been performed with Attila to compare its results to those obtained with the CAD-based MCNP approach developed by several ITER participants. The objective of this paper is to compare results based on two distinct 3-D calculation tools using the same nuclear data, FENDL2.1, and the same response functions of several reaction rates measured in ITER mock-ups, and to enhance confidence from the international neutronics community in the Attila code and how it can precisely quantify the nuclear field in large and complex systems, such as ITER. Attila has the advantage of providing a full flux mapping visualization everywhere in one run, where components subjected to excessive radiation levels and strong streaming paths can be identified. In addition, the

  1. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency....

  2. RUNE benchmarks

    DEFF Research Database (Denmark)

    Peña, Alfredo

    This report contains the description of a number of benchmarks with the purpose of evaluating flow models for near-shore wind resource estimation. The benchmarks are designed based on the comprehensive database of observations that the RUNE coastal experiment established from onshore lidar...

  3. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms; efficiency and comprehensive monotonicity characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...... in order to obtain a unique selection...

  4. A comparison of global optimization algorithms with standard benchmark functions and real-world applications using Energy Plus

    Energy Technology Data Exchange (ETDEWEB)

    Kamph, Jerome Henri; Robinson, Darren; Wetter, Michael

    2009-09-01

    There is an increasing interest in the use of computer algorithms to identify combinations of parameters which optimise the energy performance of buildings. For such problems, the objective function can be multi-modal and needs to be approximated numerically using building energy simulation programs. As these programs contain iterative solution algorithms, they introduce discontinuities in the numerical approximation to the objective function. Metaheuristics often work well for such problems, but their convergence to a global optimum cannot be established formally. Moreover, different algorithms tend to be suited to particular classes of optimization problems. To shed light on this issue we compared the performance of two metaheuristics, the hybrid CMA-ES/HDE and the hybrid PSO/HJ, in minimizing standard benchmark functions and real-world building energy optimization problems of varying complexity. From this we find that the CMA-ES/HDE performs well on more complex objective functions, but that the PSO/HJ more consistently identifies the global minimum for simpler objective functions. Both identified similar values in the objective functions arising from energy simulations, but with different combinations of model parameters. This may suggest that the objective function is multi-modal. The algorithms also correctly identified some non-intuitive parameter combinations that were caused by a simplified control sequence of the building energy system that does not represent actual practice, further reinforcing their utility.
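The paper's exact test set is not reproduced here, but the Rastrigin function is a representative example of the kind of multi-modal standard benchmark function such comparisons use, and a pure random search serves as the simplest possible baseline "metaheuristic". Both are illustrative sketches, not the CMA-ES/HDE or PSO/HJ hybrids compared in the paper.

```python
import math, random

def rastrigin(x):
    """Classic multi-modal benchmark; global minimum f(0,...,0) = 0."""
    return 10.0 * len(x) + sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi)
                               for xi in x)

def random_search(f, dim, bounds=(-5.12, 5.12), iters=20000, seed=0):
    """Baseline optimizer: sample uniformly in the box, keep the best point."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(iters):
        x = [rng.uniform(*bounds) for _ in range(dim)]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

A serious metaheuristic is judged by how much better it does than this baseline on functions with many local minima; the discontinuities introduced by iterative building-simulation solvers make such derivative-free approaches the natural choice.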

  5. Mean Abnormal Result Rate: Proof of Concept of a New Metric for Benchmarking Selectivity in Laboratory Test Ordering.

    Science.gov (United States)

    Naugler, Christopher T; Guo, Maggie

    2016-04-01

    There is a need to develop and validate new metrics to assess the appropriateness of laboratory test requests. The mean abnormal result rate (MARR) is a proposed measure of ordering selectivity, the premise being that higher mean abnormal rates represent more selective test ordering. As a validation of this metric, we compared the abnormal rate of lab tests with the number of tests ordered on the same requisition. We hypothesized that requisitions with larger numbers of requested tests represent less selective test ordering and therefore would have a lower overall abnormal rate. We examined 3,864,083 tests ordered on 451,895 requisitions and found that the MARR decreased from about 25% if one test was ordered to about 7% if nine or more tests were ordered, consistent with less selectivity when more tests were ordered. We then examined the MARR for community-based testing for 1,340 family physicians and found both a wide variation in MARR as well as an inverse relationship between the total tests ordered per year per physician and the physician-specific MARR. The proposed metric represents a new utilization metric for benchmarking relative selectivity of test orders among physicians. © American Society for Clinical Pathology, 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
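The MARR computation described above is simple to express in code. The sketch below is a hypothetical illustration (function names and the boolean-per-test data layout are assumptions, not the authors' implementation): the overall abnormal rate, and the abnormal rate grouped by the number of tests per requisition, which is the relationship the validation examined.

```python
def mean_abnormal_result_rate(requisitions):
    """requisitions: list of lists of booleans, one inner list per requisition,
    True where that test's result was abnormal. Returns the overall abnormal
    rate across all tests."""
    total = sum(len(r) for r in requisitions)
    abnormal = sum(sum(r) for r in requisitions)
    return abnormal / total if total else 0.0

def marr_by_tests_ordered(requisitions):
    """Group requisitions by how many tests they ordered and return
    {n_tests: abnormal rate}, mirroring the paper's selectivity analysis
    (their MARR fell from ~25% at one test to ~7% at nine or more)."""
    groups = {}
    for r in requisitions:
        groups.setdefault(len(r), []).append(r)
    return {n: mean_abnormal_result_rate(rs) for n, rs in groups.items()}
```

The same grouping applied per ordering physician rather than per requisition size yields the physician-specific MARR used for benchmarking.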

  6. Bent functions results and applications to cryptography

    CERN Document Server

    Tokareva, Natalia

    2015-01-01

    Bent Functions: Results and Applications to Cryptography offers a unique survey of the objects of discrete mathematics known as Boolean bent functions. As these maximal, nonlinear Boolean functions and their generalizations have many theoretical and practical applications in combinatorics, coding theory, and cryptography, the text provides a detailed survey of their main results, presenting a systematic overview of their generalizations and applications, and considering open problems in classification and systematization of bent functions. The text is appropriate for novices and advanced
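The defining property surveyed in the book can be checked computationally: a Boolean function on n variables (n even) is bent exactly when every coefficient of its Walsh-Hadamard spectrum has absolute value 2^(n/2), i.e. the function is at maximal distance from all affine functions. A small sketch of that standard test (a direct O(4^n) transform, fine for small n):

```python
def walsh_spectrum(f_table, n):
    """Walsh-Hadamard spectrum of a Boolean function given as a truth table
    (list of 0/1 of length 2**n): W(u) = sum_x (-1)^(f(x) XOR <u,x>)."""
    return [
        sum((-1) ** (f_table[x] ^ bin(u & x).count("1") % 2)
            for x in range(2 ** n))
        for u in range(2 ** n)
    ]

def is_bent(f_table, n):
    """Bent iff n is even and every Walsh coefficient has |W(u)| = 2**(n/2)."""
    flat = 2 ** (n // 2)
    return n % 2 == 0 and all(abs(w) == flat for w in walsh_spectrum(f_table, n))
```

For example, the two-variable AND function x1*x2 is bent, while any affine function is maximally non-bent since one Walsh coefficient reaches the extreme value 2^n.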

  7. Preliminary Results for the OECD/NEA Time Dependent Benchmark using Rattlesnake, Rattlesnake-IQS and TDKENO

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, Mark D. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mausolff, Zander [Univ. of Florida, Gainesville, FL (United States); Weems, Zach [Univ. of Florida, Gainesville, FL (United States); Popp, Dustin [Univ. of Florida, Gainesville, FL (United States); Smith, Kristin [Univ. of Florida, Gainesville, FL (United States); Shriver, Forrest [Univ. of Florida, Gainesville, FL (United States); Goluoglu, Sedat [Univ. of Florida, Gainesville, FL (United States); Prince, Zachary [Texas A & M Univ., College Station, TX (United States); Ragusa, Jean [Texas A & M Univ., College Station, TX (United States)

    2016-08-01

    One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data has shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable in a preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuels performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validation of the transient solution of Rattlesnake using other time dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considered both two-dimensional (2D) and 3D configurations for a total number of 26 different transients. All are negative reactivity insertions, typically returning to the critical state after some time.
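The benchmark's transients are full space-time transport problems, but the qualitative behavior of a negative reactivity insertion can be sketched with one-delayed-group point kinetics. The parameter values below are illustrative assumptions, not benchmark data: a prompt drop in power followed by a slow decay governed by the delayed-neutron precursors.

```python
def point_kinetics(rho, beta=0.0065, lam=0.08, Lambda=1e-4,
                   n0=1.0, t_end=10.0, dt=1e-4):
    """Forward-Euler integration of one-delayed-group point kinetics:
        dn/dt = ((rho - beta)/Lambda) * n + lam * C
        dC/dt = (beta/Lambda) * n - lam * C
    starting from the critical steady state; returns the power n(t_end).
    beta = delayed fraction, lam = precursor decay constant [1/s],
    Lambda = prompt neutron generation time [s] (illustrative values)."""
    n = n0
    C = beta * n0 / (lam * Lambda)   # steady-state precursor concentration
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lambda) * n + lam * C
        dC = (beta / Lambda) * n - lam * C
        n += dt * dn
        C += dt * dC
    return n
```

With rho = -0.001 (about -0.15 dollars for this beta) the power first drops promptly to roughly beta/(beta - rho) of its initial value and then decays on the precursor time scale, the same shape the benchmark's negative-insertion transients exhibit in aggregate.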

  8. Preliminary Results for the OECD/NEA Time Dependent Benchmark using Rattlesnake, Rattlesnake-IQS and TDKENO

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Mausolff, Zander; Weems, Zach; Popp, Dustin; Smith, Kristin; Shriver, Forrest; Goluoglu, Sedat; Prince, Zachary; Ragusa, Jean

    2016-01-01

    One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data has shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable in a preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuels performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validation of the transient solution of Rattlesnake using other time dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considered both two-dimensional (2D) and 3D configurations for a total number of 26 different transients. All are negative reactivity insertions, typically returning to the critical state after some time.

  9. Benchmarking Investments in Advancement: Results of the Inaugural CASE Advancement Investment Metrics Study (AIMS). CASE White Paper

    Science.gov (United States)

    Kroll, Juidith A.

    2012-01-01

    The inaugural Advancement Investment Metrics Study, or AIMS, benchmarked investments and staffing in each of the advancement disciplines (advancement services, alumni relations, communications and marketing, fundraising and advancement management) as well as the return on the investment in fundraising specifically. This white paper reports on the…

  10. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  11. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  12. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  13. Comparative analysis of results between CASMO, MCNP and Serpent for a suite of Benchmark problems on BWR reactors; Analisis comparativo de resultados entre CASMO, MCNP y SERPENT para una suite de problemas Benchmark en reactores BWR

    Energy Technology Data Exchange (ETDEWEB)

    Xolocostli M, J. V.; Vargas E, S.; Gomez T, A. M. [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Reyes F, M. del C.; Del Valle G, E., E-mail: vicente.xolocostli@inin.gob.mx [IPN, Escuela Superior de Fisica y Matematicas, UP - Adolfo Lopez Mateos, Edif. 9, 07738 Mexico D. F. (Mexico)

    2014-10-15

    This paper compares CASMO-4, MCNP6 and Serpent on a suite of benchmark problems for BWR reactors. The benchmark comprises two geometries: a fuel pin cell and a BWR-type fuel assembly. To facilitate the study of reactor physics, the nuclear characteristics of the fuel pin are provided in detail, such as burnup dependence and the reactivity of selected nuclides. For the fuel assembly, the results presented concern the infinite multiplication factor at different burnup steps and different void conditions. Analysis of this set of benchmark problems provides comprehensive test problems for the next generation of BWR fuels with highly extended burnup. It is important to note that the purpose of the comparison is to validate the methodologies used in modelling different operating conditions, as would apply to other BWR assemblies; the results should fall within a range of uncertainty regardless of the code used. The Escuela Superior de Fisica y Matematicas of the Instituto Politecnico Nacional (IPN, Mexico) has accumulated experience in using Serpent, owing to the potential of this code over commercial codes such as CASMO and MCNP. The results obtained for the infinite multiplication factor are encouraging and motivate continuing with the generation of the cross sections of a core so that, as a next step, a nuclear data library can be constructed for use by codes developed as part of the development project of the Mexican Analysis Platform of Nuclear Reactors, AZTLAN. (Author)

  14. Chitin's Functionality as a Novel Disintegrant: Benchmarking Against Commonly Used Disintegrants in Different Physicochemical Environments.

    Science.gov (United States)

    Chaheen, Mohammad; Soulairol, Ian; Bataille, Bernard; Yassine, Ahmad; Belamie, Emmanuel; Sharkawi, Tahmer

    2017-07-01

    Disintegrants are used as excipients to ensure rapid disintegration of pharmaceutical tablets and further ensure proper dissolution of the active pharmaceutical ingredient. This study investigates disintegration mechanisms of chitin and common disintegrants. Swelling assessment (swelling force and swelling ratio) in different media, compaction behavior (pure or mixed with other excipients), tabletability, deformation (Heckel modeling), and compact disintegration times were investigated for the tested disintegrants (alginic acid calcium salt, crospovidone, sodium starch glycolate, croscarmellose sodium, and chitin). Results show that the physicochemical properties of the disintegration medium, such as pH and ionic strength, as well as other formulation ingredients, affect the disintegrant functionalities. Heckel analysis using the mean yield pressure "Py" shows that alginic acid calcium salt is the most brittle among the studied disintegrants, while crospovidone has the most plastic deformation mechanism, followed by chitin. Chitin showed good tabletability and disintegration properties that were not influenced by the physicochemical formulation environment. Chitin is largely available and easily modifiable and thus a promising material that could be used as a multifunctional excipient in tablet formulation. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
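    The Heckel analysis mentioned above linearises densification data as ln(1/(1−D)) = kP + A, where D is the relative density at pressure P; the mean yield pressure is Py = 1/k, with larger Py indicating more brittle behaviour. A minimal fitting sketch on synthetic (pressure, density) data — the numbers are illustrative, not measurements from the study:

```python
import numpy as np

# Synthetic compaction data: relative density D at applied pressure P [MPa],
# generated from a known Heckel line (k = 0.004 MPa^-1, A = 0.7) for illustration.
k_true, a_true = 0.004, 0.7
pressure = np.array([25.0, 50.0, 100.0, 150.0, 200.0, 250.0])
density = 1.0 - np.exp(-(k_true * pressure + a_true))

# Heckel fit: ln(1/(1-D)) vs P is linear with slope k; Py = 1/k.
y = np.log(1.0 / (1.0 - density))
k_fit, a_fit = np.polyfit(pressure, y, 1)
py = 1.0 / k_fit  # mean yield pressure [MPa]
```

    On real compaction data, only the linear (plastic-deformation) region of the Heckel plot is fitted; the synthetic data here is linear throughout, so the recovered Py is exact.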

  15. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    Science.gov (United States)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on these systems.

  16. Towards benchmarking citizen observatories: Features and functioning of online amateur weather networks.

    Science.gov (United States)

    Gharesifard, Mohammad; Wehn, Uta; van der Zaag, Pieter

    2017-05-15

    Crowd-sourced environmental observations are increasingly being considered as having the potential to enhance the spatial and temporal resolution of current data streams from terrestrial and aerial sensors. The rapid diffusion of ICTs during the past decades has facilitated the process of data collection and sharing by the general public and has resulted in the formation of various online environmental citizen observatory networks. Online amateur weather networks are a particular example of such ICT-mediated observatories that are rooted in one of the oldest and most widely practiced citizen science activities, namely amateur weather observation. The objective of this paper is to introduce a conceptual framework that enables a systematic review of the features and functioning of these expanding networks. This is done by considering distinct dimensions, namely the geographic scope and types of participants, the network's establishment mechanism, revenue stream(s), existing communication paradigm, efforts required by data sharers, support offered by platform providers, and issues such as data accessibility, availability and quality. An in-depth understanding of these dimensions helps to analyze various dynamics such as interactions between different stakeholders, motivations to run the networks, and their sustainability. This framework is then utilized to perform a critical review of six existing online amateur weather networks based on publicly available data. 
The main findings of this analysis suggest that: (1) there are several key stakeholders, such as emergency services and local authorities, that are not (yet) engaged in these networks; (2) the revenue stream(s) of online amateur weather networks is one of the least discussed but arguably most important dimensions that is crucial for the sustainability of these networks; and (3) all of the networks included in this study have one or more explicit modes of bi-directional communication; however, this is limited to

  17. Global benchmarking of medical student learning outcomes? Implementation and pilot results of the International Foundations of Medicine Clinical Sciences Exam at The University of Queensland, Australia.

    Science.gov (United States)

    Wilkinson, David; Schafer, Jennifer; Hewett, David; Eley, Diann; Swanson, Dave

    2014-01-01

    To report pilot results for international benchmarking of learning outcomes among 426 final year medical students at the University of Queensland (UQ), Australia. Students took the International Foundations of Medicine (IFOM) Clinical Sciences Exam (CSE) developed by the National Board of Medical Examiners, USA, as a required formative assessment. The IFOM CSE comprises 160 multiple-choice questions in medicine, surgery, obstetrics, paediatrics and mental health, taken over 4.5 hours. Outcomes assessed were: significant implementation issues; IFOM scores and benchmarking against International Comparison Group (ICG) scores and United States Medical Licensing Exam (USMLE) Step 2 Clinical Knowledge (CK) scores; and correlation with UQ medical degree cumulative grade point average (GPA). Implementation as an online exam under university-mandated conditions was successful. The mean IFOM score was 531.3 (minimum 200, maximum 779). The UQ cohort performed better (31% scored below 500) than the ICG (55% below 500). However, 49% of the UQ cohort did not meet the USMLE Step 2 CK minimum score. Correlation between IFOM scores and UQ cumulative GPA was reasonable at 0.552 (p …). Benchmarking is feasible and provides a variety of useful benchmarking opportunities.
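    The two headline statistics in this kind of score benchmarking — the fraction of a cohort below a cut score and the correlation of exam scores with GPA — are straightforward to reproduce. The data below are synthetic stand-ins, not the UQ cohort:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort: 426 students with GPA-linked exam scores (illustrative only).
gpa = rng.normal(5.0, 1.0, 426)
scores = 520.0 + 40.0 * (gpa - 5.0) + rng.normal(0.0, 30.0, 426)

frac_below_cut = float(np.mean(scores < 500.0))   # share under the cut score
r = float(np.corrcoef(scores, gpa)[0, 1])         # Pearson correlation with GPA
```

    With real data, the same two lines against the ICG distribution give the cross-cohort comparison reported in the abstract.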

  18. Evaluation of PWR and BWR assembly benchmark calculations. Status report of EPRI computational benchmark results, performed in the framework of the Netherlands' PINK programme (Joint project of ECN, IRI, KEMA and GKN)

    Energy Technology Data Exchange (ETDEWEB)

    Gruppelaar, H. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Klippel, H.T. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Kloosterman, J.L. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Hoogenboom, J.E. [Technische Univ. Delft (Netherlands). Interfacultair Reactor Instituut; Leege, P.F.A. de [Technische Univ. Delft (Netherlands). Interfacultair Reactor Instituut; Verhagen, F.C.M. [Keuring van Electrotechnische Materialen NV, Arnhem (Netherlands); Bruggink, J.C. [Gemeenschappelijke Kernenergiecentrale Nederland N.V., Dodewaard (Netherlands)

    1993-11-01

    Benchmark results of the Dutch PINK working group on calculational benchmarks on single pin cell and multipin assemblies as defined by EPRI are presented and evaluated. First a short update of methods used by the various institutes involved is given, as well as an update of the status with respect to previously performed pin-cell calculations. Problems detected in previous pin-cell calculations are inspected more closely. A detailed discussion of the results of multipin assembly calculations is given. The assembly consists of 9 pins in a multicell square lattice in which the central pin is filled differently, i.e. a Gd pin for the BWR assembly and a control rod/guide tube for the PWR assembly. The results for pin cells showed a rather good overall agreement between the four participants, although BWR pins with high void fraction turned out to be difficult to calculate. With respect to burnup calculations, good overall agreement for the reactivity swing was obtained, provided that a fine time grid is used. (orig.)

  19. Evaluation of PWR and BWR assembly benchmark calculations. Status report of EPRI computational benchmark results, performed in the framework of the Netherlands' PINK programme (Joint project of ECN, IRI, KEMA and GKN)

    International Nuclear Information System (INIS)

    Gruppelaar, H.; Klippel, H.T.; Kloosterman, J.L.; Hoogenboom, J.E.; Bruggink, J.C.

    1993-11-01

    Benchmark results of the Dutch PINK working group on calculational benchmarks on single pin cell and multipin assemblies as defined by EPRI are presented and evaluated. First a short update of methods used by the various institutes involved is given, as well as an update of the status with respect to previously performed pin-cell calculations. Problems detected in previous pin-cell calculations are inspected more closely. A detailed discussion of the results of multipin assembly calculations is given. The assembly consists of 9 pins in a multicell square lattice in which the central pin is filled differently, i.e. a Gd pin for the BWR assembly and a control rod/guide tube for the PWR assembly. The results for pin cells showed a rather good overall agreement between the four participants, although BWR pins with high void fraction turned out to be difficult to calculate. With respect to burnup calculations, good overall agreement for the reactivity swing was obtained, provided that a fine time grid is used. (orig.)

  20. ZZ ECN-BUBEBO, ECN-Petten Burnup Benchmark Book, Inventories, Afterheat

    International Nuclear Information System (INIS)

    Kloosterman, Jan Leen

    1999-01-01

    Description of program or function: Contains experimental benchmarks which can be used for the validation of burnup code systems and accompanying data libraries. Although the benchmarks presented here are thoroughly described in the literature, it is in many cases not straightforward to retrieve unambiguously the correct input data and corresponding results from the benchmark descriptions. Furthermore, results which can easily be measured are sometimes difficult to calculate because of conversions to be made. Therefore, emphasis has been put on clarifying the input of the benchmarks and on presenting the benchmark results in such a way that they can easily be calculated and compared. For more thorough descriptions of the benchmarks themselves, the literature referred to here should be consulted. This benchmark book is divided into 11 chapters/files containing the following in text and tabular form: chapter 1: Introduction; chapter 2: Burnup Credit Criticality Benchmark Phase 1-B; chapter 3: Yankee-Rowe Core V Fuel Inventory Study; chapter 4: H.B. Robinson Unit 2 Fuel Inventory Study; chapter 5: Turkey Point Unit 3 Fuel Inventory Study; chapter 6: Turkey Point Unit 3 Afterheat Power Study; chapter 7: Dickens Benchmark on Fission Product Energy Release of U-235; chapter 8: Dickens Benchmark on Fission Product Energy Release of Pu-239; chapter 9: Yarnell Benchmark on Decay Heat Measurements of U-233; chapter 10: Yarnell Benchmark on Decay Heat Measurements of U-235; chapter 11: Yarnell Benchmark on Decay Heat Measurements of Pu-239
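    The inventory-to-afterheat conversions these benchmarks exercise reduce, in the simplest case, to a decay summation P(t) = Σᵢ Nᵢ λᵢ Eᵢ e^(−λᵢ t). A toy sketch with two invented fission-product surrogates — none of the numbers come from the benchmark book:

```python
import math

MEV_TO_J = 1.602176634e-13  # conversion factor, MeV -> J

# Toy shutdown inventory: (atoms, decay constant [1/s], mean energy per decay [MeV]).
# Both entries are invented surrogates, not benchmark nuclides.
inventory = [
    (1.0e20, 2.9e-5, 0.5),
    (5.0e19, 1.2e-6, 1.0),
]

def decay_heat(t):
    """Afterheat power [W] of the inventory at time t [s] after shutdown."""
    return sum(n * lam * math.exp(-lam * t) * e_mev * MEV_TO_J
               for n, lam, e_mev in inventory)
```

    A real afterheat calculation sums thousands of nuclides with build-up and decay chains, which is exactly why the benchmark book emphasises unambiguous input data.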

  1. Benchmarking the quality of breast cancer care in a nationwide voluntary system: the first five-year results (2003–2007) from Germany as a proof of concept

    Directory of Open Access Journals (Sweden)

    Rezai Mahdi

    2008-12-01

    Background The main study objectives were: to establish a nationwide voluntary collaborative network of breast centres with independent data analysis; to define suitable quality indicators (QIs) for benchmarking the quality of breast cancer (BC) care; to demonstrate existing differences in BC care quality; and to show that BC care quality improved with benchmarking from 2003 to 2007. Methods BC centres participated voluntarily in a scientific benchmarking procedure. A generic XML-based data set was developed and used for data collection. Nine guideline-based quality targets serving as rate-based QIs were initially defined, reviewed annually and modified or expanded accordingly. QI changes over time were analysed descriptively. Results During 2003–2007, respective increases in participating breast centres and postoperatively confirmed BCs were from 59 to 220 and from 5,994 to 31,656 (> 60% of new BCs/year in Germany). Starting from 9 process QIs, 12 QIs were developed by 2007 as surrogates for long-term outcome. Results for most QIs increased. From 2003 to 2007, the most notable increases seen were for preoperative histological confirmation of diagnosis (58% (in 2003) to 88% (in 2007)), appropriate endocrine therapy in hormone receptor-positive patients (27 to 93%), appropriate radiotherapy after breast-conserving therapy (20 to 79%) and appropriate radiotherapy after mastectomy (8 to 65%). Conclusion Nationwide external benchmarking of BC care is feasible and successful. The benchmarking system described allows both comparisons among participating institutions as well as the tracking of changes in average quality of care over time for the network as a whole. Marked QI increases indicate improved quality of BC care.

  2. Benchmarking the quality of breast cancer care in a nationwide voluntary system: the first five-year results (2003–2007) from Germany as a proof of concept

    Science.gov (United States)

    Brucker, Sara Y; Schumacher, Claudia; Sohn, Christoph; Rezai, Mahdi; Bamberg, Michael; Wallwiener, Diethelm

    2008-01-01

    Background The main study objectives were: to establish a nationwide voluntary collaborative network of breast centres with independent data analysis; to define suitable quality indicators (QIs) for benchmarking the quality of breast cancer (BC) care; to demonstrate existing differences in BC care quality; and to show that BC care quality improved with benchmarking from 2003 to 2007. Methods BC centres participated voluntarily in a scientific benchmarking procedure. A generic XML-based data set was developed and used for data collection. Nine guideline-based quality targets serving as rate-based QIs were initially defined, reviewed annually and modified or expanded accordingly. QI changes over time were analysed descriptively. Results During 2003–2007, respective increases in participating breast centres and postoperatively confirmed BCs were from 59 to 220 and from 5,994 to 31,656 (> 60% of new BCs/year in Germany). Starting from 9 process QIs, 12 QIs were developed by 2007 as surrogates for long-term outcome. Results for most QIs increased. From 2003 to 2007, the most notable increases seen were for preoperative histological confirmation of diagnosis (58% (in 2003) to 88% (in 2007)), appropriate endocrine therapy in hormone receptor-positive patients (27 to 93%), appropriate radiotherapy after breast-conserving therapy (20 to 79%) and appropriate radiotherapy after mastectomy (8 to 65%). Conclusion Nationwide external benchmarking of BC care is feasible and successful. The benchmarking system described allows both comparisons among participating institutions as well as the tracking of changes in average quality of care over time for the network as a whole. Marked QI increases indicate improved quality of BC care. PMID:19055735

  3. Benchmarking and the laboratory

    Science.gov (United States)

    Galloway, M; Nadin, L

    2001-01-01

    This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. Key Words: benchmarking • pathology PMID:11477112

  4. A benchmarking study

    Directory of Open Access Journals (Sweden)

    H. Groessing

    2015-02-01

    A benchmark study for permeability measurement is presented. Past studies by other research groups focusing on the reproducibility of 1D permeability measurements showed high standard deviations of the obtained permeability values (25%), even though a defined test rig with required specifications was used. Within this study, the reproducibility of capacitive in-plane permeability measurements was benchmarked by comparing results from two research sites using this technology. The reproducibility was compared using a glass fibre woven textile and a carbon fibre non-crimp fabric (NCF). These two material types were chosen because of the different electrical properties of glass and carbon with respect to the dielectric capacitive sensors of the permeability measurement systems. To determine the unsaturated permeability characteristics as a function of fibre volume content, the measurements were executed at three different fibre volume contents, with five repetitions each. It was found that the stability and reproducibility of the presented in-plane permeability measurement system is very good in the case of the glass fibre woven textiles. This is true both for the comparison of the repeated measurements and for the comparison between the two different permeameters. These positive results were confirmed by comparison with permeability values for the same textile obtained with an older-generation permeameter applying the same measurement technology. It was also shown that a correct determination of the grammage and the material density is crucial for correct correlation of measured permeability values and fibre volume contents.

  5. Benchmarking Density Functional Theory Approaches for the Description of Symmetry-Breaking in Long Polymethine Dyes

    KAUST Repository

    Gieseking, Rebecca L.; Ravva, Mahesh Kumar; Coropceanu, Veaceslav; Bredas, Jean-Luc

    2016-01-01

    in polar solvents. Using an approach based on LRC functionals, a reduction in the crossover length is found with increasing medium dielectric constant, which is related to localization of the excess charge on the end groups. Symmetry-breaking is associated

  6. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...
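    The DEA models used in this style of regulatory benchmarking reduce to one small linear program per decision-making unit (DMU). A sketch of the input-oriented CCR (constant-returns-to-scale) efficiency score, solved with `scipy.optimize.linprog`; the three-operator data set is invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency score for each DMU.
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # Decision variables: theta, lambda_1..lambda_n.
        c = np.r_[1.0, np.zeros(n)]
        # Inputs: sum_j lambda_j * x_ij <= theta * x_io for each input i.
        a_in = np.c_[-X[o].reshape(m, 1), X.T]
        # Outputs: sum_j lambda_j * y_rj >= y_ro for each output r.
        a_out = np.c_[np.zeros((s, 1)), -Y.T]
        res = linprog(c, A_ub=np.vstack([a_in, a_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# Invented data: 3 operators, 1 input (cost), 1 output (delivered energy).
X = np.array([[2.0], [4.0], [3.0]])
Y = np.array([[2.0], [2.0], [3.0]])
scores = dea_ccr_input(X, Y)
```

    Operator 2 uses twice the input of an efficient peer for the same output, so its score is 0.5; a regulator would read that as the proportional input reduction needed to reach the frontier.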

  7. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  8. Benchmarking the quality of breast cancer care in a nationwide voluntary system: the first five-year results (2003-2007) from Germany as a proof of concept.

    Science.gov (United States)

    Brucker, Sara Y; Schumacher, Claudia; Sohn, Christoph; Rezai, Mahdi; Bamberg, Michael; Wallwiener, Diethelm

    2008-12-02

    The main study objectives were: to establish a nationwide voluntary collaborative network of breast centres with independent data analysis; to define suitable quality indicators (QIs) for benchmarking the quality of breast cancer (BC) care; to demonstrate existing differences in BC care quality; and to show that BC care quality improved with benchmarking from 2003 to 2007. BC centres participated voluntarily in a scientific benchmarking procedure. A generic XML-based data set was developed and used for data collection. Nine guideline-based quality targets serving as rate-based QIs were initially defined, reviewed annually and modified or expanded accordingly. QI changes over time were analysed descriptively. During 2003-2007, respective increases in participating breast centres and postoperatively confirmed BCs were from 59 to 220 and from 5,994 to 31,656 (> 60% of new BCs/year in Germany). Starting from 9 process QIs, 12 QIs were developed by 2007 as surrogates for long-term outcome. Results for most QIs increased. From 2003 to 2007, the most notable increases seen were for preoperative histological confirmation of diagnosis (58% (in 2003) to 88% (in 2007)), appropriate endocrine therapy in hormone receptor-positive patients (27 to 93%), appropriate radiotherapy after breast-conserving therapy (20 to 79%) and appropriate radiotherapy after mastectomy (8 to 65%). Nationwide external benchmarking of BC care is feasible and successful. The benchmarking system described allows both comparisons among participating institutions as well as the tracking of changes in average quality of care over time for the network as a whole. Marked QI increases indicate improved quality of BC care.

  9. Benchmark Linelists and Radiative Cooling Functions for LiH Isotopologues

    Science.gov (United States)

    Diniz, Leonardo G.; Alijah, Alexander; Mohallem, José R.

    2018-04-01

    Linelists and radiative cooling functions in the local thermodynamic equilibrium limit have been computed for the six most important isotopologues of lithium hydride, 7LiH, 6LiH, 7LiD, 6LiD, 7LiT, and 6LiT. The data are based on the most accurate dipole moment and potential energy curves presently available, the latter including adiabatic and leading relativistic corrections. Distance-dependent reduced vibrational masses are used to account for non-adiabatic corrections of the rovibrational energy levels. Even for 7LiH, for which linelists have been reported previously, the present linelist is more accurate. Among all isotopologues, 7LiH and 6LiH are the best coolants, as shown by the radiative cooling functions.
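    As a sketch of how an LTE radiative cooling function is computed from a linelist, the standard per-molecule form (the paper's exact conventions and units may differ) is W(T) = Σ A_ul hcν_ul g_u exp(-c₂E_u/T) / Q(T):

```python
import math

C2 = 1.4387769        # cm*K, second radiation constant hc/kB
HC = 1.98644586e-23   # J*cm, converts a wavenumber in cm^-1 to energy in J

def cooling_function(levels, lines, T):
    """Per-molecule LTE radiative cooling function (standard textbook
    form; a sketch, not the paper's code).
    levels: iterable of (E in cm^-1, degeneracy g) for the partition sum;
    lines:  iterable of (A_ul in s^-1, nu_ul in cm^-1, E_u in cm^-1, g_u)."""
    Q = sum(g * math.exp(-C2 * E / T) for E, g in levels)
    W = sum(A * HC * nu * g_u * math.exp(-C2 * E_u / T)
            for A, nu, E_u, g_u in lines)
    return W / Q   # energy radiated per second per molecule (J/s)
```

    With a full linelist and level list for each isotopologue, evaluating this over a temperature grid yields the cooling curves that identify 7LiH and 6LiH as the best coolants.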

  10. Functionalized single-walled carbon nanotube-based fuel cell benchmarked against US DOE 2017 technical targets.

    Science.gov (United States)

    Jha, Neetu; Ramesh, Palanisamy; Bekyarova, Elena; Tian, Xiaojuan; Wang, Feihu; Itkis, Mikhail E; Haddon, Robert C

    2013-01-01

    Chemically modified single-walled carbon nanotubes (SWNTs) with varying degrees of functionalization were utilized for the fabrication of SWNT thin film catalyst support layers (CSLs) in polymer electrolyte membrane fuel cells (PEMFCs), which were suitable for benchmarking against the US DOE 2017 targets. Use of the optimum level of SWNT -COOH functionality allowed the construction of a prototype SWNT-based PEMFC with total Pt loading of 0.06 mg(Pt)/cm²--well below the value of 0.125 mg(Pt)/cm² set as the US DOE 2017 technical target for total Pt group metals (PGM) loading. This prototype PEMFC also approaches the technical target for the total Pt content per kW of power (<0.125 g(PGM)/kW) at cell potential 0.65 V: a value of 0.15 g(Pt)/kW was achieved at 80°C/22 psig testing conditions, which was further reduced to 0.12 g(Pt)/kW at 35 psig back pressure.
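    The g(Pt)/kW figure follows directly from the areal Pt loading and the areal power density at the rated cell potential. A unit-conversion sketch (the ~400 mW/cm² power density is implied by the reported numbers, not stated in the record):

```python
def pt_per_kw(pt_loading_mg_cm2, power_density_mw_cm2):
    """Pt content per unit power: (mg/cm^2) / (mW/cm^2) gives g/W
    (the milli prefixes cancel), so multiply by 1000 for g/kW."""
    return pt_loading_mg_cm2 / power_density_mw_cm2 * 1000.0

# 0.06 mg(Pt)/cm^2 at ~400 mW/cm^2 (0.65 V) reproduces the reported 0.15 g(Pt)/kW.
value = pt_per_kw(0.06, 400.0)
```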

  11. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the biota. While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, NOAEL-based toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
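    The tier-1 screening rule described above is a simple comparison. A sketch of the decision logic (function and chemical names are illustrative only):

```python
def tier1_screen(measured, benchmarks):
    """Tier-1 screening assessment: retain as a contaminant of potential
    concern (COPC) any contaminant whose measured concentration exceeds
    its NOAEL-based benchmark, or for which no benchmark exists;
    everything else is excluded from further consideration."""
    copcs = []
    for chem, conc in measured.items():
        bench = benchmarks.get(chem)
        if bench is None or conc > bench:
            copcs.append(chem)
    return copcs

# Cd exceeds its benchmark and is carried into the tier-2 baseline
# ecological risk assessment; Zn screens out.
retained = tier1_screen({"Cd": 2.0, "Zn": 0.1}, {"Cd": 1.0, "Zn": 5.0})
```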

  12. Benchmarking Swiss electricity grids

    International Nuclear Information System (INIS)

    Walti, N.O.; Weber, Ch.

    2001-01-01

    This extensive article describes a pilot benchmarking project initiated by the Swiss Association of Electricity Enterprises that assessed 37 Swiss utilities. The data collected from these utilities on a voluntary basis included data on technical infrastructure, investments and operating costs. These various factors are listed and discussed in detail. The assessment methods and rating mechanisms that provided the benchmarks are discussed, and the results of the pilot study, which are to form the basis of benchmarking procedures for the grid regulation authorities under Switzerland's planned electricity market law, are presented. Examples of the practical use of the benchmarking methods are given, and cost-efficiency questions still open in the area of investment and operating costs are listed. Prefaces by the Swiss Association of Electricity Enterprises and the Swiss Federal Office of Energy complete the article

  13. VVER-1000 coolant transient benchmark. Phase 1 (V1000CT-1). Vol. 3: summary results of exercise 2 on coupled 3-D kinetics/core thermal-hydraulics

    International Nuclear Information System (INIS)

    2007-01-01

    In the field of coupled neutronics/thermal-hydraulics computation there is a need to enhance scientific knowledge in order to develop advanced modelling techniques for new nuclear technologies and concepts, as well as current applications. Recently developed best-estimate computer code systems for modelling 3-D coupled neutronics/thermal-hydraulics transients in nuclear cores and for the coupling of core phenomena and system dynamics need to be compared against each other and validated against results from experiments. International benchmark studies have been set up for this purpose. The present volume is a follow-up to the first two volumes. While the first described the specification of the benchmark, the second presented the results of the first exercise that identified the key parameters and important issues concerning the thermal-hydraulic system modelling of the simulated transient caused by the switching on of a main coolant pump when the other three were in operation. Volume 3 summarises the results for Exercise 2 of the benchmark that identifies the key parameters and important issues concerning the 3-D neutron kinetics modelling of the simulated transient. These studies are based on an experiment that was conducted by Bulgarian and Russian engineers during the plant-commissioning phase at the VVER-1000 Kozloduy Unit 6. The final volume will soon be published, completing Phase 1 of this study. (authors)

  14. Benchmark calculations with correlated molecular wave functions. VII. Binding energy and structure of the HF dimer

    International Nuclear Information System (INIS)

    Peterson, K.A.; Dunning, T.H. Jr.

    1995-01-01

    The hydrogen bond energy and geometry of the HF dimer have been investigated using the series of correlation consistent basis sets from aug-cc-pVDZ to aug-cc-pVQZ and several theoretical methods, including Møller-Plesset perturbation and coupled cluster theories. Estimates of the complete basis set (CBS) limit have been derived for the binding energy of (HF)2 at each level of theory by utilizing the regular convergence characteristics of the correlation consistent basis sets. CBS limit hydrogen bond energies of 3.72, 4.53, 4.55, and 4.60 kcal/mol are estimated at the SCF, MP2, MP4, and CCSD(T) levels of theory, respectively. CBS limits for the intermolecular F-F distance are estimated to be 2.82, 2.74, 2.73, and 2.73 Å, respectively, for the same correlation methods. The effects of basis set superposition error (BSSE) on both the binding energies and structures have also been investigated for each basis set using the standard function counterpoise (CP) method. While BSSE has a negligible effect on the intramolecular geometries, the CP-corrected F-F distance and binding energy differ significantly from the uncorrected values for the aug-cc-pVDZ basis set; these differences decrease regularly with increasing basis set size, and both approaches converge to the same values in the CBS limit. Best estimates for the equilibrium properties of the HF dimer from CCSD(T) calculations are De = 4.60 kcal/mol, RFF = 2.73 Å, r1 = 0.922 Å, r2 = 0.920 Å, θ1 = 7° and θ2 = 111°
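    The "regular convergence characteristics" exploited above are often modelled with a three-point exponential form E(n) = E_CBS + B·exp(-Cn) over the n = 2, 3, 4 (aug-cc-pVDZ/TZ/QZ) energies. A sketch of that common scheme (the record does not state the paper's exact extrapolation formula):

```python
import math

def cbs_exponential(e2, e3, e4):
    """Closed-form CBS limit from three correlation consistent energies,
    assuming E(n) = E_CBS + B*exp(-C*n) for cardinal numbers n = 2, 3, 4.
    The ratio of successive differences recovers exp(-C) directly."""
    r = (e3 - e4) / (e2 - e3)          # equals exp(-C)
    return e4 - (e3 - e4) * r / (1.0 - r)

# Exact on synthetic data generated from the model form:
e = lambda n: -100.0 + 5.0 * math.exp(-1.1 * n)
e_cbs = cbs_exponential(e(2), e(3), e(4))   # recovers -100.0
```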

  15. PHISICS/RELAP5-3D RESULTS FOR EXERCISES II-1 AND II-2 OF THE OECD/NEA MHTGR-350 BENCHMARK

    Energy Technology Data Exchange (ETDEWEB)

    Strydom, Gerhard [Idaho National Laboratory

    2016-03-01

    The Idaho National Laboratory (INL) Advanced Reactor Technologies (ART) High-Temperature Gas-Cooled Reactor (HTGR) Methods group currently leads the Modular High-Temperature Gas-Cooled Reactor (MHTGR) 350 benchmark. The benchmark consists of a set of lattice-depletion, steady-state, and transient problems that can be used by HTGR simulation groups to assess the performance of their code suites. This paper summarizes the results obtained for the first two transient exercises defined for Phase II of the benchmark. The Parallel and Highly Innovative Simulation for INL Code System (PHISICS), coupled with the INL system code RELAP5-3D, was used to generate the results for the Depressurized Conduction Cooldown (DCC) (exercise II-1a) and Pressurized Conduction Cooldown (PCC) (exercise II-2) transients. These exercises require the time-dependent simulation of coupled neutronics and thermal-hydraulics phenomena, and utilize the steady-state solution previously obtained for exercise I-3 of Phase I. This paper also includes a comparison of the benchmark results obtained with a traditional system code “ring” model against those from a more detailed “block” model that includes kinetics feedback on an individual block level and thermal feedback on a triangular sub-mesh. The higher spatial fidelity that can be obtained with the block model is illustrated by comparisons of the maximum fuel temperatures, especially under the natural convection conditions that dominate the DCC and PCC events. Differences of up to 125 K (or 10%) were observed between the ring and block model predictions of the DCC transient, mostly due to the block model’s capability of tracking individual block decay powers and more detailed helium flow distributions. In general, the block model only required DCC and PCC calculation times about twice as long as the ring model’s, and it therefore seems that the additional development and calculation time required for the block model could be worth the gain that can be

  16. Medico-economic evaluation of healthcare products. Methodology for defining a significant impact on French health insurance costs and selection of benchmarks for interpreting results.

    Science.gov (United States)

    Dervaux, Benoît; Baseilhac, Eric; Fagon, Jean-Yves; Biot, Claire; Blachier, Corinne; Braun, Eric; Debroucker, Frédérique; Detournay, Bruno; Ferretti, Carine; Granger, Muriel; Jouan-Flahault, Chrystel; Lussier, Marie-Dominique; Meyer, Arlette; Muller, Sophie; Pigeon, Martine; De Sahb, Rima; Sannié, Thomas; Sapède, Claudine; Vray, Muriel

    2014-01-01

    Decree No. 2012-1116 of 2 October 2012 on the medico-economic assignments of the French National Authority for Health (Haute autorité de santé, HAS) significantly alters the conditions for accessing the health products market in France. This paper presents a theoretical framework for interpreting the results of the economic evaluation of health technologies and summarises the facts available in France for developing benchmarks that will be used to interpret incremental cost-effectiveness ratios. This literature review shows that it is difficult to determine a threshold value, but also that it is difficult to interpret incremental cost-effectiveness ratio (ICER) results without one. In this context, round table participants favour a pragmatic approach based on "benchmarks", as opposed to a threshold value, from an interpretative and normative perspective, i.e. benchmarks that can change over time based on feedback. © 2014 Société Française de Pharmacologie et de Thérapeutique.
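    For reference, the ICER that these benchmarks help interpret is the ratio of incremental cost to incremental health effect. A minimal sketch with hypothetical figures:

```python
def icer(cost_new, effect_new, cost_ref, effect_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of health effect (e.g. euros per QALY gained) of a new technology
    relative to the reference strategy."""
    d_cost = cost_new - cost_ref
    d_effect = effect_new - effect_ref
    if d_effect == 0:
        raise ValueError("no incremental effectiveness: ICER undefined")
    return d_cost / d_effect

# Hypothetical: +20,000 EUR for +0.5 QALY gives 40,000 EUR/QALY, which is
# then read against evolving benchmarks rather than a fixed threshold.
ratio = icer(60_000, 6.0, 40_000, 5.5)
```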

  17. Benchmark CCSD(T) and DFT study of binding energies in Be7-12: in search of a reliable DFT functional for beryllium clusters

    Science.gov (United States)

    Labanc, Daniel; Šulka, Martin; Pitoňák, Michal; Černušák, Ivan; Urban, Miroslav; Neogrády, Pavel

    2018-05-01

    We present a computational study of the stability of small homonuclear beryllium clusters Be7-12 in singlet electronic states. Our predictions are based on highly correlated CCSD(T) coupled cluster calculations. Basis set convergence towards the complete basis set limit, as well as the role of 1s core electron correlation, are carefully examined. Our CCSD(T) data for binding energies of Be7-12 clusters serve as a benchmark for performance assessment of several density functional theory (DFT) methods frequently used in beryllium cluster chemistry. We observe that, from Be10 clusters on, the deviation from the CCSD(T) benchmarks is stable with respect to size, fluctuating within a 0.02 eV error bar for most examined functionals. This opens up the possibility of scaling the DFT binding energies for large Be clusters using CCSD(T) benchmark values for smaller clusters. We also tried to find analogies between the performance of DFT functionals for Be clusters and for the valence-isoelectronic Mg clusters investigated recently in Truhlar's group. We conclude that it is difficult to find DFT functionals that perform reasonably well for both beryllium and magnesium clusters. Out of the 12 functionals examined, only the M06-2X functional gives reasonably accurate and balanced binding energies for both Be and Mg clusters.
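    The size-stable deviation noted above suggests a simple additive correction. A sketch (function name and all energy values hypothetical, for illustration only):

```python
def corrected_binding_energy(e_dft_large, e_dft_small, e_ccsdt_small):
    """Shift a DFT binding energy for a large cluster by the
    CCSD(T) - DFT deviation observed for a smaller benchmark cluster.
    This is justified only when, as the abstract reports, the deviation
    is stable with cluster size (fluctuations within ~0.02 eV)."""
    return e_dft_large + (e_ccsdt_small - e_dft_small)

# Hypothetical binding energies in eV: DFT for a large cluster,
# then DFT and CCSD(T) for a smaller benchmark cluster.
estimate = corrected_binding_energy(12.40, 9.10, 9.16)   # ~12.46 eV
```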

  18. CHONDROSARCOMA OF BONE - ONCOLOGIC AND FUNCTIONAL RESULTS

    NARCIS (Netherlands)

    VANLOON, CJM; VETH, RPH; PRUSZCZYNSKI, M; WOBBES, T; LEMMENS, JAM; VANHORN, J

    1994-01-01

    A retrospective review of 27 patients (21 males and 6 females) with chondrosarcoma of bone was performed to evaluate the oncologic and functional results. The average age of the patients was 48 years (range: 17-76). The tumor sites were pelvis in 10 cases, distal femur in 2, proximal tibia in 3, rib

  19. Interim results of the sixth three-dimensional AER dynamic benchmark problem calculation. Solution of problem with DYN3D and RELAP5-3D codes

    International Nuclear Information System (INIS)

    Hadek, J.; Kral, P.; Macek, J.

    2001-01-01

    The paper gives a brief survey of the 6th three-dimensional AER dynamic benchmark calculation results received with the codes DYN3D and RELAP5-3D at NRI Rez. This benchmark was defined at the 10th AER Symposium. Its initiating event is a double-ended break in the steam line of steam generator No. 1 in a WWER-440/213 plant at the end of the first fuel cycle and in hot full power conditions. Stationary and burnup calculations, as well as tuning of the initial state before the transient, were performed with the code DYN3D. Transient calculations were made with the system code RELAP5-3D. The KASSETA library was used for the generation of reactor core neutronic parameters. The detailed six-loop model of NPP Dukovany was adopted for the 6th AER dynamic benchmark purposes. The RELAP5-3D full core neutronic model was connected with a seven-coolant-channel thermal-hydraulic model of the core (Authors)

  20. Benchmark Credentialing Results for NRG-BR001: The First National Cancer Institute-Sponsored Trial of Stereotactic Body Radiation Therapy for Multiple Metastases

    Energy Technology Data Exchange (ETDEWEB)

    Al-Hallaq, Hania A., E-mail: halhallaq@radonc.uchicago.edu [Department of Radiation and Cellular Oncology, Chicago, Illinois (United States); Chmura, Steven J. [Department of Radiation and Cellular Oncology, Chicago, Illinois (United States); Salama, Joseph K. [Department of Radiation Oncology, Durham, North Carolina (United States); Lowenstein, Jessica R. [Imaging and Radiation Oncology Core Group (IROC) Houston, MD Anderson Cancer Center, Houston, Texas (United States); McNulty, Susan; Galvin, James M. [Imaging and Radiation Oncology Core Group (IROC) PHILADELPHIA RT, Philadelphia, Pennsylvania (United States); Followill, David S. [Imaging and Radiation Oncology Core Group (IROC) Houston, MD Anderson Cancer Center, Houston, Texas (United States); Robinson, Clifford G. [Department of Radiation Oncology, St Louis, Missouri (United States); Pisansky, Thomas M. [Department of Radiation Oncology, Rochester, Minnesota (United States); Winter, Kathryn A. [NRG Oncology Statistics and Data Management Center, Philadelphia, Pennsylvania (United States); White, Julia R. [Department of Radiation Oncology, Columbus, Ohio (United States); Xiao, Ying [Imaging and Radiation Oncology Core Group (IROC) PHILADELPHIA RT, Philadelphia, Pennsylvania (United States); Department of Radiation Oncology, Philadelphia, Pennsylvania (United States); Matuszak, Martha M. [Department of Radiation Oncology, Ann Arbor, Michigan (United States)

    2017-01-01

    Purpose: The NRG-BR001 trial is the first National Cancer Institute–sponsored trial to treat multiple (range 2-4) extracranial metastases with stereotactic body radiation therapy. Benchmark credentialing is required to ensure adherence to this complex protocol, in particular, for metastases in close proximity. The present report summarizes the dosimetric results and approval rates. Methods and Materials: The benchmark used anonymized data from a patient with bilateral adrenal metastases, separated by <5 cm of normal tissue. Because the planning target volume (PTV) overlaps with organs at risk (OARs), institutions must use the planning priority guidelines to balance PTV coverage (45 Gy in 3 fractions) against OAR sparing. Submitted plans were processed by the Imaging and Radiation Oncology Core and assessed by the protocol co-chairs by comparing the doses to targets, OARs, and conformity metrics using nonparametric tests. Results: Of 63 benchmarks submitted through October 2015, 94% were approved, with 51% approved at the first attempt. Most used volumetric arc therapy (VMAT) (78%), a single plan for both PTVs (90%), and prioritized the PTV over the stomach (75%). The median dose to 95% of the volume was 44.8 ± 1.0 Gy and 44.9 ± 1.0 Gy for the right and left PTV, respectively. The median dose to 0.03 cm³ was 14.2 ± 2.2 Gy to the spinal cord and 46.5 ± 3.1 Gy to the stomach. Plans that spared the stomach significantly reduced the dose to the left PTV and stomach. Conformity metrics were significantly better for single plans that simultaneously treated both PTVs with VMAT, intensity modulated radiation therapy, or 3-dimensional conformal radiation therapy compared with separate plans. No significant differences existed in the dose at 2 cm from the PTVs. Conclusions: Although most plans used VMAT, the range of conformity and dose falloff was large. The decision to prioritize either OARs or PTV coverage varied considerably, suggesting that

  1. Benchmarking in Czech Higher Education

    OpenAIRE

    Plaček Michal; Ochrana František; Půček Milan

    2015-01-01

    The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expressed need for it, and the importance of benchmarking as a highly suitable performance-management tool in less developed countries, are the impetus for the second part of our article. Base...

  2. Shielding Benchmark Computational Analysis

    International Nuclear Information System (INIS)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-01-01

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for the development of more accurate cross-section libraries, computer code development of radiation transport modeling, and building accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper the benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC)

  3. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models

  4. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  5. Benchmarking Using Basic DBMS Operations

    Science.gov (United States)

    Crolotte, Alain; Ghazal, Ahmad

    The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used these benchmarks to show the superiority and competitive edge of their products. Over time, however, TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and compare two systems. Finally, we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
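    A micro-benchmark of basic operations in the spirit described above can be sketched in a few lines (XMarq's own queries, schema and metrics are not reproduced here; sqlite3 and a toy table stand in purely for illustration):

```python
import sqlite3
import time

def time_query(conn, sql, repeats=3):
    """Time one basic operation (scan, aggregation, join, index access)
    the way a simple micro-benchmark would: report the best of several
    repeats to damp cache and scheduler noise."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        conn.execute(sql).fetchall()
        best = min(best, time.perf_counter() - start)
    return best

# Toy workload (XMarq itself builds on the TPC-H data model):
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lineitem (qty INTEGER, price REAL)")
conn.executemany("INSERT INTO lineitem VALUES (?, ?)",
                 [(i % 50, i * 1.5) for i in range(10_000)])
scan = time_query(conn, "SELECT COUNT(*) FROM lineitem WHERE qty > 25")
agg  = time_query(conn, "SELECT qty, SUM(price) FROM lineitem GROUP BY qty")
```

    Comparing such per-operation timings across two systems is the kind of metric the framework proposes.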

  6. CEC thermal-hydraulic benchmark exercise on Fiploc verification experiment F2 in Battelle model containment. Experimental phases 2, 3 and 4. Results of comparisons

    International Nuclear Information System (INIS)

    Fischer, K.; Schall, M.; Wolf, L.

    1993-01-01

    The present final report comprises the major results of Phase II of the CEC thermal-hydraulic benchmark exercise on Fiploc verification experiment F2 in the Battelle model containment, experimental phases 2, 3 and 4, which was organized and sponsored by the Commission of the European Communities for the purpose of furthering the understanding and analysis of long-term thermal-hydraulic phenomena inside containments during and after severe core accidents. This benchmark exercise received high European attention with eight organizations from six countries participating with eight computer codes during phase 2. Altogether 18 results from computer code runs were supplied by the participants and constitute the basis for comparisons with the experimental data contained in this publication. This reflects both the high technical interest in, as well as the complexity of, this CEC exercise. Major comparison results between computations and data are reported on all important quantities relevant for containment analyses during long-term transients. These comparisons comprise pressure, steam and air content, velocities and their directions, heat transfer coefficients and saturation ratios. Agreements and disagreements are discussed for each participating code/institution, conclusions drawn and recommendations provided. The phase 2 CEC benchmark exercise provided an up-to-date state-of-the-art status review of the thermal-hydraulic capabilities of present computer codes for containment analyses. This exercise has shown that all of the participating codes can simulate the important global features of the experiment correctly, like: temperature stratification, pressure and leakage, heat transfer to structures, relative humidity, collection of sump water. Several weaknesses of individual codes were identified, and this may help to promote their development. As a general conclusion it may be said that while there is still a wide area of necessary extensions and improvements, the

  7. Benchmarking of MCNPX Results with Measured Tritium Production Rate and Neutron Flux at the Mock-up of EU TBM (HCPB concept)

    Energy Technology Data Exchange (ETDEWEB)

    Tore, C.; Ortego, P.

    2013-07-01

    In order to reassess the available design results of Test Blanket Modules (TBMs), a framework contract agreement between F4E and IDOM-Spain has been signed. SEA SL-Spain and UNED-Spain participate as sub-contractors of IDOM. In this study, a qualification of the MCNPX code and nuclear data libraries is performed by benchmarking against the measured tritium production and neutron flux at the mock-up of the EU TBM, HCPB concept. The irradiation and measurements had been performed in the frame of the European Fusion Technology Programme by ENEA-Italy, TUD-Germany and JAERI-Japan.

  8. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm…, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible…

  9. HPCG Benchmark Technical Specification

    Energy Technology Data Exchange (ETDEWEB)

    Heroux, Michael Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States)

    2013-10-01

    The High Performance Conjugate Gradient (HPCG) benchmark [cite SNL, UTK reports] is a tool for ranking computer systems based on a simple additive Schwarz, symmetric Gauss-Seidel preconditioned conjugate gradient solver. HPCG is similar to the High Performance Linpack (HPL), or Top 500, benchmark [1] in its purpose, but HPCG is intended to better represent how today’s applications perform. In this paper we describe the technical details of HPCG: how it is designed and implemented, what code transformations are permitted and how to interpret and report results.
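    The solver at the heart of HPCG, a symmetric Gauss-Seidel preconditioned conjugate gradient, can be sketched as follows (a dense NumPy illustration of the algorithm only; HPCG's reference implementation is a sparse, distributed code operating on a 27-point stencil problem):

```python
import numpy as np

def sgs_apply(A, r):
    """Apply one symmetric Gauss-Seidel preconditioning step,
    M = (D+L) D^-1 (D+U): forward solve, diagonal scale, backward solve.
    Dense triangular systems are solved here purely for illustration."""
    D = np.diag(np.diag(A))
    t = np.linalg.solve(np.tril(A), r)         # (D + L) t = r
    return np.linalg.solve(np.triu(A), D @ t)  # (D + U) z = D t

def pcg(A, b, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for a symmetric positive
    definite A, the iteration HPCG is built around."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = sgs_apply(A, r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = sgs_apply(A, r)
        rz_next = r @ z
        p = z + (rz_next / rz) * p
        rz = rz_next
    return x
```

    Unlike HPL's dense LU factorization, this sparse-style iteration is dominated by memory bandwidth and irregular access, which is why HPCG better represents how many applications perform.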

  10. RB reactor benchmark cores

    International Nuclear Information System (INIS)

    Pesic, M.

    1998-01-01

    A selected set of the RB reactor benchmark cores is presented in this paper. The first results of validation of the well-known Monte Carlo MCNP™ code and the adjoining neutron cross-section libraries are given. They confirm the idea behind the proposal of the new U-D2O criticality benchmark system and support the intention to include this system in the next edition of the recent OECD/NEA project, the International Handbook of Evaluated Criticality Safety Benchmark Experiments, in the near future. (author)

  11. Preliminary results of the seventh three-dimensional AER dynamic benchmark problem calculation. Solution with DYN3D and RELAP5-3D codes

    International Nuclear Information System (INIS)

    Bencik, M.; Hadek, J.

    2011-01-01

    The paper gives a brief survey of the seventh three-dimensional AER dynamic benchmark calculation results received with the codes DYN3D and RELAP5-3D at Nuclear Research Institute Rez. This benchmark was defined at the twentieth AER Symposium in Hanasaari (Finland). It is focused on investigation of transient behaviour in a WWER-440 nuclear power plant. Its initiating event is opening of the main isolation valve and re-connection of the loop with its main circulation pump in operation. The WWER-440 plant is at the end of the first fuel cycle and in hot full power conditions. Stationary and burnup calculations were performed with the code DYN3D. Transient calculation was made with the system code RELAP5-3D. The two-group homogenized cross-section library HELGD05, created with the HELIOS code, was used for the generation of reactor core neutronic parameters. The detailed six-loop model of NPP Dukovany was adopted for the seventh AER dynamic benchmark purposes. The RELAP5-3D full core neutronic model was coupled with 49 core thermal-hydraulic channels and 8 reflector channels connected to the three-dimensional model of the reactor vessel. A detailed nodalization of the reactor downcomer, lower and upper plenum was used. Mixing in the lower and upper plenum was simulated. The first part of the paper contains a brief characteristic of the RELAP5-3D system code and a short description of the NPP input deck and reactor core model. The second part shows the time dependencies of important global and local parameters. (Authors)

  12. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Wegner, P.; Wettig, T.

    2003-09-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC. (orig.)

  13. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Stueben, H.; Wegner, P.; Wettig, T.; Wittig, H.

    2004-01-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC.

  14. Managing for Results in America's Great City Schools. A Report of the Performance Measurement and Benchmarking Project, Spring 2008

    Science.gov (United States)

    Council of the Great City Schools, 2008

    2008-01-01

    This report describes statistical indicators developed by the Council of the Great City Schools and its member districts to measure big-city school performance on a range of operational functions in business, finance, human resources and technology. The report also presents data city-by-city on those indicators. This is the second time that…

  15. Managing for Results in America's Great City Schools. A Report of the Performance Measurement and Benchmarking Project

    Science.gov (United States)

    Council of the Great City Schools, 2008

    2008-01-01

    This report describes statistical indicators developed by the Council of the Great City Schools and its member districts to measure big-city school performance on a range of operational functions in business, finance, human resources and technology. The report also presents data city-by-city on those indicators. This is the second time that…

  16. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy, in other words its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs: prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth.

  17. Benchmarking the Netherlands. Benchmarking for growth

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy, in other words its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs: prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth.

  18. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    The objective of this study was to identify usage of foodservice performance measures, important activities in foodservice benchmarking, and benchmarking attitudes, beliefs, and practices by foodservice directors...

  19. Comparison of the PHISICS/RELAP5-3D Ring and Block Model Results for Phase I of the OECD MHTGR-350 Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom

    2014-04-01

    The INL PHISICS code system consists of three modules providing improved core simulation capability: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. Coupling of the PHISICS code suite to the thermal-hydraulics system code RELAP5-3D has recently been finalized, and as part of the code verification and validation program the exercises defined for Phase I of the OECD/NEA MHTGR 350 MW Benchmark were completed. This paper provides an overview of the MHTGR Benchmark and presents selected results of the three steady-state exercises (1-3) defined for Phase I. For Exercise 1, a stand-alone steady-state neutronics solution for an End of Equilibrium Cycle Modular High Temperature Reactor (MHTGR) was calculated with INSTANT, using the provided geometry, material descriptions, and detailed cross-section libraries. Exercise 2 required the modeling of a stand-alone thermal fluids solution. The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 combined the first two exercises in a coupled neutronics and thermal fluids solution, and the coupled code suite PHISICS/RELAP5-3D was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of the traditional RELAP5-3D “ring” model approach with a much more detailed model that includes kinetics feedback at the individual block level and thermal feedback on a triangular sub-mesh. The higher fidelity of the block model is illustrated with comparative results for the temperature, power density and flux distributions, and the typical under-predictions produced by the ring model approach are highlighted.

  20. New results for 5-point functions

    International Nuclear Information System (INIS)

    Gluza, J.

    2007-12-01

    Bhabha scattering is one of the processes at the ILC where high-precision data will be expected. The complete NNLO corrections include radiative loop corrections, with contributions from Feynman diagrams with five external legs. We take these diagrams as an example and discuss several features of the evaluation of pentagon diagrams. The tensor functions are usually reduced to simpler scalar functions. Here we study, as an alternative, the application of Mellin-Barnes representations to 5-point functions. There is no evidence for an improved numerical evaluation of their finite, physical parts. However, the approach gives interesting insights into the treatment of the IR singularities. (orig.)

  1. Effects of Secondary Circuit Modeling on Results of Pressurized Water Reactor Main Steam Line Break Benchmark Calculations with New Coupled Code TRAB-3D/SMABRE

    International Nuclear Information System (INIS)

    Daavittila, Antti; Haemaelaeinen, Anitta; Kyrki-Rajamaeki, Riitta

    2003-01-01

    All of the three exercises of the Organization for Economic Cooperation and Development/Nuclear Regulatory Commission pressurized water reactor main steam line break (PWR MSLB) benchmark were calculated at VTT, the Technical Research Centre of Finland. For the first exercise, the plant simulation with point-kinetic neutronics, the thermal-hydraulics code SMABRE was used. The second exercise was calculated with the three-dimensional reactor dynamics code TRAB-3D, and the third exercise with the combination TRAB-3D/SMABRE. VTT has over ten years' experience of coupling neutronic and thermal-hydraulic codes, but this benchmark was the first time these two codes, both developed at VTT, were coupled together. The coupled code system is fast and efficient; the total computation time of the 100-s transient in the third exercise was 16 min on a modern UNIX workstation. The results of all the exercises are similar to those of the other participants. In order to demonstrate the effect of secondary circuit modeling on the results, three different cases were calculated. In case 1 there is no phase separation in the steam lines and no flow reversal in the aspirator. In case 2 the flow reversal in the aspirator is allowed, but there is no phase separation in the steam lines. Finally, in case 3 the drift-flux model is used for the phase separation in the steam lines, but the aspirator flow reversal is not allowed. With these two modeling variations, it is possible to cover a remarkably broad range of results. The maximum power level reached after the reactor trip varies from 534 to 904 MW, with the times of the power maximum spanning close to 30 s. Compared to the total calculated transient time of 100 s, the effect of the secondary-side modeling is extremely important.

  2. Functional results after treatment for rectal cancer

    Directory of Open Access Journals (Sweden)

    Katrine Jossing Emmertsen

    2014-01-01

    Introduction: With improving survival of rectal cancer, functional outcome has become increasingly important. Following sphincter-preserving resection, many patients suffer from severe bowel dysfunction with an impact on quality of life (QoL), referred to as low anterior resection syndrome (LARS). Study objective: To provide an overview of the current knowledge of LARS regarding symptomatology, occurrence, risk factors, pathophysiology, evaluation instruments and treatment options. Results: LARS is characterized by urgency, frequent bowel movements, emptying difficulties and incontinence, and occurs in up to 50-75% of patients on a long-term basis. Known risk factors are a low anastomosis, use of radiotherapy, direct nerve injury and a straight anastomosis. The pathophysiology seems to be multifactorial, with elements of anatomical, sensory and motility dysfunction. Use of validated instruments for evaluation of LARS is essential. Currently, there is a lack of evidence for treatment of LARS, although transanal irrigation and sacral nerve stimulation are promising. Conclusion: LARS is a common problem following sphincter-preserving resection. All patients should be informed about the risk of LARS before surgery and routinely be screened for LARS postoperatively. Patients with severe LARS should be offered treatment in order to improve QoL. Future focus should be on the possibilities of non-resectional treatment in order to prevent LARS.

  3. WWER-1000 Burnup Credit Benchmark (CB5)

    International Nuclear Information System (INIS)

    Manolova, M.A.

    2002-01-01

    In the paper, the specification of the first phase (depletion calculations) of the WWER-1000 Burnup Credit Benchmark is given. The second phase, criticality calculations for the WWER-1000 fuel pin cell, will be defined after evaluation of the results obtained in the first phase. The proposed benchmark is a continuation of the WWER benchmark activities in this field. (Author)

  4. Atlas-based functional radiosurgery: Early results

    Energy Technology Data Exchange (ETDEWEB)

    Stancanello, J.; Romanelli, P.; Pantelis, E.; Sebastiano, F.; Modugno, N. [Politecnico di Milano, Bioengineering Department and NEARlab, Milano, 20133 (Italy) and Siemens AG, Research and Clinical Collaborations, Erlangen, 91052 (Germany); Functional Neurosurgery Department, Neuromed IRCCS, Pozzilli, 86077 (Italy); CyberKnife Center, Iatropolis, Athens, 15231 (Greece); Functional Neurosurgery Department, Neuromed IRCCS, Pozzilli, 86077 (Italy)

    2009-02-15

    Functional disorders of the brain, such as dystonia and neuropathic pain, may respond poorly to medical therapy. Deep brain stimulation (DBS) of the globus pallidus pars interna (GPi) and the centromedian nucleus of the thalamus (CMN) may alleviate dystonia and neuropathic pain, respectively. A noninvasive alternative to DBS is radiosurgical ablation [internal pallidotomy (IP) and medial thalamotomy (MT)]. The main technical limitation of radiosurgery is that targets are selected only on the basis of MRI anatomy, without electrophysiological confirmation. This means that, to be feasible, image-based targeting must be highly accurate and reproducible. Here, we report on the feasibility of an atlas-based approach to targeting for functional radiosurgery. In this method, masks of the GPi, CMN, and medio-dorsal nucleus were nonrigidly registered to patients' T1-weighted MRI (T1w-MRI) and superimposed on patients' T2-weighted MRI (T2w-MRI). Radiosurgical targets were identified on the T2w-MRI registered to the planning CT by an expert functional neurosurgeon. To assess its feasibility, two patients were treated with the CyberKnife using this method of targeting; a patient with dystonia received an IP (120 Gy prescribed to the 65% isodose) and a patient with neuropathic pain received a MT (120 Gy to the 77% isodose). Six months after treatment, T2w-MRIs and contrast-enhanced T1w-MRIs showed edematous regions around the lesions; target placements were reevaluated by DW-MRIs. At 12 months post-treatment steroids for radiation-induced edema and medications for dystonia and neuropathic pain were suppressed. Both patients experienced significant relief from pain and dystonia-related problems. Fifteen months after treatment edema had disappeared. Thus, this work shows promising feasibility of atlas-based functional radiosurgery to improve patient condition. Further investigations are indicated for optimizing treatment dose.

  5. SCWEB, Scientific Workstation Evaluation Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Raffenetti, R C [Computing Services-Support Services Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439 (United States)

    1988-06-16

    1 - Description of program or function: The SCWEB (Scientific Workstation Evaluation Benchmark) software includes 16 programs which are executed in a well-defined scenario to measure the following performance capabilities of a scientific workstation: implementation of FORTRAN77, processor speed, memory management, disk I/O, monitor (or display) output, scheduling of processing (multiprocessing), and scheduling of print tasks (spooling). 2 - Method of solution: The benchmark programs are: DK1, DK2, and DK3, which do Fourier series fitting based on spline techniques; JC1, which checks the FORTRAN function routines that produce numerical results; JD1 and JD2, which solve dense systems of linear equations in double- and single-precision, respectively; JD3 and JD4, which perform matrix multiplication in single- and double-precision, respectively; RB1, RB2, and RB3, which perform substantial amounts of I/O processing on files other than the input and output files; RR1, which does intense single-precision floating-point multiplication in a tight loop; RR2, which initializes a 512x512 integer matrix in a manner which skips around in the address space rather than initializing each consecutive memory cell in turn; RR3, which writes alternating text buffers to the output file; RR4, which evaluates the timer routines and demonstrates that they conform to the specification; and RR5, which determines whether the workstation is capable of executing a 4-megabyte program.
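    The kernels described above (dense linear algebra in JD1-JD4, a tight floating-point loop in RR1) can be sketched generically. The fragment below is an illustrative Python stand-in for the original FORTRAN77 programs, with invented sizes and labels; it is not the SCWEB code itself.

```python
# Illustrative micro-benchmark harness in the spirit of the JD3/JD4 and
# RR1 tests described above (hypothetical sizes; not the SCWEB sources).
import time

def matmul(a, b):
    """Naive dense matrix product, as in the JD3/JD4-style tests."""
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def tight_loop(iterations):
    """RR1-style intense floating-point multiplication in a tight loop."""
    x = 1.0000001
    acc = 1.0
    for _ in range(iterations):
        acc *= x
    return acc

def timed(label, fn, *args):
    """Run fn(*args) once and report wall-clock time."""
    t0 = time.perf_counter()
    result = fn(*args)
    print(f"{label}: {time.perf_counter() - t0:.6f} s")
    return result

n = 50
a = [[float(i + j) for j in range(n)] for i in range(n)]
b = [[float(i - j) for j in range(n)] for i in range(n)]
c = timed("matmul 50x50", matmul, a, b)
acc = timed("tight loop", tight_loop, 100_000)
```

A real benchmark would repeat each kernel and report statistics rather than a single timing, but the structure (kernel plus timing harness) is the same.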

  6. Benchmarking and Performance Measurement.

    Science.gov (United States)

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  7. Final results of the 'Benchmark on computer simulation of radioactive nuclides production rate and heat generation rate in a spallation target'

    International Nuclear Information System (INIS)

    Janczyszyn, J.; Pohorecki, W.; Domanska, G.; Maiorino, R.J.; David, J.C.; Velarde, F.A.

    2011-01-01

    A benchmark has been organized to assess the computer simulation of nuclide production and heat generation in a spallation lead target. The physical models applied for the calculation of thick lead target activation do not produce satisfactory results for the majority of the analysed nuclides, although better or worse quantitative agreement with the experimental results can be observed. Analysis of the quality of the calculated results shows the best performance for heavy nuclides (A: 170-190). For intermediate nuclides (A: 60-130) almost all results are underestimated, while for A: 130-170 they are mainly overestimated. The shape of the activity distribution in the target is well reproduced in the calculations by all models, but the numerical comparison shows similar performance as for the whole target. The Isabel model yields the best results. As for the whole-target heating rate, the results from all participants are consistent, with only small differences between the physical models. The heating distribution in the target, however, is less consistent. The quantitative comparison of the distributions yielded by the different spallation reaction models shows no serious differences over the major part of the target (generally below 10%). However, in the outermost parts of the front layers and in the part of the target beyond the range of the primary protons, a spread higher than 40% is obtained.

  8. Benchmarking in the Netherlands

    International Nuclear Information System (INIS)

    1999-01-01

    Two articles give an overview of benchmarking activities in Dutch industry and the energy sector. In benchmarking, the operational processes of competing businesses are compared in order to improve one's own performance. Benchmark covenants on energy efficiency between the Dutch government and industrial sectors have contributed to a growth in the number of benchmark surveys in the energy-intensive industry in the Netherlands. However, some doubt the effectiveness of the benchmark studies.

  9. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.
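    HS06 is based on SPEC CPU2006, whose per-benchmark performance ratios are aggregated with a geometric mean. Assuming that SPEC-style aggregation, the final score can be sketched as below; the ratios are invented for illustration and are not the paper's measurements.

```python
# Hedged sketch of SPEC-style score aggregation as used by HS06
# (geometric mean of per-benchmark ratios; the values are made up).
import math

def geometric_mean(ratios):
    """Geometric mean of per-benchmark performance ratios."""
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

ratios = [9.8, 10.2, 10.9, 10.5, 10.1, 10.6, 10.7]  # hypothetical values
score = geometric_mean(ratios)
print(f"aggregate score: {score:.1f}")
```

The geometric mean is used because it is insensitive to the choice of reference machine when scores are ratios; an arithmetic mean would not be.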

  10. HS06 benchmark for an ARM server

    International Nuclear Information System (INIS)

    Kluth, Stefan

    2014-01-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  11. Benchmark study of ionization potentials and electron affinities of armchair single-walled carbon nanotubes using density functional theory

    Science.gov (United States)

    Zhou, Bin; Hu, Zhubin; Jiang, Yanrong; He, Xiao; Sun, Zhenrong; Sun, Haitao

    2018-05-01

    The intrinsic parameters of carbon nanotubes (CNTs), such as the ionization potential (IP) and electron affinity (EA), are closely related to their unique properties and associated applications. In this work, we demonstrate the success of an optimal tuning method based on range-separated (RS) density functionals for both accurate and efficient prediction of the vertical IPs and EAs of a series of armchair single-walled carbon nanotubes C20nH20 (n = 2-6), compared to the high-level IP/EA equation-of-motion coupled-cluster method with single and double substitutions (IP/EA-EOM-CCSD). Notably, the resulting frontier orbital energies (-ε_HOMO and -ε_LUMO) from the tuning method provide an excellent approximation to the corresponding IPs and EAs, significantly outperforming other conventional density functionals. In addition, it is suggested that RS density functionals possessing both a fixed amount of exact exchange in the short range and the correct long-range asymptotic behavior are suitable for calculating the electronic structure of finite-sized CNTs. The performance of density functionals in describing various molecular properties, such as chemical potential, hardness and electrophilicity, is then assessed as a function of tube length. Thanks to the efficiency and accuracy of this tuning method, the behavior of much longer armchair single-walled CNTs, up to C200H20, was studied. The present work thus provides an efficient theoretical tool for future materials design and reliable characterization of other interesting properties of CNT-based systems.
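    The optimal tuning procedure referred to above typically chooses the range-separation parameter ω so that the negative HOMO energy matches the ΔSCF ionization potential, i.e. it minimizes J(ω) = |ε_HOMO(ω) + IP(ω)|. A minimal sketch of that criterion, using synthetic model functions in place of real DFT energies:

```python
# Sketch of the IP-tuning criterion for a range-separated functional:
# pick the w minimizing |eps_HOMO(w) + IP(w)| on a grid. The two model
# functions below are synthetic stand-ins, not real DFT or CNT data.
def eps_homo(w):
    """Model HOMO energy (hartree) as a function of w (synthetic)."""
    return -0.28 - 0.05 * w

def ip(w):
    """Model vertical ionization potential (hartree) as a function of w."""
    return 0.30 + 0.01 * w

def tuning_error(w):
    """J(w) = |eps_HOMO(w) + IP(w)|; zero when -eps_HOMO equals the IP."""
    return abs(eps_homo(w) + ip(w))

grid = [i * 0.05 for i in range(21)]   # candidate w values in [0.0, 1.0]
w_opt = min(grid, key=tuning_error)
print(f"optimally tuned w = {w_opt:.2f}")
```

In practice each evaluation of eps_homo and ip requires self-consistent calculations on the N- and (N-1)-electron systems, so the grid is kept coarse.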

  12. Benchmark calculations of excess electrons in water cluster cavities: balancing the addition of atom-centered diffuse functions versus floating diffuse functions.

    Science.gov (United States)

    Zhang, Changzhe; Bu, Yuxiang

    2016-09-14

    Diffuse functions have been proved to be especially crucial for the accurate characterization of excess electrons, which are usually bound weakly in intermolecular zones far away from the nuclei. To examine the effects of diffuse functions on the nature of cavity-shaped excess electrons in water-cluster surroundings, the HOMO and LUMO distributions, vertical detachment energies (VDEs) and visible absorption spectra of two selected (H2O)24(-) isomers are investigated in the present work. Two main types of diffuse functions are considered in the calculations: Pople-style atom-centered diffuse functions and ghost-atom-based floating diffuse functions. It is found that augmentation with atom-centered diffuse functions contributes to a better description of the HOMO (corresponding to VDE convergence), in agreement with previous studies, but also leads to unreasonably diffuse character of the LUMO, with significant red shifts in the visible spectra. This is against the conventional view that the more diffuse functions, the better the results. The design of extra floating functions for excess electrons is also systematically discussed, indicating that floating diffuse functions are necessary not only for reducing the computational cost but also for improving the accuracy of both the HOMO and the LUMO. Thus, basis sets combining partial atom-centered diffuse functions with floating diffuse functions are recommended for a reliable description of weakly bound electrons. This work presents an efficient way to characterize the electronic properties of weakly bound electrons accurately by balancing atom-centered against floating diffuse functions, and computational cost against accuracy; it is therefore very useful for calculations on various solvated-electron and weakly bound anionic systems.

  13. Fort St. Vrain hot functional test results

    International Nuclear Information System (INIS)

    Phelps, R.D.

    1974-01-01

    A description is given of Fort St. Vrain hot functional tests performed to evaluate the initial nonnuclear performance of the primary coolant system and the associated effects on the various internal components of the reactor vessel and primary coolant system. The components included the twelve steam generator modules, the four helium circulators, the PCRV thermal barrier and liner coolant system, the helium purification system, and the primary and secondary closures at each of the PCRV penetrations. Additional objectives included analysis of the parallel operation of the four helium circulators and the performance of several circulator start/stop transients under various conditions of primary coolant temperature and pressure. Vibration and acoustical phenomena within the vessel were measured, recorded, and compared to theoretical analyses; a verification of reverse flow in the shutdown loop steam generator during one loop operation was performed; the PCRV was again observed for its structural response to internal pressure; and comparisons were made relative to data recorded during the initial pressure test completed in July 1971. (U.S.)

  14. Byblos Speech Recognition Benchmark Results

    National Research Council Canada - National Science Library

    Kubala, F; Austin, S; Barry, C; Makhoul, J; Placeway, P; Schwartz, R

    1991-01-01

    .... Surprisingly, the 12-speaker model performs as well as the one made from 109 speakers. Also within the RM domain, we demonstrate that state-of-the-art SI models perform poorly for speakers with strong dialects...

  15. Benchmark Analysis for Condition Monitoring Test Techniques of Aged Low Voltage Cables in Nuclear Power Plants. Final Results of a Coordinated Research Project

    International Nuclear Information System (INIS)

    2017-10-01

    This publication provides information and guidelines on how to monitor the performance of insulation and jacket materials of existing cables and establish a programme of cable degradation monitoring and ageing management for operating reactors and the next generation of nuclear facilities. This research was done through a coordinated research project (CRP) with participants from 17 Member States. This group of experts compiled the current knowledge in a report, together with areas of future research and development, covering ageing mechanisms and means to identify and manage the consequences of ageing. They established a benchmarking programme using cable samples aged under thermal and/or radiation conditions, and tested before and after ageing by various methods and organizations. In particular, 12 types of cable insulation or jacket material were tested, each using 14 different condition monitoring techniques. Not all condition monitoring techniques yield usable and traceable results. Techniques such as elongation at break, indenter modulus, oxidation induction time and oxidation induction temperature were found to work reasonably well for degradation trending of all materials. However, other condition monitoring techniques, such as insulation resistance, were only partially successful on some cables, and other methods, such as ultrasonic testing or tan δ, were either unsuccessful or failed to provide reliable information to qualify the method for degradation trending or ageing assessment of cables. The electrical in situ tests did not show great promise for cable degradation trending or ageing assessment, although these methods are known to be very effective for finding and locating faults in cable insulation material. In particular, electrical methods such as insulation resistance and reflectometry techniques are known to be rather effective for locating insulation damage, hot spots or other faults in essentially all cable types.
The advantage of electrical methods is that they can be

  16. Full dimensional (15-dimensional) quantum-dynamical simulation of the protonated water-dimer III: Mixed Jacobi-valence parametrization and benchmark results for the zero point energy, vibrationally excited states, and infrared spectrum.

    Science.gov (United States)

    Vendrell, Oriol; Brill, Michael; Gatti, Fabien; Lauvergnat, David; Meyer, Hans-Dieter

    2009-06-21

    Quantum dynamical calculations are reported for the zero point energy, several low-lying vibrational states, and the infrared spectrum of the H(5)O(2)(+) cation. The calculations are performed by the multiconfiguration time-dependent Hartree (MCTDH) method. A new vector parametrization based on a mixed Jacobi-valence description of the system is presented. With this parametrization the potential energy surface coupling is reduced with respect to a full Jacobi description, providing a better convergence of the n-mode representation of the potential. However, new coupling terms appear in the kinetic energy operator. These terms are derived and discussed. A mode-combination scheme based on six combined coordinates is used, and the representation of the 15-dimensional potential in terms of a six-combined mode cluster expansion including up to some 7-dimensional grids is discussed. A statistical analysis of the accuracy of the n-mode representation of the potential at all orders is performed. Benchmark, fully converged results are reported for the zero point energy, which lie within the statistical uncertainty of the reference diffusion Monte Carlo result for this system. Some low-lying vibrationally excited eigenstates are computed by block improved relaxation, illustrating the applicability of the approach to large systems. Benchmark calculations of the linear infrared spectrum are provided, and convergence with increasing size of the time-dependent basis and as a function of the order of the n-mode representation is studied. The calculations presented here make use of recent developments in the parallel version of the MCTDH code, which are briefly discussed. We also show that the infrared spectrum can be computed, to a very good approximation, within D(2d) symmetry, instead of the G(16) symmetry used before, in which the complete rotation of one water molecule with respect to the other is allowed, thus simplifying the dynamical problem.
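    The n-mode representation discussed above truncates a high-dimensional potential into a hierarchy of low-dimensional cuts, V ≈ V0 + Σ_i V_i(q_i) + Σ_{i<j} V_ij(q_i, q_j) + …, where each correction term is defined by subtracting all lower-order terms. A generic 2-mode sketch on a toy three-coordinate potential (not the 15-dimensional H5O2+ surface):

```python
# Generic 2-mode (n-mode) representation of a potential, illustrated on a
# toy 3-coordinate function; not the MCTDH implementation or the real PES.
import itertools

def nmode_2(V, q):
    """2-mode representation of V at point q (reference geometry = origin)."""
    n = len(q)
    zero = [0.0] * n

    def cut(active):
        """Evaluate V with only the 'active' coordinates switched on."""
        p = list(zero)
        for i in active:
            p[i] = q[i]
        return V(p)

    v0 = V(zero)
    v1 = {i: cut([i]) - v0 for i in range(n)}              # 1-mode terms
    v2 = {(i, j): cut([i, j]) - v0 - v1[i] - v1[j]         # 2-mode terms
          for i, j in itertools.combinations(range(n), 2)}
    return v0 + sum(v1.values()) + sum(v2.values())

# Toy potential: separable parts, a 2-mode coupling, and a weak genuine
# 3-mode coupling that the 2-mode representation cannot capture.
def V(p):
    q1, q2, q3 = p
    return 0.5*q1**2 + 0.5*q2**2 + 0.5*q3**2 + 0.1*q1*q2 + 0.01*q1*q2*q3

q = [0.3, -0.2, 0.5]
approx = nmode_2(V, q)
exact = V(q)
print(f"exact={exact:.6f}  2-mode={approx:.6f}  error={exact-approx:.6f}")
```

The residual error here is exactly the 3-mode term 0.01·q1·q2·q3, which a 3-mode expansion would recover; the trade-off between expansion order and the cost of the grids is the convergence study described in the abstract.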

  17. Benchmarking DFT and TD-DFT Functionals for the Ground and Excited States of Hydrogen-Rich Peptide Radicals.

    Science.gov (United States)

    Riffet, Vanessa; Jacquemin, Denis; Cauët, Emilie; Frison, Gilles

    2014-08-12

We assess the pros and cons of a large panel of DFT exchange-correlation functionals for the prediction of the electronic structure of hydrogen-rich peptide radicals formed after electron attachment on a protonated peptide. Indeed, despite its importance in the understanding of the chemical changes associated with the reduction step, the question of the attachment site of an electron and, more generally, of the reduced species formed in the gas phase through electron-induced dissociation (ExD) processes in mass spectrometry is still a matter of debate. For hydrogen-rich peptide radicals in which several positive groups and low-lying π* orbitals can capture the incoming electron in ExD, inclusion of full Hartree-Fock exchange at long-range interelectronic distance is a prerequisite for an accurate description of the electronic states, thereby excluding several popular exchange-correlation functionals, e.g., B3LYP, M06-2X, or CAM-B3LYP. However, we show that this condition is not sufficient by comparing the results obtained with asymptotically correct range-separated hybrids (M11, LC-BLYP, LC-BPW91, ωB97, ωB97X, and ωB97X-D) and with reference CASSCF-MRCI and EOM-CCSD calculations. The attenuation parameter ω significantly tunes the spin density distribution and the excited-state vertical energies. The investigated model structures, ranging from methylammonium to a hexapeptide, allow us to obtain a description of the nature and energy of the electronic states, depending on (i) the presence of hydrogen bond(s) around the cationic site(s), (ii) the presence of π* molecular orbitals (MOs), and (iii) the selected DFT approach. It turns out that, in the present framework, LC-BLYP and ωB97 yield the most accurate results.
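The role of the attenuation parameter ω can be seen directly in the error-function splitting of the Coulomb operator used by range-separated hybrids such as LC-BLYP and ωB97. A small illustration (the value of ω in any example call is arbitrary, not a recommended setting):

```python
import math

def coulomb_split(r, omega):
    """Split 1/r into a short-range piece, handled by DFT exchange,
    and a long-range piece, handled by exact (HF) exchange."""
    sr = math.erfc(omega * r) / r   # short range: erfc(omega*r)/r
    lr = math.erf(omega * r) / r    # long range:  erf(omega*r)/r
    return sr, lr
```

Because erf(ωr) tends to 1 at large r, the long-range piece carries the full 1/r asymptotically, which is exactly the "full Hartree-Fock exchange at long-range interelectronic distance" requirement the abstract describes.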

  18. Density functional for van der Waals forces accounts for hydrogen bond in benchmark set of water hexamers

    DEFF Research Database (Denmark)

    Kelkkanen, Kari André; Lundqvist, Bengt; Nørskov, Jens Kehlet

    2009-01-01

    A recent extensive study has investigated how various exchange-correlation (XC) functionals treat hydrogen bonds in water hexamers and has shown traditional generalized gradient approximation and hybrid functionals used in density-functional (DF) theory to give the wrong dissociation-energy trend...... of low-lying isomers and van der Waals (vdW) dispersion forces to give key contributions to the dissociation energy. The question raised whether functionals that incorporate vdW forces implicitly into the XC functional predict the correct lowest-energy structure for the water hexamer and yield accurate...

  19. Transport benchmarks for one-dimensional binary Markovian mixtures revisited

    International Nuclear Information System (INIS)

    Malvagi, F.

    2013-01-01

    The classic benchmarks for transport through a binary Markovian mixture are revisited to look at the probability distribution function of the chosen 'results': reflection, transmission and scalar flux. We argue that the knowledge of the ensemble averaged results is not sufficient for reliable predictions: a measure of the dispersion must also be obtained. An algorithm to estimate this dispersion is tested. (author)
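The point about dispersion can be illustrated with a toy Monte Carlo over realizations of a binary mixture. The cross sections, mixing probabilities, and the i.i.d. chunk sampling below are illustrative stand-ins, not the Markovian segment statistics of the classic benchmarks:

```python
import math
import random

def sample_transmission(rng, sig=(0.1, 1.0), p=(0.5, 0.5),
                        slab=10.0, n_chunks=100):
    """Uncollided transmission through one realization of a binary mixture:
    the slab is split into chunks whose material is drawn independently
    (a crude stand-in for sampling Markovian segment lengths)."""
    dx = slab / n_chunks
    tau = sum(rng.choices(sig, weights=p)[0] * dx for _ in range(n_chunks))
    return math.exp(-tau)

def ensemble_stats(n_real=2000, seed=1):
    """Ensemble mean and standard deviation of the transmission."""
    rng = random.Random(seed)
    ts = [sample_transmission(rng) for _ in range(n_real)]
    mean = sum(ts) / n_real
    var = sum((t - mean) ** 2 for t in ts) / (n_real - 1)
    return mean, math.sqrt(var)
```

The standard deviation returned alongside the mean is the kind of dispersion measure the author argues must accompany ensemble-averaged results.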

  20. Benchmark calculations with correlated molecular wave functions. IX. The weakly bound complexes Ar - H2 and Ar - HCl

    International Nuclear Information System (INIS)

    Woon, D.E.; Peterson, K.A.; Dunning, T.H. Jr.

    1998-01-01

The interaction of Ar with H2 and HCl has been studied using Møller-Plesset perturbation theory (MP2, MP3, MP4) and coupled-cluster [CCSD, CCSD(T)] methods with augmented correlation consistent basis sets. Basis sets as large as triply augmented quadruple zeta quality were used to investigate the convergence trends. Interaction energies were determined using the supermolecule approach with the counterpoise correction to account for basis set superposition error. Comparison with the available empirical potentials finds excellent agreement for both the binding energies and the transition states. For Ar-H2, the estimated complete basis set (CBS) limits for the binding energies of the two equivalent minima and the connecting transition state (TS) are 55 and 47 cm⁻¹ at the MP4 level and 54 and 46 cm⁻¹ at the CCSD(T) level, respectively [the XC(fit) empirical potential of Bissonnette et al. [J. Chem. Phys. 105, 2639 (1996)] yields 56.6 and 47.8 cm⁻¹ for H2 (v=0)]. The estimated CBS limits for the binding energies of the two minima and transition state of Ar-HCl are 185, 155, and 109 cm⁻¹ at the MP4 level and 176, 147, and 105 cm⁻¹ at the CCSD(T) level, respectively [the H6(4,3,0) empirical potential of Hutson [J. Phys. Chem. 96, 4237 (1992)] yields 176.0, 148.3, and 103.3 cm⁻¹ for HCl (v=0)]. Basis sets containing diffuse functions of (dfg) symmetries were found to be essential for accurately modeling these two complexes, which are largely bound by dispersion and induction forces. Highly correlated wave functions were also required for accurate results. This was found to be particularly true for Ar-HCl, where significant differences in calculated binding energies were observed between MP2, MP4, and CCSD(T). copyright 1998 American Institute of Physics
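Two ingredients of the methodology above have compact closed forms: the counterpoise-corrected interaction energy and a complete-basis-set estimate. The sketch assumes the common two-point X⁻³ extrapolation formula; any numeric inputs used with these helpers are placeholders, not values from the paper:

```python
def cp_interaction_energy(e_dimer, e_a_dimer_basis, e_b_dimer_basis):
    """Counterpoise-corrected interaction energy: both monomers are
    evaluated in the full dimer basis so that BSSE largely cancels."""
    return e_dimer - e_a_dimer_basis - e_b_dimer_basis

def cbs_two_point(e_x, e_xm1, x):
    """Two-point X**-3 extrapolation of correlation energies from
    cardinal numbers x-1 and x (Helgaker-style formula)."""
    return (x**3 * e_x - (x - 1)**3 * e_xm1) / (x**3 - (x - 1)**3)
```

If the basis-set error really decays as A/X³, the extrapolation recovers the CBS limit exactly from two consecutive cardinal numbers, which is why quadruple-zeta-quality sets suffice for the CBS estimates quoted above.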

  1. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  2. Benchmarking in Czech Higher Education

    Directory of Open Access Journals (Sweden)

    Plaček Michal

    2015-12-01

Full Text Available The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expressed need for it, and the importance of benchmarking as a very suitable performance-management tool in less developed countries, are the impetus for the second part of our article. Based on an analysis of the current situation and existing needs in the Czech Republic, as well as on a comparison with international experience, recommendations for public policy are made, which lie in the design of a model of collaborative benchmarking for Czech economics and management higher-education programs. Because the fully complex model cannot be implemented immediately (as also confirmed by structured interviews with academics who have practical experience with benchmarking), the final model is designed as a multi-stage model. This approach helps eliminate major barriers to the implementation of benchmarking.

  3. The CEC benchmark interclay on rheological models for clays results of pilot phase (January-June 1989) about the boom clay at Mol (B)

    International Nuclear Information System (INIS)

    Come, B.

    1990-01-01

A pilot phase of a benchmark exercise for rheological models of Boom Clay, called INTERCLAY, was launched by the CEC in January 1989. The purpose of the benchmark is to compare predictions of calculations made for well-defined rock-mechanical problems, similar to real cases at the Mol facilities, using existing data from laboratory tests on samples. Basically, two approaches were to be compared: one considering clay as an elasto-visco-plastic medium (rock-mechanics approach), and one isolating the role of pore-pressure dissipation (soil-mechanics approach)

  4. Benchmarking for Higher Education.

    Science.gov (United States)

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  5. Results of a benchmark study for the seismic analysis and testing of WWER type NPPs: Overview and general comparison for Paks NPP

    International Nuclear Information System (INIS)

    Guerpinar, A.; Zola, M.

    2001-01-01

Within the framework of the IAEA coordinated 'Benchmark Study for the seismic analysis and testing of WWER-type NPPs', in-situ dynamic structural testing activities have been performed at the Paks Nuclear Power Plant in Hungary. The specific objective of the investigation was to obtain experimental data on the actual dynamic structural behaviour of the plant's major constructions and equipment under normal operating conditions, for enabling a valid seismic safety review to be made. This paper reports on the comparison of the results obtained from the experimental activities performed by ISMES with those coming from analytical studies performed for the Coordinated Research Programme (CRP) by Siemens (Germany), EQE (Bulgaria), Central Laboratory (Bulgaria), M. David Consulting (Czech Republic), and IVO (Finland). This paper gives a synthetic description of the conducted experiments and presents some results, concerning in particular the free-field excitations produced during the earthquake-simulation experiments and the global dynamic soil-structure interaction effects observed at the base of the reactor containment structure. The specific objective of the experimental investigation was to obtain valid data on the dynamic behaviour of the plant's major constructions, under normal operating conditions, to support the analytical assessment of their actual seismic safety. The full-scale dynamic structural testing activities were performed in December 1994 at the Paks (H) Nuclear Power Plant. The Paks NPP site was subjected to low-level earthquake-like ground shaking, through appropriately devised underground explosions, and the dynamic response of the important structures of the plant's 1st reactor unit was appropriately measured and digitally recorded, with the whole nuclear power plant under normal operating conditions. In-situ free-field response was measured concurrently and, moreover, site-specific geophysical and seismological data were simultaneously

  6. Benchmarking of AREVA BWR FDIC-PEZOG model against first BFE3 cycle 15 application of On-Line NobleChem results

    International Nuclear Information System (INIS)

    Pop, M.G.; Lamanna, L.S.; Hoornik, A.; Storey, G.C.; Lemons, J.F.

    2015-01-01

The combination of AREVA's BWR FDIC-PEZOG tools allows the calculation of the total liftoff as a measure of fuel performance and a risk indicator for fuel reliability. The AREVA BWR FDIC tool is a crud modeling tool. The PEZOG tool models the platinum-enhanced zirconium oxide growth of fuel cladding when exposed to platinum during operation. The continuous effort to improve the tools used for total liftoff calculations is illustrated by benchmarking them after the application of On-Line NobleChem™ at TVA Browns Ferry Unit 3 during Cycle 15. A set of runs using the modified FDIC-PEZOG model and actual plant water chemistry for Cycle 15 and partial data for Cycle 16 were performed. The updated deposit thickness and deposit composition predictions for EOC15 were compared to the measured EOC15 data and are presented in this paper. The updated predicted deposit thickness matched the actual, measured value exactly. Predicted deposit composition near the fuel rod boundary, nearer to the bulk reactor water, and as an averaged deposit, as presented in the paper, compared extremely well with the measured data at EOC15. The updated AREVA methodology resulted in lower fuel oxide thickness predictions over the life of the fuel as compared to the initial evaluations for BFE3 by incorporating more recent experimental data on the thermal conductivity of zirconia; unnecessary conservatism in the prediction of the fuel oxide thickness over the life of the fuel was removed in the improved model. (authors)

  7. BWR stability analysis: methodology of the stability analysis and results of PSI for the NEA/NCR benchmark task; SWR Stabilitaetsanalyse: Methodik der Stabilitaetsanalyse und PSI-Ergebnisse zur NEA/NCR Benchmarkaufgabe

    Energy Technology Data Exchange (ETDEWEB)

    Hennig, D.; Nechvatal, L. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1996-09-01

    The report describes the PSI stability analysis methodology and the validation of this methodology based on the international OECD/NEA BWR stability benchmark task. In the frame of this work, the stability properties of some operation points of the NPP Ringhals 1 have been analysed and compared with the experimental results. (author) figs., tabs., 45 refs.

  8. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  9. Analysis of a molten salt reactor benchmark

    International Nuclear Information System (INIS)

    Ghosh, Biplab; Bajpai, Anil; Degweker, S.B.

    2013-01-01

    This paper discusses results of our studies of an IAEA molten salt reactor (MSR) benchmark. The benchmark, proposed by Japan, involves burnup calculations of a single lattice cell of a MSR for burning plutonium and other minor actinides. We have analyzed this cell with in-house developed burnup codes BURNTRAN and McBURN. This paper also presents a comparison of the results of our codes and those obtained by the proposers of the benchmark. (author)

  10. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    Full Text Available In the face of global competition and rising challenges that higher education institutions (HEIs meet, it is imperative to increase innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess institution’s competitive position and learn from the best in order to improve. The primary purpose of the paper is to present in-depth analysis of benchmarking application in HEIs worldwide. The study involves indicating premises of using benchmarking in HEIs. It also contains detailed examination of types, approaches and scope of benchmarking initiatives. The thorough insight of benchmarking applications enabled developing classification of benchmarking undertakings in HEIs. The paper includes review of the most recent benchmarking projects and relating them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity. The presented examples were chosen in order to exemplify different approaches to benchmarking in higher education setting. The study was performed on the basis of the published reports from benchmarking projects, scientific literature and the experience of the author from the active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  11. Core Benchmarks Descriptions

    International Nuclear Information System (INIS)

    Pavlovichev, A.M.

    2001-01-01

Current regulations require that the design of new fuel cycles for nuclear power installations be supported by a calculational justification performed with certified computer codes. This guarantees that the calculational results will remain within the limits of the declared uncertainties indicated in the certificate issued by Gosatomnadzor of the Russian Federation (GAN) for the corresponding computer code. A formal justification of the declared uncertainties is the comparison of calculational results obtained by a commercial code with the results of experiments, or with calculational tests computed, to a defined uncertainty, by certified precision codes of the MCU type or similar. The current level of international cooperation is enlarging the bank of experimental and calculational benchmarks acceptable for the certification of commercial codes used to design fuel loadings with MOX fuel. In particular, work is practically finished on forming the list of calculational benchmarks for the certification of the TVS-M code as applied to MOX fuel assembly calculations. The results of these activities are presented

  12. Criteria of benchmark selection for efficient flexible multibody system formalisms

    Directory of Open Access Journals (Sweden)

    Valášek M.

    2007-10-01

    Full Text Available The paper deals with the selection process of benchmarks for testing and comparing efficient flexible multibody formalisms. The existing benchmarks are briefly summarized. The purposes for benchmark selection are investigated. The result of this analysis is the formulation of the criteria of benchmark selection for flexible multibody formalisms. Based on them the initial set of suitable benchmarks is described. Besides that the evaluation measures are revised and extended.

  13. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    Computer simulation of nuclear power plant response can be a full-scope control room simulator, an engineering simulator to represent the general behavior of the plant under normal and abnormal conditions, or the modeling of the plant response to conditions that would eventually lead to core damage. In any of these, the underlying foundation for their use in analysing situations, training of vendor/utility personnel, etc. is how well they represent what has been known from industrial experience, large integral experiments and separate effects tests. Typically, simulation codes are benchmarked with some of these; the level of agreement necessary being dependent upon the ultimate use of the simulation tool. However, these analytical models are computer codes, and as a result, the capabilities are continually enhanced, errors are corrected, new situations are imposed on the code that are outside of the original design basis, etc. Consequently, there is a continual need to assure that the benchmarks with important transients are preserved as the computer code evolves. Retention of this benchmarking capability is essential to develop trust in the computer code. Given the evolving world of computer codes, how is this retention of benchmarking capabilities accomplished? For the MAAP4 codes this capability is accomplished through a 'dynamic benchmarking' feature embedded in the source code. In particular, a set of dynamic benchmarks are included in the source code and these are exercised every time the archive codes are upgraded and distributed to the MAAP users. Three different types of dynamic benchmarks are used: plant transients; large integral experiments; and separate effects tests. Each of these is performed in a different manner. The first is accomplished by developing a parameter file for the plant modeled and an input deck to describe the sequence; i.e. the entire MAAP4 code is exercised. The pertinent plant data is included in the source code and the computer
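The idea of dynamic benchmarks embedded in a code base and re-run on every release can be sketched as a small regression harness. The function names and tolerance below are hypothetical illustrations, not MAAP4 internals:

```python
def run_dynamic_benchmarks(simulate, benchmarks, tol=0.05):
    """Re-run stored benchmark cases against the current code version and
    flag any case whose result drifts from the archived reference by more
    than a relative tolerance (default 5%)."""
    failures = []
    for name, (inputs, reference) in benchmarks.items():
        result = simulate(**inputs)
        if abs(result - reference) > tol * abs(reference):
            failures.append(name)
    return failures
```

Running such a harness on every archive upgrade is what preserves trust in the code as models are enhanced, errors corrected, and new situations imposed.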

  14. Zn Coordination Chemistry:  Development of Benchmark Suites for Geometries, Dipole Moments, and Bond Dissociation Energies and Their Use To Test and Validate Density Functionals and Molecular Orbital Theory.

    Science.gov (United States)

    Amin, Elizabeth A; Truhlar, Donald G

    2008-01-01

We present nonrelativistic and relativistic benchmark databases (obtained by coupled cluster calculations) of 10 Zn-ligand bond distances, 8 dipole moments, and 12 bond dissociation energies in Zn coordination compounds with O, S, NH3, H2O, OH, SCH3, and H ligands. These are used to test the predictions of 39 density functionals, Hartree-Fock theory, and seven more approximate molecular orbital theories. In the nonrelativistic case, the M05-2X, B97-2, and mPW1PW functionals emerge as the most accurate ones for this test data, with unitless balanced mean unsigned errors (BMUEs) of 0.33, 0.38, and 0.43, respectively. The best local functionals (i.e., functionals with no Hartree-Fock exchange) are M06-L and τ-HCTH with BMUEs of 0.54 and 0.60, respectively. The popular B3LYP functional has a BMUE of 0.51, only slightly better than the value of 0.54 for the best local functional, which is less expensive. Hartree-Fock theory itself has a BMUE of 1.22. The M05-2X functional has a mean unsigned error of 0.008 Å for bond lengths, 0.19 D for dipole moments, and 4.30 kcal/mol for bond energies. The X3LYP functional has a smaller mean unsigned error (0.007 Å) for bond lengths but has mean unsigned errors of 0.43 D for dipole moments and 5.6 kcal/mol for bond energies. The M06-2X functional has a smaller mean unsigned error (3.3 kcal/mol) for bond energies but has mean unsigned errors of 0.017 Å for bond lengths and 0.37 D for dipole moments. The best of the semiempirical molecular orbital theories are PM3 and PM6, with BMUEs of 1.96 and 2.02, respectively. The ten most accurate functionals from the nonrelativistic benchmark analysis are then tested in relativistic calculations against new benchmarks obtained with coupled-cluster calculations and a relativistic effective core potential, resulting in M05-2X (BMUE = 0.895), PW6B95 (BMUE = 0.90), and B97-2 (BMUE = 0.93) as the top three functionals. We find significant relativistic effects (∼0.01 Å in bond lengths, ∼0
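A unitless balanced MUE of the kind reported above can be formed by scaling each property's mean unsigned error by a property-specific reference value before averaging; the scale factors in any call are assumptions for illustration, since the paper's exact normalization is not reproduced here:

```python
def mue(errors):
    """Mean unsigned error of a list of signed errors."""
    return sum(abs(e) for e in errors) / len(errors)

def balanced_mue(prop_errors, scales):
    """Unitless balanced MUE: per-property MUEs are each divided by a
    property scale (so Angstroms, Debye, and kcal/mol can be mixed)
    and then averaged over properties."""
    return sum(mue(errs) / scales[name]
               for name, errs in prop_errors.items()) / len(prop_errors)
```

With scales chosen equal to each property's MUE, the balanced value is 1 by construction, which makes values like 0.33 directly interpretable as "about a third of the reference error".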

  15. Information Literacy and Office Tool Competencies: A Benchmark Study

    Science.gov (United States)

    Heinrichs, John H.; Lim, Jeen-Su

    2010-01-01

    Present information science literature recognizes the importance of information technology to achieve information literacy. The authors report the results of a benchmarking student survey regarding perceived functional skills and competencies in word-processing and presentation tools. They used analysis of variance and regression analysis to…

  16. High Energy Physics (HEP) benchmark program

    International Nuclear Information System (INIS)

    Yasu, Yoshiji; Ichii, Shingo; Yashiro, Shigeo; Hirayama, Hideo; Kokufuda, Akihiro; Suzuki, Eishin.

    1993-01-01

High Energy Physics (HEP) benchmark programs are indispensable tools for selecting suitable computers for HEP application systems. Industry-standard benchmark programs cannot be used for this particular kind of selection. The CERN and SSC benchmark suites are well-known HEP benchmark programs for this purpose. The CERN suite includes event reconstruction and event generator programs, while the SSC suite includes event generators. In this paper, we find that the results from these two suites are not consistent, and that the result from the industry benchmark does not agree with either of them. We also compare benchmark results from the EGS4 Monte Carlo simulation program with those from the two HEP benchmark suites, and find that the EGS4 results are not consistent with either. Industry-standard SPECmark values on various computer systems are not consistent with the EGS4 results either. Because of these inconsistencies, we point out the necessity of standardizing HEP benchmark suites. Also, an EGS4 benchmark suite should be developed for users of applications such as medical science, nuclear power plants, nuclear physics, and high energy physics. (author)

  17. Selected examples on multi physics researches at KFKI AEKI-results for phase I of the OECD/NEA UAM benchmark

    International Nuclear Information System (INIS)

    Panka, I.; Kereszturi, A.; Maraczy, C.

    2010-01-01

Nowadays, there is a tendency to use best-estimate-plus-uncertainty methods in the field of nuclear energy. This implies the application of best-estimate code systems and the determination of the corresponding uncertainties. For the latter, an OECD benchmark was set up. The objective of the OECD/NEA Uncertainty Analysis in Best-Estimate Modeling (UAM) LWR benchmark is to determine the uncertainties of coupled reactor physics/thermal hydraulics LWR calculations at all stages. This paper presents the AEKI participation in Phase I, which deals with evaluating the uncertainties of the neutronic calculations, from pin-cell spectral calculations up to stand-alone neutronics core simulations. (Authors)

  18. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html

  19. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  20. 3-D neutron transport benchmarks

    International Nuclear Information System (INIS)

    Takeda, T.; Ikeda, H.

    1991-03-01

A set of 3-D neutron transport benchmark problems proposed by the Osaka University to NEACRP in 1988 has been calculated by many participants and the corresponding results are summarized in this report. The results for k_eff, control rod worth, and region-averaged fluxes for the four proposed core models, calculated using various 3-D transport codes, are compared and discussed. The calculational methods used were: Monte Carlo, Discrete Ordinates (Sn), Spherical Harmonics (Pn), Nodal Transport and others. The solutions of the four core models are quite useful as benchmarks for checking the validity of 3-D neutron transport codes

  1. Strategic behaviour under regulatory benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Jamasb, T. [Cambridge Univ. (United Kingdom). Dept. of Applied Economics; Nillesen, P. [NUON NV (Netherlands); Pollitt, M. [Cambridge Univ. (United Kingdom). Judge Inst. of Management

    2004-09-01

In order to improve the efficiency of electricity distribution networks, some regulators have adopted incentive regulation schemes that rely on performance benchmarking. Although regulatory benchmarking can influence the "regulation game," the subject has received limited attention. This paper discusses how strategic behaviour by firms can result in inefficient outcomes. We then use the Data Envelopment Analysis (DEA) method with US utility data to examine implications of illustrative cases of strategic behaviour reported by regulators. The results show that gaming can have significant effects on the measured performance and profitability of firms. (author)
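The DEA method referenced above scores each firm against an efficient frontier built from the other firms. For a single input and a single output, the input-oriented CCR efficiency collapses to a ratio against the best observed productivity; the sketch below covers only this degenerate case (with made-up data in any example call), while the general multi-input case requires solving a linear program per firm:

```python
def ccr_efficiency(inputs, outputs):
    """Input-oriented CCR efficiency for one input and one output:
    each unit's productivity (output/input) divided by the best
    observed productivity. Frontier units score exactly 1.0."""
    productivity = [y / x for x, y in zip(inputs, outputs)]
    best = max(productivity)
    return [p / best for p in productivity]
```

A firm engaging in the strategic behaviour the paper studies could, for instance, inflate its reported input in a base year, which shifts the frontier and raises every rival's apparent inefficiency.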

  2. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.

    1991-01-01

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems

  3. Benchmarks for dynamic multi-objective optimisation

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    Full Text Available When algorithms solve dynamic multi-objective optimisation problems (DMOOPs), benchmark functions should be used to determine whether the algorithm can overcome specific difficulties that can occur in real-world problems. However, for dynamic multi...

  4. First 5 tower WIMP-search results from the Cryogenic Dark Matter Search with improved understanding of neutron backgrounds and benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Hennings-Yeomans, Raul [Case Western Reserve Univ., Cleveland, OH (United States)

    2009-02-01

Non-baryonic dark matter makes up one quarter of the energy density of the Universe and is concentrated in the halos of galaxies, including the Milky Way. The Weakly Interacting Massive Particle (WIMP) is a dark matter candidate with a scattering cross section with an atomic nucleus of the order of the weak interaction and a mass comparable to that of an atomic nucleus. The Cryogenic Dark Matter Search (CDMS-II) experiment, using Ge and Si cryogenic particle detectors at the Soudan Underground Laboratory, aims to directly detect nuclear recoils from WIMP interactions. This thesis presents the first 5 tower WIMP-search results from CDMS-II, an estimate of the cosmogenic neutron backgrounds expected at the Soudan Underground Laboratory, and a proposal for a new measurement of high-energy neutrons underground to benchmark the Monte Carlo simulations. Based on the non-observation of WIMPs and using standard assumptions about the galactic halo [68], the 90% C.L. upper limit on the spin-independent WIMP-nucleon cross section from the first 5 tower run is 6.6 × 10⁻⁴⁴ cm² for a 60 GeV/c² WIMP mass. A combined limit using all the data taken at Soudan results in an upper limit of 4.6 × 10⁻⁴⁴ cm² at 90% C.L. for a 60 GeV/c² WIMP mass. This new limit corresponds to a factor of ~3 improvement over any previous CDMS-II limit and, above 60 GeV/c², is a factor of ~2 better than any other WIMP search to date. This thesis presents an estimation, based on Monte Carlo simulations, of the nuclear recoils produced by cosmic-ray muons and their secondaries (at the Soudan site) for a 5 tower Ge and Si configuration as well as for a 7 supertower array. The results of the Monte Carlo are that CDMS-II should expect 0.06 ± 0.02 +0.18/−0.02 unvetoed single nuclear recoils in Ge per kg-year for the 5 tower configuration, and 0.05 ± 0.01 +0.15/−0.02 per kg-year for the 7 supertower configuration. The systematic error is based on the available
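The upper limits above come from the full CDMS-II analysis chain; the elementary statistical ingredient behind a limit from non-observation, however, is the classical Poisson counting bound. A sketch under simplifying assumptions (zero observed events, no background subtraction, and a purely hypothetical conversion factor to a cross section):

```python
import math

def poisson_zero_event_limit(cl=0.90):
    """Upper limit on a Poisson mean when zero events are observed:
    P(0 | mu) = exp(-mu) = 1 - cl, hence mu = -ln(1 - cl)."""
    return -math.log(1.0 - cl)

def cross_section_limit(mu_limit, exposure_kg_days, efficiency, rate_per_pb):
    """Hypothetical conversion: if a 1 pb cross section would produce
    `rate_per_pb` events per kg-day at unit efficiency, the event limit
    translates into a cross-section limit (illustrative only)."""
    return mu_limit / (exposure_kg_days * efficiency * rate_per_pb)
```

For a 90% C.L. the zero-event limit is about 2.3 expected events; a larger exposure with the same null result then pushes the cross-section limit down in direct proportion.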

  5. ABM11 parton distributions and benchmarks

    International Nuclear Information System (INIS)

    Alekhin, Sergey; Bluemlein, Johannes; Moch, Sven-Olaf

    2012-08-01

    We present a determination of the nucleon parton distribution functions (PDFs) and of the strong coupling constant α_s at next-to-next-to-leading order (NNLO) in QCD, based on the world data for deep-inelastic scattering and the fixed-target data for the Drell-Yan process. The analysis is performed in the fixed-flavor number scheme for n_f = 3, 4, 5 and uses the MS-bar scheme for α_s and the heavy quark masses. The fit results are compared with other PDFs and used to compute benchmark cross sections at hadron colliders to NNLO accuracy.

  6. ABM11 parton distributions and benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Alekhin, Sergey [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institut Fiziki Vysokikh Ehnergij, Protvino (Russian Federation); Bluemlein, Johannes; Moch, Sven-Olaf [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2012-08-15

    We present a determination of the nucleon parton distribution functions (PDFs) and of the strong coupling constant α_s at next-to-next-to-leading order (NNLO) in QCD based on the world data for deep-inelastic scattering and the fixed-target data for the Drell-Yan process. The analysis is performed in the fixed-flavor number scheme for n_f = 3, 4, 5 and uses the MS-bar scheme for α_s and the heavy quark masses. The fit results are compared with other PDFs and used to compute the benchmark cross sections at hadron colliders to NNLO accuracy.

  7. Benchmarking Academic Anatomic Pathologists

    Directory of Open Access Journals (Sweden)

    Barbara S. Ducatman MD

    2016-10-01

    The most common benchmarks for faculty productivity are derived from Medical Group Management Association (MGMA) or Vizient-AAMC Faculty Practice Solutions Center® (FPSC) databases. The Association of Pathology Chairs has also collected similar survey data for several years. We examined the Association of Pathology Chairs annual faculty productivity data and compared it with MGMA and FPSC data to understand the value, inherent flaws, and limitations of benchmarking data. We hypothesized that the variability in calculated faculty productivity is due to the type of practice model and clinical effort allocation. Data from the Association of Pathology Chairs survey on 629 surgical pathologists and/or anatomic pathologists from 51 programs were analyzed. From review of service assignments, we were able to assign each pathologist to a specific practice model: general anatomic pathologists/surgical pathologists, 1 or more subspecialties, or a hybrid of the 2 models. There were statistically significant differences among academic ranks and practice types. When we analyzed our data using each organization’s methods, the median results for the anatomic pathologists/surgical pathologists general practice model compared to MGMA and FPSC results for anatomic and/or surgical pathology were quite close. Both MGMA and FPSC data exclude a significant proportion of academic pathologists with clinical duties. We used the more inclusive FPSC definition of clinical “full-time faculty” (0.60 clinical full-time equivalent and above). The correlation between clinical full-time equivalent effort allocation, annual days on service, and annual work relative value unit productivity was poor. This study demonstrates that effort allocations are variable across academic departments of pathology and do not correlate well with either work relative value unit effort or reported days on service. Although the Association of Pathology Chairs–reported median work relative

  8. Communication: energy benchmarking with quantum Monte Carlo for water nano-droplets and bulk liquid water.

    Science.gov (United States)

    Alfè, D; Bartók, A P; Csányi, G; Gillan, M J

    2013-06-14

    We show the feasibility of using quantum Monte Carlo (QMC) to compute benchmark energies for configuration samples of thermal-equilibrium water clusters and the bulk liquid containing up to 64 molecules. Evidence that the accuracy of these benchmarks approaches that of basis-set converged coupled-cluster calculations is noted. We illustrate the usefulness of the benchmarks by using them to analyze the errors of the popular BLYP approximation of density functional theory (DFT). The results indicate the possibility of using QMC as a routine tool for analyzing DFT errors for non-covalent bonding in many types of condensed-phase molecular system.

  9. SSI and structural benchmarks

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Miller, C.A.; Costantino, C.J.; Graves, H.

    1987-01-01

    This paper presents the latest results of the ongoing program entitled Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC Office of Nuclear Regulatory Research. During FY 1986, efforts were focused on three tasks, namely: (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop, and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it was concluded that pore water can significantly influence the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was extended to include cases with variable water table depths. In this paper, results related to cut-off depths, beyond which the pore water effects can be ignored in seismic calculations, are addressed. Comprehensive numerical data are given for soil configurations typical of those encountered at nuclear plant sites. These data were generated by using a modified version of the SLAM code, which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054), which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on structural benchmarks are described.

  10. Benchmarking af kommunernes sagsbehandling

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007, Ankestyrelsen (the Danish National Social Appeals Board) is to carry out benchmarking of the quality of the municipalities' casework. The purpose of the benchmarking is to develop the design of the practice reviews with a view to better follow-up, and to improve the municipalities' casework. This working paper discusses methods for benchmarking...

  11. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  12. The Drill Down Benchmark

    NARCIS (Netherlands)

    P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel

    1998-01-01

    Data Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark - defined here - provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications. It

  13. Benchmarking Tool Kit.

    Science.gov (United States)

    Canadian Health Libraries Association.

    Nine Canadian health libraries participated in a pilot test of the Benchmarking Tool Kit between January and April, 1998. Although the Tool Kit was designed specifically for health libraries, the content and approach are useful to other types of libraries as well. Used to its full potential, benchmarking can provide a common measuring stick to…

  14. Medical school benchmarking - from tools to programmes.

    Science.gov (United States)

    Wilkinson, Tim J; Hudson, Judith N; Mccoll, Geoffrey J; Hu, Wendy C Y; Jolly, Brian C; Schuwirth, Lambert W T

    2015-02-01

    Benchmarking among medical schools is essential, but may result in unwanted effects. To apply a conceptual framework to selected benchmarking activities of medical schools. We present an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. The analogy generated a conceptual framework that tested five questions to be considered in relation to benchmarking: what is the purpose? what are the attributes of value? what are the best tools to assess the attributes of value? what happens to the results? and, what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.

  15. On the accuracy of density-functional theory exchange-correlation functionals for H bonds in small water clusters: Benchmarks approaching the complete basis set limit

    Science.gov (United States)

    Santra, Biswajit; Michaelides, Angelos; Scheffler, Matthias

    2007-11-01

    The ability of several density-functional theory (DFT) exchange-correlation functionals to describe hydrogen bonds in small water clusters (dimer to pentamer) in their global minimum energy structures is evaluated with reference to second order Møller-Plesset perturbation theory (MP2). Errors from basis set incompleteness have been minimized in both the MP2 reference data and the DFT calculations, thus enabling a consistent systematic evaluation of the true performance of the tested functionals. Among all the functionals considered, the hybrid X3LYP and PBE0 functionals offer the best performance and among the nonhybrid generalized gradient approximation functionals, mPWLYP and PBE1W perform best. The popular BLYP and B3LYP functionals consistently underbind and PBE and PW91 display rather variable performance with cluster size.

  16. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views...... are put to the test. The first is a reformist benchmarking cycle where organisations defer to experts to create a benchmark that conforms with the broader system of politico-economic norms. The second is a revolutionary benchmarking cycle driven by expert-activists that seek to contest strong vested...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  17. EGS4 benchmark program

    International Nuclear Information System (INIS)

    Yasu, Y.; Hirayama, H.; Namito, Y.; Yashiro, S.

    1995-01-01

    This paper proposes the EGS4 Benchmark Suite, which consists of three programs called UCSAMPL4, UCSAMPL4I and XYZDOS. This paper also evaluates optimization methods of recent RISC/UNIX systems, such as IBM, HP, DEC, Hitachi and Fujitsu, for the benchmark suite. When a particular compiler option and math library were included in the evaluation process, systems performed significantly better. The observed performance of some of the RISC/UNIX systems was beyond that of some so-called mainframes from IBM, Hitachi or Fujitsu. The computer performance of the EGS4 Code System on an HP9000/735 (99 MHz) was defined to be the unit, the EGS4 Unit. The EGS4 Benchmark Suite was also run on various PCs such as Pentiums, i486 and DEC Alpha and so forth. The performance of recent fast PCs reaches that of recent RISC/UNIX systems. The benchmark programs have also been evaluated for correlation with industry benchmark programs, namely SPECmark. (author)
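
The "EGS4 Unit" described in this abstract is simply a runtime ratio against the reference HP9000/735; a minimal sketch of the conversion (the machine names and timings below are illustrative, not measured values from the paper):

```python
def egs4_units(runtimes, reference="HP9000/735"):
    """Convert raw benchmark runtimes (seconds) into EGS4 Units: the
    reference machine scores exactly 1.0, faster systems score higher."""
    ref = runtimes[reference]
    return {name: ref / t for name, t in runtimes.items()}

# Hypothetical wall-clock times for one benchmark case, e.g. UCSAMPL4.
units = egs4_units({"HP9000/735": 100.0, "Pentium PC": 120.0,
                    "RISC server": 50.0})
```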

  18. Shielding benchmark test

    International Nuclear Information System (INIS)

    Kawai, Masayoshi

    1984-01-01

    Iron data in JENDL-2 have been tested by analyzing shielding benchmark experiments on neutron transmission through iron blocks, performed at KFK using a Cf-252 neutron source and at ORNL using a collimated neutron beam from a reactor. The analyses were made with the shielding analysis code system RADHEAT-V4 developed at JAERI. The calculated results are compared with the measured data. For the KFK experiments, the C/E values are about 1.1. For the ORNL experiments, the calculated values agree with the measured data within an accuracy of 33% for the off-center geometry. The D-T neutron transmission measurements through a carbon sphere made at LLNL are also analyzed preliminarily by using the revised JENDL data for fusion neutronics calculation. (author)

  19. MOx Depletion Calculation Benchmark

    International Nuclear Information System (INIS)

    San Felice, Laurence; Eschbach, Romain; Dewi Syarifah, Ratna; Maryam, Seif-Eddine; Hesketh, Kevin

    2016-01-01

    Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Scientific Issues of Reactor Systems (WPRS) has been established to study the reactor physics, fuel performance, radiation transport and shielding, and the uncertainties associated with modelling of these phenomena in present and future nuclear power systems. The WPRS has different expert groups to cover a wide range of scientific issues in these fields. The Expert Group on Reactor Physics and Advanced Nuclear Systems (EGRPANS) was created in 2011 to perform specific tasks associated with reactor physics aspects of present and future nuclear power systems. EGRPANS provides expert advice to the WPRS and the nuclear community on the development needs (data and methods, validation experiments, scenario studies) for different reactor systems, and also provides specific technical information regarding: core reactivity characteristics, including fuel depletion effects; core power/flux distributions; core dynamics and reactivity control. In 2013 EGRPANS published a report that investigated fuel depletion effects in a Pressurised Water Reactor (PWR). This was entitled 'International Comparison of a Depletion Calculation Benchmark on Fuel Cycle Issues', NEA/NSC/DOC(2013), and documented a benchmark exercise for UO2 fuel rods. This report documents a complementary benchmark exercise that focused on PuO2/UO2 Mixed Oxide (MOX) fuel rods. The results are especially relevant to the back-end of the fuel cycle, including irradiated fuel transport, reprocessing, interim storage and waste repository. Saint-Laurent B1 (SLB1) was the first French reactor to use MOX assemblies. SLB1 is a 900 MWe PWR with 30% MOX fuel loading. The standard MOX assemblies used in the Saint-Laurent B1 reactor include three zones with different plutonium enrichments: high Pu content (5.64%) in the center zone, medium Pu content (4.42%) in the intermediate zone and low Pu content (2.91%) in the peripheral zone.

  20. A simplified approach to WWER-440 fuel assembly head benchmark

    International Nuclear Information System (INIS)

    Muehlbauer, P.

    2010-01-01

    The WWER-440 fuel assembly head benchmark was simulated with the FLUENT 12 code as a first step in validating the code for nuclear reactor safety analyses. Results of the benchmark, together with a comparison of the results provided by other participants and the results of the measurement, will be presented in another paper by the benchmark organisers. This presentation is therefore focused on our approach to the simulation, as illustrated by case 323-34, which represents a peripheral assembly with five neighbours. All steps of the simulation and some lessons learned are described. The geometry of the computational region, supplied as a STEP file by the organizers of the benchmark, was first separated into two parts (the inlet part with the spacer grid, and the rest of the assembly head) in order to keep the size of the computational mesh manageable with regard to the hardware available (an HP Z800 workstation with a four-core Intel Xeon CPU at 3.2 GHz and 32 GB of RAM), and then further modified at places where the shape of the geometry would probably lead to highly distorted cells. The two parts of the geometry were connected via a boundary profile file generated at a cross section where the effect of the spacer grid is still felt but the effect of the outflow boundary condition used in the computations of the inlet part of the geometry is negligible. The computation proceeded in several steps: it started with a basic mesh, the standard k-ε model of turbulence with standard wall functions, and first-order upwind numerical schemes; after convergence (scaled residuals lower than 10^-3) and local adaptation of near-wall meshes where needed, the realizable k-ε model of turbulence was used with second-order upwind numerical schemes for the momentum and energy equations. During the iterations, the area-averaged temperature at the thermocouples and the area-averaged outlet temperature, which are the main figures of merit of the benchmark, were also monitored. In this 'blind' phase of the benchmark, the effect of the spacers was neglected. After results of measurements are available, standard validation

  1. On some results for meromorphic univalent functions having

    Indian Academy of Sciences (India)

    71

    2017-08-05

    Aug 5, 2017 ... We consider the class Σ(p) of univalent meromorphic functions f on D having simple ... sharp distortion result for functions in Σ(p) and as a consequence, we obtain a distortion ... Using the relation |a| = (|b| − |b|^{-1})/(1 − p^2) and.

  2. TRIPOLI 01, a three-dimensional polykinetic Monte Carlo program. Pt.2. Constant data input and results obtained for a complex bench-mark example

    International Nuclear Information System (INIS)

    Baur, A.; Bourdet, L.; Gonnord, J.; Nimal, J.C.; Vergnaud, T.

    1977-01-01

    Some properties of the subroutines included in TRIPOLI for constant data input are briefly reviewed. The benchmark example then proposed is a problem of the contour lines of a neutron-opaque shield. The example is derived from activation calculations for the secondary sodium in a fast reactor provided with an integrated shield of the Phenix type. The dimensions have been reduced so as to simplify the geometry and reduce the constant-data load. Besides the rate of 24Na formation by capture in the exchanger region, the attenuation of various detector responses along the neutron trajectory was taken into account. The detectors chiefly used were 103Rh activation detectors based on (n,n') reactions for fast-neutron detection, and Mn activation detectors, either bare or under cadmium, for detecting thermal and epithermal neutrons respectively [fr

  3. Numisheet2005 Benchmark Analysis on Forming of an Automotive Underbody Cross Member: Benchmark 2

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao Jian

    2005-01-01

    This report presents an international cooperation benchmark effort focusing on simulations of a sheet metal stamping process. A forming process of an automotive underbody cross member using steel and aluminum blanks is used as a benchmark. Simulation predictions from each submission are analyzed via comparison with the experimental results. A brief summary of various models submitted for this benchmark study is discussed. Prediction accuracy of each parameter of interest is discussed through the evaluation of cumulative errors from each submission
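
The "cumulative errors" mentioned in this abstract can be read as an accumulation of per-parameter deviations from experiment; the exact metric used by the benchmark committee is not given here, so the following sum of relative absolute errors, with made-up numbers, is only a plausible sketch:

```python
def cumulative_error(predicted, measured):
    """Accumulate relative absolute errors over all parameters of
    interest (e.g. strains, thinning, springback); lower is better."""
    if len(predicted) != len(measured):
        raise ValueError("prediction/measurement length mismatch")
    return sum(abs(p - m) / abs(m) for p, m in zip(predicted, measured))

# Hypothetical submissions scored against experimental values.
experiment = [0.18, 12.5, 3.2]
submissions = {"A": [0.20, 11.0, 3.0], "B": [0.15, 14.0, 3.9]}
scores = {name: cumulative_error(pred, experiment)
          for name, pred in submissions.items()}
ranking = sorted(scores, key=scores.get)  # best (lowest error) first
```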

  4. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.

    1980-02-01

    Shielding benchmark problems prepared by the Working Group on Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by the Shielding Laboratory at the Japan Atomic Energy Research Institute. Fourteen new shielding benchmark problems are presented, in addition to the twenty-one problems proposed previously, for evaluating the calculational algorithms and accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method, and for evaluating the nuclear data used in the codes. The present benchmark problems are principally for investigating the backscattering and streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  5. Benchmarking the energy efficiency of commercial buildings

    International Nuclear Information System (INIS)

    Chung, William; Hui, Y.V.; Lam, Y. Miu

    2006-01-01

    Benchmarking energy efficiency is an important tool to promote the efficient use of energy in commercial buildings. Benchmarking models are mostly constructed as a simple benchmark table (percentile table) of energy use, normalized with floor area and temperature. This paper describes a benchmarking process for energy efficiency by means of multiple regression analysis, in which the relationship between energy-use intensities (EUIs) and the explanatory factors (e.g., operating hours) is developed. Using the resulting regression model, the EUIs are then normalized by removing the effect of deviance in the significant explanatory factors. The empirical cumulative distribution of the normalized EUI gives a benchmark table (or percentile table of EUI) for benchmarking an observed EUI. The advantage of this approach is that the benchmark table represents a normalized distribution of EUI, taking into account all the significant explanatory factors that affect energy consumption. An application to supermarkets is presented to illustrate the development and use of the benchmarking method.
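
The regress-normalize-rank procedure described in this abstract can be illustrated with a small sketch; the single explanatory factor (weekly operating hours), the data, and the helper name `normalized_percentiles` are illustrative assumptions, not taken from the paper:

```python
from statistics import mean

def normalized_percentiles(eui, factor):
    """Benchmark energy-use intensities (EUIs) after removing the effect
    of one explanatory factor, estimated by ordinary least squares."""
    fbar, ebar = mean(factor), mean(eui)
    # OLS slope of EUI on the explanatory factor.
    slope = (sum((f - fbar) * (e - ebar) for f, e in zip(factor, eui))
             / sum((f - fbar) ** 2 for f in factor))
    # Remove each building's deviance from the mean factor level, so all
    # buildings are compared as if operated under average conditions.
    normalized = [e - slope * (f - fbar) for f, e in zip(factor, eui)]
    # Empirical CDF of the normalized EUI -> percentile benchmark table.
    ordered = sorted(normalized)
    pct = [100.0 * (ordered.index(v) + 1) / len(ordered) for v in normalized]
    return normalized, pct

# Four hypothetical supermarkets whose EUI roughly tracks operating hours.
norm_eui, pct = normalized_percentiles([125, 155, 205, 235],
                                       [60, 80, 100, 120])
```

The percentiles rank the stores as if all of them operated the same weekly hours, so a store is neither rewarded nor penalized for its utilization level alone.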

  6. BN-600 MOX Core Benchmark Analysis. Results from Phases 4 and 6 of a Coordinated Research Project on Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects

    International Nuclear Information System (INIS)

    2013-12-01

    For those Member States that have or have had significant fast reactor development programmes, it is of utmost importance that they have validated up to date codes and methods for fast reactor physics analysis in support of R and D and core design activities in the area of actinide utilization and incineration. In particular, some Member States have recently focused on fast reactor systems for minor actinide transmutation and on cores optimized for consuming rather than breeding plutonium; the physics of the breeder reactor cycle having already been widely investigated. Plutonium burning systems may have an important role in managing plutonium stocks until the time when major programmes of self-sufficient fast breeder reactors are established. For assessing the safety of these systems, it is important to determine the prediction accuracy of transient simulations and their associated reactivity coefficients. In response to Member States' expressed interest, the IAEA sponsored a coordinated research project (CRP) on Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects. The CRP started in November 1999 and, at the first meeting, the members of the CRP endorsed a benchmark on the BN-600 hybrid core for consideration in its first studies. Benchmark analyses of the BN-600 hybrid core were performed during the first three phases of the CRP, investigating different nuclear data and levels of approximation in the calculation of safety related reactivity effects and their influence on uncertainties in transient analysis prediction. In an additional phase of the benchmark studies, experimental data were used for the verification and validation of nuclear data libraries and methods in support of the previous three phases. The results of phases 1, 2, 3 and 5 of the CRP are reported in IAEA-TECDOC-1623, BN-600 Hybrid Core Benchmark Analyses, Results from a Coordinated Research Project on Updated Codes and Methods to Reduce the

  7. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes ("Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Cask," R.E. Glass, Sandia National Laboratories, 1985; "Sample Problem Manual for Benchmarking of Cask Analysis Codes," R.E. Glass, Sandia National Laboratories, 1988; "Standard Thermal Problem Set for the Evaluation of Heat Transfer Codes Used in the Assessment of Transportation Packages," R.E. Glass, et al., Sandia National Laboratories, 1988) used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in "Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks," R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem. 6 refs., 5 figs

  8. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  9. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  10. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  11. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This data compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measure the City's debt ratio and bond ratings....

  12. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    .... The design of this study included two parts: (1) eleven expert panelists involved in a Delphi technique to identify and rate importance of foodservice performance measures and rate the importance of benchmarking activities, and (2...

  13. EVA Health and Human Performance Benchmarking Study

    Science.gov (United States)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses, for humans working inside different EVA suits doing functional tasks under the appropriate simulated reduced gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as in shirtsleeves, using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits and different suit configurations (e.g., varied pressure, mass, CG) may be reliably compared in subsequent tests. Results will also inform fitness-for-duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  14. Benchmarking Density Functional Theory Based Methods To Model NiOOH Material Properties: Hubbard and van der Waals Corrections vs Hybrid Functionals.

    Science.gov (United States)

    Zaffran, Jeremie; Caspary Toroker, Maytal

    2016-08-09

    NiOOH has recently been used to catalyze water oxidation by way of electrochemical water splitting. Few experimental data are available to rationalize the successful catalytic capability of NiOOH. Thus, theory has a distinctive role for studying its properties. However, the unique layered structure of NiOOH is associated with the presence of essential dispersion forces within the lattice. Hence, the choice of an appropriate exchange-correlation functional within Density Functional Theory (DFT) is not straightforward. In this work, we will show that standard DFT is sufficient to evaluate the geometry, but DFT+U and hybrid functionals are required to calculate the oxidation states. Notably, the benefit of DFT with van der Waals correction is marginal. Furthermore, only hybrid functionals succeed in opening a bandgap, and such methods are necessary to study NiOOH electronic structure. In this work, we expect to give guidelines to theoreticians dealing with this material and to present a rational approach in the choice of the DFT method of calculation.

  15. MFTF TOTAL benchmark

    International Nuclear Information System (INIS)

    Choy, J.H.

    1979-06-01

    A benchmark of the TOTAL data base management system as applied to the Mirror Fusion Test Facility (MFTF) data base was implemented and run in February and March of 1979. The benchmark was run on an Interdata 8/32 and involved the following tasks: (1) data base design, (2) data base generation, (3) data base load, and (4) development and implementation of programs to simulate MFTF usage of the data base.

  16. Accelerator shielding benchmark problems

    International Nuclear Information System (INIS)

    Hirayama, H.; Ban, S.; Nakamura, T.

    1993-01-01

    Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author)

  17. Shielding benchmark problems

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Kawai, Masayoshi; Nakazawa, Masaharu.

    1978-09-01

    Shielding benchmark problems were prepared by the Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design of the Atomic Energy Society of Japan, and compiled by the Shielding Laboratory of Japan Atomic Energy Research Institute. Twenty-one kinds of shielding benchmark problems are presented for evaluating the calculational algorithm and the accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method and for evaluating the nuclear data used in the codes. (author)

  18. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    Bulej, Lubomír

    2005-01-01

    The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specifically, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking on a real software project and conclude with a glimpse at the challenges for the future.

  19. Human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-08-01

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional Test and Maintenance (T&M) procedures, with the aim of assessing the probability of test-induced failures, the probability of failures remaining unrevealed, and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient, with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used, to obtain insight into the causes and magnitude of the variability observed in the results, to try to identify preferred human reliability assessment approaches, and to gain an understanding of the current state of the art in the field, identifying the limitations that are still inherent to the different approaches.

  20. Benchmarking electricity distribution

    Energy Technology Data Exchange (ETDEWEB)

    Watts, K. [Department of Justice and Attorney-General, QLD (Australia)

    1995-12-31

    Benchmarking has been described as a method of continuous improvement that involves an ongoing and systematic evaluation and incorporation of external products, services and processes recognised as representing best practice. It is a management tool similar to total quality management (TQM) and business process re-engineering (BPR), and is best used as part of a total package. This paper discusses benchmarking models and approaches and suggests a few key performance indicators that could be applied to benchmarking electricity distribution utilities. Some recent benchmarking studies are used as examples and briefly discussed. It is concluded that benchmarking is a strong tool to be added to the range of techniques that can be used by electricity distribution utilities and other organizations in search of continuous improvement, and that there is now a high level of interest in Australia. Benchmarking represents an opportunity for organizations to approach learning from others in a disciplined and highly productive way, which will complement the other micro-economic reforms being implemented in Australia. (author). 26 refs.

  1. Benchmarking in the globalised world and its impact on South ...

    African Journals Online (AJOL)

    In order to understand the potential impact of international benchmarking on South African institutions, it is important to explore the future role of benchmarking on the international level. In this regard, examples of transnational benchmarking activities will be considered. As a result of the involvement of South African ...

  2. Benchmark assessment of density functional methods on group II-VI MX (M = Zn, Cd; X = S, Se, Te) quantum dots

    NARCIS (Netherlands)

    Azpiroz, Jon M.; Ugalde, Jesus M.; Infante, Ivan

    2014-01-01

    In this work, we build a benchmark data set of geometrical parameters, vibrational normal modes, and low-lying excitation energies for MX quantum dots, with M = Cd, Zn, and X = S, Se, Te. The reference database has been constructed by ab initio resolution-of-identity second-order approximate coupled-cluster (RI-CC2) calculations.

  3. Vver-1000 Mox core computational benchmark

    International Nuclear Information System (INIS)

    2006-01-01

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities includes benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  4. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1.1 with MCNP6

    International Nuclear Information System (INIS)

    Marck, Steven C. van der

    2012-01-01

    Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous energy Monte Carlo neutronics code MCNP, i.e. MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6 Li, 7 Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory. Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series. Such

  5. Benchmarking local healthcare-associated infections: Available benchmarks and interpretation challenges

    Directory of Open Access Journals (Sweden)

    Aiman El-Saed

    2013-10-01

    Full Text Available Summary: Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for the data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when adopting such a benchmark. The GCC center for infection control took some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable healthcare workers and researchers to obtain more accurate and realistic comparisons. Keywords: Benchmarking, Comparison, Surveillance, Healthcare-associated infections
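    Of the risk-adjustment methods named above, indirect standardization is the easiest to make concrete: it reduces to an observed-over-expected ratio. A minimal Python sketch, with hypothetical stratum counts and benchmark rates rather than real surveillance data:

```python
def standardized_infection_ratio(strata):
    """Indirect standardization: compare the observed number of infections
    with the number expected if benchmark rates applied to the local
    exposure mix.  Each stratum is (observed_infections, device_days,
    benchmark_rate_per_1000_device_days).  A ratio above 1 means more
    infections than the benchmark predicts."""
    observed = sum(obs for obs, _, _ in strata)
    expected = sum(days * rate / 1000.0 for _, days, rate in strata)
    return observed / expected

# Hypothetical ICU strata: (observed CLABSI, central-line days, benchmark rate)
print(round(standardized_infection_ratio([(4, 1500, 2.0), (2, 800, 1.5)]), 2))
```

    Computing the expected count per stratum is what keeps case-mix differences from distorting the comparison, which is exactly the confounding problem the abstract warns about.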

  6. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  7. Some results associated with a generalized basic hypergeometric function

    Directory of Open Access Journals (Sweden)

    Rajeev K. Gupta

    2009-05-01

    Full Text Available In this paper, we define a q-extension of the new generalized hypergeometric function given by Saxena et al. in [13], and have investigated the properties of the above new function such as q-differentiation and q-integral representation. The results presented are of general character and the results given earlier by Saxena and Kalla in [14], Virchenko, Kalla and Al-Zamel in [15], Al-Musallam and Kalla in [2, 3], Kobayashi in [7, 8], Saxena et al. in [13], Kumbhat et al. in [11] follow as special cases.

  8. Thermal reactor benchmark tests on JENDL-2

    International Nuclear Information System (INIS)

    Takano, Hideki; Tsuchihashi, Keichiro; Yamane, Tsuyoshi; Akino, Fujiyoshi; Ishiguro, Yukio; Ido, Masaru.

    1983-11-01

    A group constant library for the thermal reactor standard nuclear design code system SRAC was produced by using the evaluated nuclear data JENDL-2. Furthermore, the group constants for 235 U were calculated also from ENDF/B-V. Thermal reactor benchmark calculations were performed using the produced group constant library. The selected benchmark cores are two water-moderated lattices (TRX-1 and 2), two heavy water-moderated cores (DCA and ETA-1), two graphite-moderated cores (SHE-8 and 13) and eight critical experiments for criticality safety. The effective multiplication factors and lattice cell parameters were calculated and compared with the experimental values. The results are summarized as follows. (1) Effective multiplication factors: The results by JENDL-2 are considerably improved in comparison with those by ENDF/B-IV. The best agreement is obtained by using JENDL-2 and ENDF/B-V (only 235 U) data. (2) Lattice cell parameters: For the rho 28 (the ratio of epithermal to thermal 238 U captures) and C* (the ratio of 238 U captures to 235 U fissions), the values calculated by JENDL-2 are in good agreement with the experimental values. The delta 28 (the ratio of 238 U to 235 U fissions) is overestimated, as was also found for the fast reactor benchmarks. The rho 02 (the ratio of epithermal to thermal 232 Th captures) calculated by JENDL-2 or ENDF/B-IV is considerably underestimated. The functions of the SRAC system have continued to be extended according to the needs of its users. A brief description will be given, in Appendix B, of the extended parts of the SRAC system together with the input specification. (author)

  9. The development of code benchmarks

    International Nuclear Information System (INIS)

    Glass, R.E.

    1986-01-01

    Sandia National Laboratories has undertaken a code benchmarking effort to define a series of cask-like problems having both numerical solutions and experimental data. The development of the benchmarks includes: (1) model problem definition, (2) code intercomparison, and (3) experimental verification. The first two steps are complete and a series of experiments is planned. The experiments will examine the elastic/plastic behavior of cylinders for both the end and side impacts resulting from a nine-meter drop. The cylinders will be made from stainless steel and aluminum to give a range of plastic deformations. This paper presents the results of analyses simulating the model's behavior using materials properties for stainless steel and aluminum.

  10. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks, R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem.

  11. Validation of neutron-transport calculations in benchmark facilities for improved damage-fluence predictions

    International Nuclear Information System (INIS)

    Williams, M.L.; Stallmann, F.W.; Maerker, R.E.; Kam, F.B.K.

    1983-01-01

    An accurate determination of damage fluence accumulated by reactor pressure vessels (RPV) as a function of time is essential in order to evaluate the vessel integrity for both pressurized thermal shock (PTS) transients and end-of-life considerations. The desired accuracy for neutron exposure parameters such as displacements per atom or fluence (E > 1 MeV) is of the order of 20 to 30%. However, these types of accuracies can only be obtained realistically by validation of nuclear data and calculational methods in benchmark facilities. The purposes of this paper are to review the needs and requirements for benchmark experiments, to discuss the status of current benchmark experiments, to summarize results and conclusions obtained so far, and to suggest areas where further benchmarking is needed.

  12. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Full Text Available Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan’s Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how the project’s systematic implementation led to success.

  13. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  14. IAEA sodium void reactivity benchmark calculations

    International Nuclear Information System (INIS)

    Hill, R.N.; Finck, P.J.

    1992-01-01

    In this paper, the IAEA-1992 ''Benchmark Calculation of Sodium Void Reactivity Effect in Fast Reactor Core'' problem is evaluated. The proposed design is a large axially heterogeneous oxide-fueled fast reactor, as described in Section 2; the core utilizes a sodium plenum above the core to enhance leakage effects. The calculation methods used in this benchmark evaluation are described in Section 3. In Section 4, the calculated core performance results for the benchmark reactor model are presented; and in Section 5, the influence of steel and interstitial sodium heterogeneity effects is estimated.

  15. Benchmarking gate-based quantum computers

    Science.gov (United States)

    Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans

    2017-11-01

    With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
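    The identity-circuit idea can be illustrated with a tiny state-vector simulation (a sketch of the principle only; the gate model and error form below are assumptions, not the paper's IBM Quantum Experience procedure). Pairs of X gates should compose to the identity, so any fidelity loss after many pairs exposes a systematic per-gate over-rotation:

```python
import numpy as np

def rx(theta):
    """Single-qubit rotation about the X axis by angle theta."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def identity_circuit_fidelity(n_pairs, over_rotation):
    """Apply n_pairs of (X, X) gate pairs to |0>, each X implemented as
    RX(pi + over_rotation).  Ideally the circuit is the identity, so any
    loss of fidelity with |0> signals accumulated gate error."""
    state = np.array([1.0 + 0j, 0.0 + 0j])
    gate = rx(np.pi + over_rotation)
    for _ in range(2 * n_pairs):
        state = gate @ state
    return abs(state[0]) ** 2      # population left in the ideal outcome |0>

# For a fixed per-gate error, fidelity degrades as the circuit gets deeper:
for depth in (1, 10, 50):
    print(depth, round(identity_circuit_fidelity(depth, 0.01), 4))
```

    This is why such circuits are sensitive benchmarks: the deeper the nominally trivial circuit, the more a small coherent error is amplified.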

  16. Benchmarking Nuclear Power Plants

    International Nuclear Information System (INIS)

    Jakic, I.

    2016-01-01

    One of the main tasks an owner has is to keep its business competitive on the market while delivering its product. Owning a nuclear power plant bears the same (or an even more complex and stringent) responsibility due to safety risks and costs. In the past, nuclear power plant managements could (partly) ignore profit, or it was simply expected and to some degree assured through the various regulatory processes governing electricity rate design. It is obvious now that, with deregulation, utility privatization and a competitive electricity market, the key measures of success used at nuclear power plants must include traditional metrics of a successful business (return on investment, earnings and revenue generation) as well as those of plant performance, safety and reliability. In order to analyze the business performance of a (specific) nuclear power plant, benchmarking, as a well-established concept and usual method, was used. The domain was conservatively designed, with a well-adjusted framework, but the results still have limited application due to many differences, gaps and uncertainties. (author).

  17. Virtual machine performance benchmarking.

    Science.gov (United States)

    Langer, Steve G; French, Todd

    2011-10-01

    The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever tightened reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High performance requirements need to be carefully considered if they are to be executed in an environment where the running software has to execute through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising.
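    As an illustration of the first metric listed (local memory bandwidth), a crude probe is to time full copies of a large buffer; running the same script on bare metal and inside the guest hints at the virtualization overhead. A rough sketch only: the buffer size and repeat count are arbitrary choices, and a real benchmark suite would also control for caching effects:

```python
import time

def copy_bandwidth_mb_s(size_mb=64, repeats=5):
    """Rough memory-copy bandwidth probe: time repeated full copies of a
    buffer and report the best MB/s seen (best-of-n damps scheduler noise)."""
    buf = bytes(size_mb * 1024 * 1024)       # zero-filled source buffer
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        bytearray(buf)                       # forces a full copy of the buffer
        best = min(best, time.perf_counter() - start)
    return size_mb / best

print(f"memory copy: {copy_bandwidth_mb_s():.0f} MB/s")
```

    The ratio of the virtual result to the bare-metal result is the kind of comparison the study reports for each metric.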

  18. Functional results-oriented healthcare leadership: a novel leadership model.

    Science.gov (United States)

    Al-Touby, Salem Said

    2012-03-01

    This article modifies the traditional functional leadership model to accommodate contemporary needs in healthcare leadership based on two findings. First, the article argues that it is important that ideal healthcare leadership emphasizes the outcomes of patient care more than the processes and structures used to deliver such care; and secondly, that leadership must strive to attain effectiveness of care provision and not merely target the attractive option of efficient operations. Based on these premises, the paper reviews the traditional Functional Leadership Model and the three elements that define the type of leadership an organization has, namely the tasks, the individuals, and the team. The article argues that concentrating on any one of these elements is not ideal and proposes adding a new element to the model to construct a novel Functional Results-Oriented healthcare leadership model. The recommended Functional Results-Oriented leadership model superimposes the results element on top of the other three elements so that every effort in healthcare leadership is directed towards attaining excellent patient outcomes.

  19. BENCHMARKING LEARNER EDUCATION USING ONLINE BUSINESS SIMULATION

    Directory of Open Access Journals (Sweden)

    Alfred H. Miller

    2016-06-01

    Full Text Available For programmatic accreditation by the Accreditation Council for Business Schools and Programs (ACBSP), business programs are required to meet STANDARD #4, Measurement and Analysis of Student Learning and Performance. Business units must demonstrate that outcome assessment systems are in place, using documented evidence that shows how the results are being used to further develop or improve the academic business program. The Higher Colleges of Technology, a 17-campus federal university in the United Arab Emirates, differentiates its applied degree programs through a ‘learning by doing’ ethos, which permeates the entire curriculum. This paper documents benchmarking of education for managing innovation. Using a business simulation in a business strategy class for Bachelor of Business Year 3 learners, student teams explored, through a simulated environment, the following functional areas: research and development, production, and marketing of a technology product. Student teams were required to use finite resources and compete against other student teams in the same universe. The study employed an instrument developed in a 60-sample pilot study of business simulation learners, against which subsequent learners participating in the online business simulation could be benchmarked. The results showed incremental improvement in the program due to changes made in assessment strategies, including the oral defense.

  20. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question we, in a total of 1728 benchmarking experiments, rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
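    Of the methods benchmarked, forward selection is the easiest to sketch: a greedy wrapper loop that keeps adding whichever variable most improves validation error. The sketch below pairs it with an ordinary least-squares surrogate instead of the paper's random forest models, purely to keep the example self-contained:

```python
import numpy as np

def forward_select(X_tr, y_tr, X_va, y_va, max_vars):
    """Greedy forward selection: at each step add the variable whose
    inclusion most reduces validation mean squared error of an
    ordinary-least-squares fit; stop when no remaining variable helps."""
    chosen, best_err = [], np.inf
    for _ in range(max_vars):
        step_best = None
        for j in range(X_tr.shape[1]):
            if j in chosen:
                continue
            cols = chosen + [j]
            coef, *_ = np.linalg.lstsq(X_tr[:, cols], y_tr, rcond=None)
            err = np.mean((X_va[:, cols] @ coef - y_va) ** 2)
            if step_best is None or err < step_best[1]:
                step_best = (j, err)
        if step_best is None or step_best[1] >= best_err:
            break                              # no remaining variable helps
        chosen.append(step_best[0])
        best_err = step_best[1]
    return chosen

# Toy QSAR-like data: the response depends only on descriptors 0 and 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 3 * X[:, 0] - 2 * X[:, 2] + 0.1 * rng.normal(size=200)
selected = forward_select(X[:100], y[:100], X[100:], y[100:], max_vars=4)
print(selected)
```

    The wrapper structure is the same whatever model sits inside the loop; only the fit-and-score step changes.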

  1. Method of vacuum correlation functions: Results and prospects

    International Nuclear Information System (INIS)

    Badalian, A. M.; Simonov, Yu. A.; Shevchenko, V. I.

    2006-01-01

    Basic results obtained within the QCD method of vacuum correlation functions over the past 20 years in the context of investigations into strong-interaction physics at the Institute of Theoretical and Experimental Physics (ITEP, Moscow) are formulated. Emphasis is placed primarily on the prospects of the general theory developed within QCD by employing both nonperturbative and perturbative methods. On the basis of ab initio arguments, it is shown that the lowest two field correlation functions play a dominant role in QCD dynamics. A quantitative theory of confinement and deconfinement, as well as of the spectra of light and heavy quarkonia, glueballs, and hybrids, is given in terms of these two correlation functions. Perturbation theory in a nonperturbative vacuum (background perturbation theory) plays a significant role, not possessing the drawbacks of conventional perturbation theory and leading to the infrared freezing of the coupling constant αs.

  2. Functional requirements regarding medical registries--preliminary results.

    Science.gov (United States)

    Oberbichler, Stefan; Hörbst, Alexander

    2013-01-01

    The term medical registry is used to reference tools and processes that support clinical or epidemiologic research or provide a data basis for decisions regarding healthcare policies. In spite of this wide range of applications, the term registry and the functional requirements that a registry should support are not clearly defined. This work presents preliminary results of a literature review to discover the functional requirements that define a registry. To extract these requirements, a set of peer-reviewed articles was collected. This set of articles was screened using methods from qualitative research. Up to now, most of the discovered functional requirements focus on data quality (e.g., preventing transcription errors by conducting automatic domain checks).

  3. QFD Based Benchmarking Logic Using TOPSIS and Suitability Index

    Directory of Open Access Journals (Sweden)

    Jaeho Cho

    2015-01-01

    Full Text Available Users’ satisfaction with quality is key to the successful completion of a project with regard to decision-making issues in building design solutions. This study proposed QFD (quality function deployment) based benchmarking logic for market products for building envelope solutions. The benchmarking logic is composed of QFD-TOPSIS and QFD-SI. The QFD-TOPSIS assessment model is able to evaluate users’ preferences on building envelope solutions that are distributed in the market and may allow quick acquisition of knowledge. TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) provides performance improvement criteria that help define users’ target performance criteria. SI (Suitability Index) allows analysis of the suitability of a building envelope solution based on users’ required performance criteria. In Stage 1 of the case study, QFD-TOPSIS was used to benchmark the performance criteria of market envelope products. In Stage 2, a QFD-SI assessment was performed after setting user performance targets. The results of this study contribute to confirming the feasibility of QFD based benchmarking in the field of Building Envelope Performance Assessment (BEPA).
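    TOPSIS itself follows a fixed recipe: normalize the decision matrix, weight it, find the ideal and anti-ideal points, and rank alternatives by relative closeness. A compact NumPy sketch with an invented three-product decision matrix (the criteria, weights, and values are illustrative, not the paper's BEPA data):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) on criteria (columns) by closeness to the
    ideal solution.  benefit[j] is True when larger values of criterion j
    are better, False when smaller values are better."""
    m = np.asarray(matrix, dtype=float)
    # 1. vector-normalize each criterion column, then apply the weights
    v = m / np.linalg.norm(m, axis=0) * np.asarray(weights, dtype=float)
    # 2. ideal and anti-ideal points per criterion
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    # 3. Euclidean distances and relative closeness C in [0, 1]
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - worst, axis=1)
    return d_neg / (d_pos + d_neg)

# Three hypothetical envelope products; criteria: U-value (cost-type,
# lower is better), daylight transmission and service life (benefit-type).
scores = topsis([[1.1, 0.60, 25],
                 [0.8, 0.55, 30],
                 [1.4, 0.70, 20]],
                weights=[0.5, 0.2, 0.3],
                benefit=[False, True, True])
print(scores.argmax())   # index of the preferred alternative
```

    The closeness scores are what a QFD layer would then compare against users' target performance criteria.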

  4. FUNCTIONAL ABILITIES AS PREDICTORS OF PREADOSLESCENT STUDENTS’ ATHLETIC RESULTS OUTCOME

    Directory of Open Access Journals (Sweden)

    Miroljub Ivanović

    2011-09-01

    Full Text Available The aim of this research was to test the relation between functional abilities (as predictors) and athletic results (as criterion) of students in the VII and VIII grades of primary school (M = 13.9 years; SD = 1.17). The research was conducted in Valjevo during November 2010 on a sample of 108 examinees. The variable sample comprised 3 tests of functional abilities (maximal oxygen consumption, pulse frequency and vital lung capacity evaluation) and 4 athletic disciplines (high jump, long jump, shot put and 60-meter sprint from a low start) from the current physical education curriculum. Cronbach's alpha coefficient values indicate satisfactory reliability of the applied instruments. Canonical correlation analysis and multiple regression analysis were used in data processing. The canonical correlation analysis showed that the set of functional abilities is statistically significantly related to the set of criterion variables (R = .67), manifesting one canonical factor at the level p < .03. The obtained determination coefficient (R² = .43) indicates that functional abilities explain about 43% of the criterion variance. Using a hierarchical regression model, the following statistically significant beta coefficients of functional abilities as partial predictors of athletics outcomes were determined: (I) vital lung capacity for the high jump (β = .67, p < .01); (II) vital lung capacity for the long jump (β = .55, p < .01); (III) vital lung capacity and pulse frequency for the shot put (β = -.34, p < .01; β = .42, p < .02); and (IV) vital lung capacity for the 60-meter sprint (β = -.39). The regression equations for the other applied functional-ability predictor variables did not contribute statistically significantly to the univariate prediction of the criterion variable variance.
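    The β values reported above are standardized regression coefficients; their computation can be sketched with synthetic data (the predictor and criterion stand-ins below are placeholders, not the study's measurements):

```python
import numpy as np

def standardized_betas(X, y):
    """Standardized (beta) regression coefficients: z-score the predictors
    and the criterion, then fit ordinary least squares.  Each beta is the
    SD change in the criterion per SD change in one predictor, holding the
    other predictors fixed."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    betas, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return betas

# Synthetic stand-ins for (VO2max, pulse frequency, vital capacity) vs. a
# long-jump criterion; here the 2nd and 3rd predictors drive the outcome.
rng = np.random.default_rng(1)
X = rng.normal(size=(108, 3))
y = 0.6 * X[:, 2] - 0.3 * X[:, 1] + rng.normal(scale=0.5, size=108)
print(np.round(standardized_betas(X, y), 2))
```

    Because both sides are z-scored, the betas are comparable across predictors regardless of their original units, which is why the study reports them.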

  5. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM, since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area...

  6. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...... not public. The survey is a cooperative project "Benchmarking Danish Industries" with CIP/Aalborg University, the Danish Technological University, the Danish Technological Institute and Copenhagen Business School as consortia partners. The project has been funded by the Danish Agency for Trade and Industry...

  7. [Do you mean benchmarking?].

    Science.gov (United States)

    Bonnet, F; Solignac, S; Marty, J

    2008-03-01

    The purpose of benchmarking is to establish improvement processes by comparing activities to quality standards. The proposed methodology is illustrated by benchmark business cases performed in healthcare facilities, on items such as nosocomial diseases or the organization of surgery facilities. Moreover, the authors have built a specific graphic tool, enhanced with balanced scorecard figures and mappings, so that the comparison between different anesthesia-reanimation services that are willing to start an improvement program is easy and relevant. This ready-made application is all the more accurate when detailed tariffs of activities are implemented.

  8. Reactor fuel depletion benchmark of TINDER

    International Nuclear Information System (INIS)

    Martin, W.J.; Oliveira, C.R.E. de; Hecht, A.A.

    2014-01-01

    Highlights: • A reactor burnup benchmark of TINDER, coupling MCNP6 to CINDER2008, was performed. • TINDER is a poor candidate for fuel depletion calculations using its current libraries. • Data library modification is necessary if fuel depletion is desired from TINDER. - Abstract: Accurate burnup calculations are key to proper nuclear reactor design, fuel cycle modeling, and disposal estimations. The TINDER code, originally designed for activation analyses, has been modified to handle full burnup calculations, including the widely used predictor–corrector feature. In order to properly characterize the performance of TINDER for this application, a benchmark calculation was performed. Although the results followed the trends of past benchmarked codes for a UO2 PWR fuel sample from the Takahama-3 reactor, there were obvious deficiencies in the final result, likely in the nuclear data library that was used. Isotopic comparisons versus experiment and past code benchmarks are given, as well as hypothesized areas of deficiency and future work.

  9. Benchmark analysis of MCNP™ ENDF/B-VI iron

    International Nuclear Information System (INIS)

    Court, J.D.; Hendricks, J.S.

    1994-12-01

    The MCNP ENDF/B-VI iron cross-section data was subjected to four benchmark studies as part of the Hiroshima/Nagasaki dose re-evaluation for the National Academy of Science and the Defense Nuclear Agency. The four benchmark studies were: (1) the iron sphere benchmarks from the Lawrence Livermore Pulsed Spheres; (2) the Oak Ridge National Laboratory Fusion Reactor Shielding Benchmark; (3) a 76-cm diameter iron sphere benchmark done at the University of Illinois; (4) the Oak Ridge National Laboratory Benchmark for Neutron Transport through Iron. MCNP4A was used to model each benchmark and computational results from the ENDF/B-VI iron evaluations were compared to ENDF/B-IV, ENDF/B-V, the MCNP Recommended Data Set (which includes Los Alamos National Laboratory Group T-2 evaluations), and experimental data. The results show that the ENDF/B-VI iron evaluations are as good as, or better than, previous data sets

  10. Perforator plus flaps: Optimizing results while preserving function and esthesis

    Directory of Open Access Journals (Sweden)

    Mehrotra Sandeep

    2010-01-01

    Full Text Available Background: The tenuous blood supply of traditional flaps for wound cover, combined with collateral damage from sacrifice of functional muscle, truncal vessels, or nerves, has been the bane of reconstructive procedures. The concept of perforator plus flaps employs dual vascular supply to flaps. By safeguarding perforators along with supply from the flap base, robust flaps can be raised in diverse situations. This is achieved while limiting collateral damage and preserving nerves, vessels, and functioning muscle, with better function and aesthesis. Materials and Methods: The perforator plus concept was applied in seven different clinical situations. Functional muscle and fasciocutaneous flaps were employed in five cases and adipofascial flaps in two, primarily involving lower extremity defects and the back. Adipofascial perforator plus flaps were employed to provide cover for a tibial fracture in one patient and a chronic venous ulcer in another. Results: All flaps survived without any loss and provided long-term stable cover, both over soft tissue and bone. Functional preservation was achieved in all cases where muscle flaps were employed, with no clinical evidence of loss of power. There was no sensory loss or significant oedema in or distal to the flap in either case where neurovascular continuity was preserved during flap elevation. Fracture union and consolidation were satisfactory. One patient had minimal graft loss over fascia, which required application of stored grafts with subsequent take. No patient required re-operation. Conclusions: The perforator plus concept is holistic and applicable to most flap types in varied situations. It permits the exercise of many locoregional flap options while limiting collateral functional damage. Aesthetic considerations are also addressed, as raising adipofascial flaps leaves no appreciable donor defects. With quick operating times and low failure risk, these flaps can be a better substitute for traditional flaps and at...

  11. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes...... attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes....

  12. Recommendations for Benchmarking Preclinical Studies of Nanomedicines.

    Science.gov (United States)

    Dawidczyk, Charlene M; Russell, Luisa M; Searson, Peter C

    2015-10-01

    Nanoparticle-based delivery systems provide new opportunities to overcome the limitations associated with traditional small-molecule drug therapy for cancer and to achieve both therapeutic and diagnostic functions in the same platform. Preclinical trials are generally designed to assess therapeutic potential and not to optimize the design of the delivery platform. Consequently, progress in developing design rules for cancer nanomedicines has been slow, hindering progress in the field. Despite the large number of preclinical trials, several factors restrict comparison and benchmarking of different platforms, including variability in experimental design, reporting of results, and the lack of quantitative data. To solve this problem, we review the variables involved in the design of preclinical trials and propose a protocol for benchmarking that we recommend be included in in vivo preclinical studies of drug-delivery platforms for cancer therapy. This strategy will contribute to building the scientific knowledge base that enables development of design rules and accelerates the translation of new technologies. ©2015 American Association for Cancer Research.

  13. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means the revealed performance, i.e. how well the firm performs in its actual market environment, given the basic characteristics of the firm and its markets that are expected to drive profitability (firm size, market power, etc.). This complex and relative performance could be due to product innovation, management quality or work organization; other factors can be a cause even if they are not directly observed by the researcher. Managers critically need to continuously improve their firm's efficiency and effectiveness, and to know the success factors and competitiveness determinants, and consequently which performance measures are most critical in determining their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking of firm-level performance are critical, interdependent activities. Firm-level variables used to infer performance are often interdependent due to operational reasons; hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.

  14. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  15. Numerical methods: Analytical benchmarking in transport theory

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1988-01-01

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered
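Two of the verification considerations named above, conservation of particles and reproduction of an analytical benchmark, can be illustrated with a toy calculation (not a production transport code; the slab geometry and cross section below are invented for the sketch):

```python
import math
import random

# Monoenergetic particles enter a purely absorbing 1D slab of given
# thickness; the analytic transmission probability is exp(-sigma * T).
random.seed(1)
sigma, thickness, n = 1.0, 2.0, 200_000  # macroscopic XS (1/cm), cm, histories

absorbed = leaked = 0
for _ in range(n):
    # Distance to first collision, sampled from the exponential free-path law.
    d = -math.log(random.random()) / sigma
    if d > thickness:
        leaked += 1    # particle crosses the slab without colliding
    else:
        absorbed += 1  # pure absorber: the first collision removes it

# Check (1): every source particle is accounted for.
assert absorbed + leaked == n
# Check (3): the tally should reproduce the analytical benchmark.
transmission = leaked / n
analytic = math.exp(-sigma * thickness)
print(transmission, analytic)
```

A real benchmark exercise does the same bookkeeping for far richer physics, but the acceptance criteria have exactly this shape: balances close, and tallies converge to known analytical results.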

  16. Benchmarking ENDF/B-VII.0

    International Nuclear Information System (INIS)

    Marck, Steven C. van der

    2006-01-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM) to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding, many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), the Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). For testing delayed neutron data, more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA, Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, and compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8 and JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257...

  17. Hartman-Wintner growth results for sublinear functional differential equations

    Directory of Open Access Journals (Sweden)

    John A. D. Appleby

    2017-01-01

    Full Text Available This article determines the rate of growth to infinity of scalar autonomous nonlinear functional and Volterra differential equations. In these equations, the right-hand side is a positive continuous linear functional of f(x). We assume f grows sublinearly, leading to subexponential growth in the solutions. The main results show that the solutions of the functional differential equations are asymptotic to that of an auxiliary autonomous ordinary differential equation with right-hand side proportional to f. This happens provided f grows more slowly than l(x) = x/log(x). The linear-logarithmic growth rate is also shown to be critical: if f grows more rapidly than l, the ODE dominates the FDE; if f is asymptotic to a constant multiple of l, the FDE and ODE grow at the same rate, modulo a constant non-unit factor; if f grows more slowly than l, the ODE and FDE grow at exactly the same rate. A partial converse of the last result is also proven. In the case when the growth rate is slower than that of the ODE, sharp bounds on the growth rate are determined. The Volterra and finite memory equations can have differing asymptotic behaviour and we explore the source of these differences.
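The comparison described in the abstract can be sketched as follows (a hedged reconstruction, not taken from the paper itself: $\lambda > 0$ stands for the value of the positive linear functional, and $F$ is an assumed notation for the antiderivative of $1/f$):

```latex
% Auxiliary autonomous ODE to which the FDE solutions are compared:
\[
  y'(t) = \lambda f\bigl(y(t)\bigr), \qquad \lambda > 0 .
\]
% Its growth rate is read off from F(x) = \int_1^x du / f(u):
\[
  F\bigl(y(t)\bigr) = F\bigl(y(0)\bigr) + \lambda t
  \quad\Longrightarrow\quad
  y(t) = F^{-1}\bigl(\lambda t\,(1 + o(1))\bigr), \qquad t \to \infty .
\]
% The critical comparison function separating the regimes in the abstract:
\[
  l(x) = \frac{x}{\log x} .
\]
```

Sublinearity of $f$ makes $F$ grow superlinearly, which is why $y$ grows subexponentially; the three regimes in the abstract correspond to how $f$ compares with $l$.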

  18. False Positive Functional Analysis Results as a Contributor of Treatment Failure during Functional Communication Training

    Science.gov (United States)

    Mann, Amanda J.; Mueller, Michael M.

    2009-01-01

    Research has shown that functional analysis results are beneficial for treatment selection because they identify reinforcers for severe behavior that can then be used to reinforce replacement behaviors either differentially or noncontingently. Theoretically then, if a reinforcer is identified in a functional analysis erroneously, a well researched…

  19. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
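In the infrastructure described above, the performance metrics are computed with SPARQL queries over RDF annotations; purely as an illustration of the metrics themselves, here is a plain-Python sketch with hypothetical (document id, mutation mention) pairs standing in for the curated corpus:

```python
# Gold-standard and system-predicted mutation mentions (invented examples).
gold = {("doc1", "R273H"), ("doc1", "V600E"), ("doc2", "G12D")}
predicted = {("doc1", "R273H"), ("doc2", "G12D"), ("doc2", "Q61K")}

tp = len(gold & predicted)        # true positives: found and correct
precision = tp / len(predicted)   # share of predictions that are correct
recall = tp / len(gold)           # share of gold mentions that were found
f1 = 2 * precision * recall / (precision + recall)

print(precision, recall, f1)
```

The set-intersection formulation is exactly what an aggregate SPARQL query expresses declaratively, which is why the authors can report metrics without task-specific evaluation code.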

  20. Benchmarking : una herramienta para gestionar la excelencia en las bibliotecas y los servicios de información

    OpenAIRE

    Alonso-Arévalo, Julio; Martín Cerro, Sonia

    2000-01-01

    An organization can use diverse methodological tools with the purpose of obtaining the best results under competitive conditions. One of them is benchmarking, which is used to identify the best practices in other organizations, with the objective of learning from them and improving a process or a certain function. The concept of benchmarking and its different modalities are defined. The characteristics and possibilities of its application in libraries and information units are analyzed...

  1. EPRI depletion benchmark calculations using PARAGON

    International Nuclear Information System (INIS)

    Kucukboyaci, Vefa N.

    2015-01-01

    Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • Benchmarks cover a wide range of enrichments, burnups, cooling times, and burnable absorbers, and different depletion and storage conditions. • Results from the PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII based data reduce the excess conservatism and bring the predictions closer to benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse’s burnup credit methodology uses PARAGON™ (Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivity has been used to create 11 benchmark cases for 10, 20, 30, 40, 50, and 60 GWd/MTU and 3 cooling times: 100 h, 5 years, and 15 years. These benchmark cases are analyzed with PARAGON and the SCALE package, and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to assess that the 5% decrement approach is conservative for determining depletion uncertainty.

  2. Benchmark neutron porosity log calculations

    International Nuclear Information System (INIS)

    Little, R.C.; Michael, M.; Verghese, K.; Gardner, R.P.

    1989-01-01

    Calculations have been made for a benchmark neutron porosity log problem with the general purpose Monte Carlo code MCNP and the specific purpose Monte Carlo code McDNL. For accuracy and timing comparison purposes the CRAY XMP and MicroVax II computers have been used with these codes. The CRAY has been used for an analog version of the MCNP code while the MicroVax II has been used for the optimized variance reduction versions of both codes. Results indicate that the two codes give the same results within calculated standard deviations. Comparisons are given and discussed for accuracy (precision) and computation times for the two codes

  3. Benchmarking infrastructure for mutation text mining.

    Science.gov (United States)

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.

  4. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; and provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  5. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, on the basis of four different applications of benchmarking. The regulation of utility companies will be treated, after which...

  6. Interpretation of Blood Microbiology Results - Function of the Clinical Microbiologist.

    Science.gov (United States)

    Kristóf, Katalin; Pongrácz, Júlia

    2016-04-01

    The proper use and interpretation of blood microbiology results may be one of the most challenging and one of the most important functions of clinical microbiology laboratories. Effective implementation of this function requires careful consideration of specimen collection and processing, pathogen detection techniques, and prompt and precise reporting of identification and susceptibility results. The responsibility of the treating physician is the proper formulation of the analytical request and providing the laboratory with complete and precise patient information, which are essential prerequisites of proper testing and interpretation. The clinical microbiologist can offer advice concerning the differential diagnosis, sampling techniques and detection methods to facilitate diagnosis. Rapid detection methods are essential, since the sooner a pathogen is detected, the better chance the patient has of being cured. Besides the gold-standard blood culture technique, microbiologic methods that decrease the time to a relevant result are more and more utilized today. In the case of certain pathogens, the pathogen can be identified directly from the blood culture bottle after propagation, with serological or automated/semi-automated systems, molecular methods, or MALDI-TOF MS (matrix-assisted laser desorption-ionization time-of-flight mass spectrometry). Molecular biology methods are also suitable for the rapid detection and identification of pathogens from aseptically collected blood samples. Another important duty of the microbiology laboratory is to notify the treating physician immediately of all relevant information if a positive sample is detected. The clinical microbiologist may provide important guidance regarding the clinical significance of blood isolates, since one-third to one-half of blood culture isolates are contaminants or isolates of unknown clinical significance. To fully exploit the benefits of blood culture and other (non-culture...

  7. Benchmarking, benchmarks, or best practices? Applying quality improvement principles to decrease surgical turnaround time.

    Science.gov (United States)

    Mitchell, L

    1996-01-01

    The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.
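The figures quoted above can be checked with a line of arithmetic (the numbers are taken directly from the abstract; nothing else is assumed):

```python
# Surgical turnaround times, in minutes, as reported in the abstract.
benchmark = 13.5   # 1992 national benchmark between cases
baseline = 19.9    # initial time at St. Joseph's Medical Center
improved = 16.3    # average after the team's solutions

# Relative improvement over the baseline, and remaining gap to the benchmark.
improvement_pct = (baseline - improved) / baseline * 100
gap_to_benchmark = improved - benchmark

print(improvement_pct, gap_to_benchmark)
```

The computed improvement rounds to the 18% the abstract reports, and the remaining 2.8-minute gap to the national benchmark shows why the team framed this as progress rather than completion.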

  8. Cloud benchmarking for performance

    OpenAIRE

    Varghese, Blesson; Akgun, Ozgur; Miguel, Ian; Thai, Long; Barker, Adam

    2014-01-01

    Date of Acceptance: 20/09/2014 How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. The above question is addressed by proposing a six step benchmarking methodology in which a user provides a set of four weights that indicate how important each of the following groups: memory, processor, computa...
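The weighted-ranking idea behind the methodology above can be sketched as follows. This is only a hedged illustration: the paper's actual six steps and attribute groups are not reproduced, and the VM names, scores, and the fourth group name ("storage") are invented for the example.

```python
# User-supplied importance weights for the benchmark attribute groups.
weights = {"memory": 3, "processor": 4, "computation": 2, "storage": 1}

# Normalized per-group benchmark scores (higher is better) for candidate VMs.
vms = {
    "vm.small": {"memory": 0.4, "processor": 0.3, "computation": 0.5, "storage": 0.6},
    "vm.large": {"memory": 0.9, "processor": 0.8, "computation": 0.7, "storage": 0.5},
}

def weighted_score(scores, weights):
    """Combine group scores using the user's importance weights."""
    total = sum(weights.values())
    return sum(weights[g] * scores[g] for g in weights) / total

# Rank VMs so the best match for this user's priorities comes first.
ranked = sorted(vms, key=lambda v: weighted_score(vms[v], weights), reverse=True)
print(ranked)
```

The point of the weighting step is that "maximum performance" is user-relative: a memory-bound workload and a CPU-bound one will rank the same VM offerings differently.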

  9. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  10. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Sawan, M.E.; Cheng, E.T.

    1985-01-01

    Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li17Pb83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the TBR (tritium breeding ratio) to group structure and weighting spectrum increases as the thickness and Li enrichment decrease, with up to 20% discrepancies for thin natural Li17Pb83 blankets.

  11. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Sawan, M.L.; Cheng, E.T.

    1986-01-01

    Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li₁₇Pb₈₃ and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the tritium breeding ratio to group structure and weighting spectrum increases as the thickness and Li enrichment decrease, with up to 20% discrepancies for thin natural Li₁₇Pb₈₃ blankets. (author)

  12. Benchmarking reference services: an introduction.

    Science.gov (United States)

    Marshall, J G; Buchanan, H S

    1995-01-01

    Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.

  13. The zero-dimensional O(N) vector model as a benchmark for perturbation theory, the large-N expansion and the functional renormalization group

    International Nuclear Information System (INIS)

    Keitel, Jan; Bartosch, Lorenz

    2012-01-01

    We consider the zero-dimensional O(N) vector model as a simple example to calculate n-point correlation functions using perturbation theory, the large-N expansion and the functional renormalization group (FRG). Comparing our findings with exact results, we show that perturbation theory breaks down for moderate interactions for all N, as one should expect. While the interaction-induced shift of the free energy and the self-energy are well described by the large-N expansion even for small N, this is not the case for higher order correlation functions. However, using the FRG in its one-particle irreducible formalism, we see that very few running couplings suffice to get accurate results for arbitrary N in the strong coupling regime, outperforming the large-N expansion for small N. We further remark on how the derivative expansion, a well-known approximation strategy for the FRG, reduces to an exact method for the zero-dimensional O(N) vector model. (paper)
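
    Since the model is zero-dimensional, the "exact results" referred to above reduce to ordinary one-dimensional integrals, which makes the breakdown of perturbation theory easy to reproduce numerically. The sketch below uses the N = 1 case with an illustrative action x²/2 + g·x⁴ (a common convention, not necessarily the paper's normalization) and compares the exact two-point function against truncated perturbation series.

```python
import math
import numpy as np

def exact_moment(g, xmax=10.0, n=200001):
    """Exact <x^2> for the N = 1 toy model Z(g) = ∫ dx exp(-x²/2 - g·x⁴), by quadrature."""
    x = np.linspace(-xmax, xmax, n)
    w = np.exp(-0.5 * x**2 - g * x**4)
    return float(np.sum(x**2 * w) / np.sum(w))

def dfact(k):
    """Double factorial k!!, with the convention (-1)!! = 1."""
    out = 1
    while k > 1:
        out, k = out * k, k - 2
    return out

def perturbative_moment(g, order):
    """<x^2> from the Taylor expansion of exp(-g·x⁴), both series truncated at `order`.

    Uses the Gaussian moments <x^(2m)>_0 = (2m - 1)!!.
    """
    num = sum((-g) ** m / math.factorial(m) * dfact(4 * m + 1) for m in range(order + 1))
    den = sum((-g) ** m / math.factorial(m) * dfact(4 * m - 1) for m in range(order + 1))
    return num / den

g = 0.1
print("exact     :", exact_moment(g))
for order in (1, 2, 4, 8):
    print(f"order {order:2d}  :", perturbative_moment(g, order))
```

    Already at g = 0.1 the low-order partial sums oscillate away from the exact value, illustrating the asymptotic (divergent) character of the perturbation series that the abstract describes.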

  14. Benchmark calculations of power distribution within assemblies

    International Nuclear Information System (INIS)

    Cavarec, C.; Perron, J.F.; Verwaerde, D.; West, J.P.

    1994-09-01

    The main objective of this benchmark is to compare different techniques for fine flux prediction based upon coarse-mesh diffusion or transport calculations. We proposed five "core" configurations including different assembly types (17 x 17 pins; "uranium", "absorber" or "MOX" assemblies) with different boundary conditions. The specification required results in terms of reactivity, pin-by-pin fluxes and production rate distributions. The proposal for these benchmark calculations was made by J.C. Lefebvre, J. Mondot and J.P. West, and the specification (with nuclear data, assembly types, core configurations for 2D geometry and results presentation) was distributed to correspondents of the OECD Nuclear Energy Agency. 11 countries and 19 companies answered the exercise proposed by this benchmark. Both heterogeneous and homogeneous calculations were made, and various methods were used to produce the results: diffusion (finite differences, nodal, ...) and transport (Pij, Sn, Monte Carlo). This report presents an analysis and intercomparison of all the results received.

  15. Benchmarking Non-Hardware Balance of System (Soft) Costs for U.S. Photovoltaic Systems Using a Data-Driven Analysis from PV Installer Survey Results

    Energy Technology Data Exchange (ETDEWEB)

    Ardani, K.; Barbose, G.; Margolis, R.; Wiser, R.; Feldman, D.; Ong, S.

    2012-11-01

    This report presents results from the first U.S. Department of Energy (DOE) sponsored, bottom-up data-collection and analysis of non-hardware balance-of-system costs--often referred to as 'business process' or 'soft' costs--for residential and commercial photovoltaic (PV) systems.

  16. HYDROCOIN [HYDROlogic COde INtercomparison] Level 1: Benchmarking and verification test results with CFEST [Coupled Fluid, Energy, and Solute Transport] code: Draft report

    International Nuclear Information System (INIS)

    Yabusaki, S.; Cole, C.; Monti, A.M.; Gupta, S.K.

    1987-04-01

    Part of the safety analysis is evaluating groundwater flow through the repository and the host rock to the accessible environment by developing mathematical or analytical models and numerical computer codes describing the flow mechanisms. This need led to the establishment of an international project called HYDROCOIN (HYDROlogic COde INtercomparison) organized by the Swedish Nuclear Power Inspectorate, a forum for discussing techniques and strategies in subsurface hydrologic modeling. The major objective of the present effort, HYDROCOIN Level 1, is determining the numerical accuracy of the computer codes. The definition of each case includes the input parameters, the governing equations, the output specifications, and the format. The Coupled Fluid, Energy, and Solute Transport (CFEST) code was applied to solve cases 1, 2, 4, 5, and 7; the Finite Element Three-Dimensional Groundwater (FE3DGW) Flow Model was used to solve case 6. Case 3 has been ignored because unsaturated flow is not pertinent to SRP. This report presents the Level 1 results furnished by the project teams. The numerical accuracy of the codes is determined by (1) comparing the computational results with analytical solutions for cases that have analytical solutions (namely cases 1 and 4), and (2) intercomparing results from codes for cases which do not have analytical solutions (cases 2, 5, 6, and 7). Cases 1, 2, 6, and 7 relate to flow analyses, whereas cases 4 and 5 require nonlinear solutions. 7 refs., 71 figs., 9 tabs

  17. Production of neutronic discrete equations for a cylindrical geometry in one group energy and benchmark the results with MCNP-4B code with one group energy library

    International Nuclear Information System (INIS)

    Salehi, A. A.; Vosoughi, N.; Shahriari, M.

    2002-01-01

    In reactor core neutronic calculations, one usually chooses a control volume and accounts for the input, output, production and absorption inside it, finally deriving the neutron transport equation. This equation is not easy to solve except for simple, symmetrical geometries. The objective of this paper is to introduce a new direct method for neutronic calculations. The method is based on the physics of the problem: by meshing the desired geometry, writing the balance equation for each mesh interval and accounting for the coupling between neighbouring mesh intervals, the final series of discrete equations is produced directly, without deriving the neutron transport differential equation and without having to pass through that differential-equation bridge. This method, named the Direct Discrete Method, was applied in the static state to a cylindrical geometry in one energy group. The validity of the results of the new method was tested against the MCNP-4B code with a one-group energy library. The one-group direct discrete equations produce excellent results, which compare well with those of MCNP-4B
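
    The balance-equation idea can be illustrated in a much simpler setting than the paper's cylindrical one-group transport formulation. The sketch below meshes a 1-D slab, writes a leakage-plus-absorption balance for every mesh interval, and solves the resulting discrete system directly; all geometry and material data are invented for illustration.

```python
import numpy as np

# For every mesh cell i, write the discrete balance
#   (net leakage out of cell i) + (absorption in cell i) = (source in cell i),
# coupling each cell to its neighbours, then solve the system directly.
L, N = 10.0, 50                  # slab width (cm), number of mesh cells -- illustrative
h = L / N
D, sigma_a, S = 1.2, 0.05, 1.0   # diffusion coefficient, absorption, uniform source -- invented

A = np.zeros((N, N))
b = np.full(N, S * h)            # source integrated over each cell
for i in range(N):
    A[i, i] = 2.0 * D / h + sigma_a * h   # outflow to both neighbours + absorption
    if i > 0:
        A[i, i - 1] = -D / h              # inflow from the left neighbour
    if i < N - 1:
        A[i, i + 1] = -D / h              # inflow from the right neighbour
# Missing neighbour terms at the two boundary rows imply zero flux just outside the slab.

phi = np.linalg.solve(A, b)      # cell fluxes; the profile peaks at the slab centre
print("peak flux:", phi.max())
```

    The symmetric tridiagonal system is exactly the "final discrete equation series" the abstract describes, written down from the balance itself rather than from a differential equation.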

  18. Functional and social results of osseointegrated hearing aids

    Directory of Open Access Journals (Sweden)

    Inmaculada MORENO-ALARCÓN

    2017-06-01

    Introduction and objective: Osseointegrated implants are nowadays a good therapeutic option for patients suffering from conductive or mixed hearing loss. The aims of this study are to assess the audiological benefit for patients with osseointegrated implants and to quantify the change in their quality of life. Method: The study included 10 patients who were implanted in our hospital between March 2013 and September 2014. The instrument used to quantify their quality of life was the Glasgow Benefit Inventory (GBI), together with a questionnaire including three questions: use of the implant, postoperative pain, and whether they would recommend the operation to other patients. Audiological assessment was performed through tone audiometry and free-field speech audiometry. Results: The average total benefit score on the Glasgow Benefit Inventory was +58; the general, social and physical scores were +75, +18 and +29, respectively. The improvement with the implant in free-field tonal audiometry at the frequencies of 500, 1000 and 2000 Hz was found to be statistically significant, as was the difference between verbal audiometry before and after implantation. Discussion: Improvements in the surgical technique for osseointegrated implants, at present minimally invasive, foreground the assessment of functional and social aspects as a measure of their effectiveness. Conclusions: The use of an osseointegrated implant is associated with an important improvement at the audiological level, especially in patients with conductive or mixed hearing loss, together with a great change in the quality of life of implanted patients.

  19. The benchmark testing of 9Be of CENDL-3

    International Nuclear Information System (INIS)

    Liu Ping

    2002-01-01

    CENDL-3, the latest version of the China Evaluated Nuclear Data Library, has been completed. The data for ⁹Be were updated and recently distributed for benchmark analysis. The calculated results are presented and compared with the experimental data and with results based on other evaluated nuclear data libraries. The comparison shows that CENDL-3 performs better than the other libraries for most benchmarks

  20. Benchmarking HIV health care

    DEFF Research Database (Denmark)

    Podlekareva, Daria; Reekie, Joanne; Mocroft, Amanda

    2012-01-01

    ABSTRACT: BACKGROUND: State-of-the-art care involving the utilisation of multiple health care interventions is the basis for an optimal long-term clinical prognosis for HIV-patients. We evaluated health care for HIV-patients based on four key indicators. METHODS: Four indicators of health care we...... document pronounced regional differences in adherence to guidelines and can help to identify gaps and direct target interventions. It may serve as a tool for assessment and benchmarking the clinical management of HIV-patients in any setting worldwide....

  1. Benchmarking Cloud Storage Systems

    OpenAIRE

    Wang, Xing

    2014-01-01

    With the rise of cloud computing, many cloud storage systems like Dropbox, Google Drive and Mega have been built to provide decentralized and reliable file storage. It is thus of prime importance to know their features, performance, and the best way to make use of them. In this context, we introduce BenchCloud, a tool designed as part of this thesis to conveniently and efficiently benchmark any cloud storage system. First, we provide a study of six commonly-used cloud storage systems to ident...

  2. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    An infrastructure is emerging that enables the positioning of populations of on-line, mobile service users. In step with this, research in the management of moving objects has attracted substantial attention. In particular, quite a few proposals now exist for the indexing of moving objects...... takes into account that the available positions of the moving objects are inaccurate, an aspect largely ignored in previous indexing research. The concepts of data and query enlargement are introduced for addressing inaccuracy. As proof of concepts of the benchmark, the paper covers the application...

  3. Functional Echomyography of the human denervated muscle: first results

    Directory of Open Access Journals (Sweden)

    Riccardo Zanato

    2011-03-01

    In this study we followed three patients with permanent denervation using ultrasound to evaluate changes in morphology, thickness, contraction and vascularisation of muscles undergoing the home-based electrical stimulation programme of the Rise2-Italy project. Over a period of 1 year for the first subject, 6 months for the second and 3 months for the third, we studied the denervated muscle with ultrasound, comparing it (where possible) to the contralateral normal muscle. We evaluated: 1. changes in morphology and sonographic structure of the pathological muscle; 2. muscular thickness in response to the electrical stimulation therapy; 3. short-term modifications in muscle perfusion and arterial flow patterns after stimulation; 4. contraction-relaxation kinetics induced by volitional activity or electrical stimulation. The morphology and ultrasonographic structure of the denervated muscles changed during the stimulation period from a pattern typical of complete muscular atrophy to a pattern which might be considered "normal" if detected in an elderly patient. Thickness improved significantly more in the middle third than in the proximal and distal thirds of the denervated muscle, reaching, in the last measurements of the first subject, approximately the same thickness as the contralateral normal muscle. In all the measurements made within this study, Doppler ultrasound (US) showed that the arterial flow of the denervated muscle had a low-resistance pattern at rest and a pulsed pattern after electrical stimulation. The stimulation-induced pattern is similar to the triphasic high-resistance pattern of normal muscle. Contraction-relaxation kinetics, measured by recording the muscular movements during electrical stimulation, showed abnormal behaviour of the denervated muscle during the relaxation phase, which was significantly longer than in normal muscle (880 ms in the denervated muscle vs 240 ms in the contralateral normal one)

  4. Human factors reliability Benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-06-01

    The Joint Research Centre of the European Commission has organized a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organized around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report contains the final summary reports produced by the participants in the exercise

  5. Benchmark calculations with correlated molecular wave functions. VI. Second row A2 and first row/second row AB diatomic molecules

    International Nuclear Information System (INIS)

    Woon, D.E.; Dunning, T.H. Jr.

    1994-01-01

    Benchmark calculations employing the correlation consistent basis sets of Dunning and co-workers are reported for the following diatomic species: Al₂, Si₂, P₂, S₂, Cl₂, SiS, PS, PN, PO, and SO. Internally contracted multireference configuration interaction (CMRCI) calculations (correlating valence electrons only) have been performed for each species. For Cl₂, P₂, and PN, calculations have also been carried out using Møller-Plesset perturbation theory (MP2, MP3, MP4) and the singles and doubles coupled-cluster method with and without perturbative triples [CCSD, CCSD(T)]. Spectroscopic constants and dissociation energies are reported for the ground state of each species. In addition, the low-lying excited states of Al₂ and Si₂ have been investigated. Estimated complete basis set (CBS) limits for the dissociation energies, Dₑ, and other spectroscopic constants are obtained from simple exponential extrapolations of the computed quantities. At the CBS limit the root-mean-square (rms) error in Dₑ for the CMRCI calculations, the intrinsic error, on the ten species considered here is 3.9 kcal/mol; for rₑ the rms intrinsic error is 0.009 Å, and for ωₑ it is 5.1 cm⁻¹

  6. Benchmarking multimedia performance

    Science.gov (United States)

    Zandi, Ahmad; Sudharsanan, Subramania I.

    1998-03-01

    With the introduction of faster processors and special instruction sets tailored to multimedia, a number of exciting applications are now feasible on the desktop. Among these is DVD playback, consisting, among other things, of MPEG-2 video and Dolby Digital or MPEG-2 audio. Other multimedia applications such as video conferencing and speech recognition are also becoming popular on computer systems. In view of this tremendous interest in multimedia, a group of major computer companies has formed the Multimedia Benchmarks Committee as part of the Standard Performance Evaluation Corp. (SPEC) to address the performance issues of multimedia applications. The approach is multi-tiered, with three tiers of fidelity from minimal to fully compliant. In each case the fidelity of the bitstream reconstruction as well as the quality of the video or audio output are measured and the system is classified accordingly. In the next step the performance of the system is measured. Many multimedia applications, such as DVD playback, need to run at a specific rate; in this case the measurement of the excess processing power makes all the difference. All of this makes a system-level, application-based multimedia benchmark very challenging. Several ideas and methodologies for each aspect of the problem will be presented and analyzed.

  7. Pynamic: the Python Dynamic Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large-scale MPI-based applications can create significant file I/O and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide range of the DLL usage of Python-based applications for large-scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large-scale system software and tools using Pynamic.

  8. A Comparative Study of Differential Evolution, Particle Swarm Optimization, and Evolutionary Algorithms on Numerical Benchmark Problems

    DEFF Research Database (Denmark)

    Vesterstrøm, Jacob Svaneborg; Thomsen, Rene

    2004-01-01

    Several extensions to evolutionary algorithms (EAs) and particle swarm optimization (PSO) have been suggested during the last decades offering improved performance on selected benchmark problems. Recently, another search heuristic termed differential evolution (DE) has shown superior performance...... in several real-world applications. In this paper, we evaluate the performance of DE, PSO, and EAs regarding their general applicability as numerical optimization techniques. The comparison is performed on a suite of 34 widely used benchmark problems. The results from our study show that DE generally...... outperforms the other algorithms. However, on two noisy functions, both DE and PSO were outperformed by the EA....
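
    The differential evolution heuristic evaluated above can be sketched in a few lines. The classic DE/rand/1/bin scheme below minimizes the sphere function, a standard member of such benchmark suites; the control parameters (F = 0.5, CR = 0.9) are common defaults, not necessarily the settings used in the study.

```python
import numpy as np

def sphere(x):
    """Classic benchmark function: global minimum 0 at the origin."""
    return float(np.sum(np.asarray(x) ** 2))

def differential_evolution(f, dim=5, pop_size=30, F=0.5, CR=0.9,
                           gens=300, bounds=(-5.0, 5.0), seed=0):
    """Minimal DE/rand/1/bin sketch; parameter values are typical defaults, not tuned."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Mutation: combine three distinct individuals other than i
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             size=3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover, guaranteeing at least one gene from the mutant
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection: the trial replaces the target if no worse
            tf = f(trial)
            if tf <= fit[i]:
                pop[i], fit[i] = trial, tf
    best = int(np.argmin(fit))
    return pop[best], fit[best]

best_x, best_f = differential_evolution(sphere)
print("best objective value:", best_f)
```

    Swapping `sphere` for other benchmark functions (Rastrigin, Rosenbrock, ...) reproduces the kind of suite-wide comparison the paper performs, though a fair study would also average over many random seeds as the authors do.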

  9. Calculation of Single Cell and Fuel Assembly IRIS Benchmarks Using WIMSD5B and GNOMER Codes

    International Nuclear Information System (INIS)

    Pevec, D.; Grgic, D.; Jecmenica, R.

    2002-01-01

    IRIS (an acronym for International Reactor Innovative and Secure) is a modular, integral, light-water-cooled, small-to-medium-power (100-335 MWe/module) reactor which addresses the requirements defined by the United States Department of Energy for Generation IV nuclear energy systems, i.e., proliferation resistance, enhanced safety, improved economics, and waste reduction. An international consortium led by Westinghouse/BNFL was created for the development of the IRIS reactor; it includes universities, institutes, commercial companies, and utilities. The Faculty of Electrical Engineering and Computing, University of Zagreb, joined the consortium in 2001 with the aim of taking part in IRIS neutronics design and safety analyses of IRIS transients. A set of neutronic benchmarks for the IRIS reactor was defined with the objective of comparing the results of all participants under exactly the same assumptions. In this paper a calculation of Benchmark 44 for the IRIS reactor is described. Benchmark 44 is defined as a core depletion benchmark problem for specified IRIS reactor operating conditions (e.g., temperatures, moderator density) without feedback. Enriched boron, inhomogeneously distributed in the axial direction, is used as an integral fuel burnable absorber (IFBA). The aim of this benchmark was to enable a more direct comparison of the results of different code systems. Calculations of Benchmark 44 were performed using the modified CORD-2 code package, which consists of the WIMSD and GNOMER codes. WIMSD is a well-known lattice spectrum calculation code. GNOMER solves the neutron diffusion equation in three-dimensional Cartesian geometry by the Green's function nodal method.
The following parameters were obtained in Benchmark 44 analysis: effective multiplication factor as a function of burnup, nuclear peaking factor as a function of burnup, axial offset as a function of burnup, core-average axial power profile, core radial power profile, axial power profile for selected
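
    The central quantity in such benchmarks, the effective multiplication factor, comes from a generalized eigenvalue problem for the diffusion operator. As a toy illustration (one-group, 1-D slab diffusion rather than the 3-D nodal calculation used for IRIS, with invented cross sections), power iteration recovers k-effective as follows:

```python
import numpy as np

# Illustrative one-group, 1-D slab diffusion eigenvalue problem M·phi = (1/k)·F·phi,
# solved by power iteration. All cross sections and dimensions are invented; they are
# not the IRIS Benchmark 44 data.
L, N = 100.0, 200                            # slab width (cm), mesh cells
h = L / N
D, sigma_a, nu_sigma_f = 1.0, 0.02, 0.025    # diffusion coeff., absorption, nu*fission

# Loss operator M (leakage + absorption) with zero-flux boundary conditions
M = np.zeros((N, N))
for i in range(N):
    M[i, i] = 2.0 * D / h**2 + sigma_a
    if i > 0:
        M[i, i - 1] = -D / h**2
    if i < N - 1:
        M[i, i + 1] = -D / h**2

phi, k = np.ones(N), 1.0
for _ in range(200):
    fission = nu_sigma_f * phi
    phi = np.linalg.solve(M, fission / k)          # M·phi_new = (1/k)·F·phi_old
    k *= (nu_sigma_f * phi).sum() / fission.sum()  # eigenvalue update
    phi /= phi.max()                               # normalize to avoid drift

print("k-effective:", round(k, 4))
```

    For this homogeneous slab the result can be checked against one-group theory, k ≈ νΣf / (Σa + D·Bg²) with geometric buckling Bg = π/L; a depletion benchmark would repeat such an eigenvalue solve at each burnup step with updated cross sections.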

  10. FUNCTIONAL RESULTS OF ENDOSCOPIC EXTRAPERITONEAL RADICAL INTRAFASCIAL PROSTATECTOMY

    Directory of Open Access Journals (Sweden)

    D. V. Perlin

    2014-01-01

    Introduction. Endoscopic radical prostatectomy is a highly effective treatment for localized prostate cancer. Intrafascial prostate dissection ensures early recovery of urinary continence and erectile function. This article sums up our own experience of performing intrafascial endoscopic prostatectomy. Materials and methods. 25 patients have undergone this procedure. 12 months after surgery, 88.2% of the patients were fully continent and 11.7% had symptoms of minimal stress urinary incontinence. We encountered no cases of positive surgical margins and one case of biochemical recurrence of the disease. Conclusion. Oncologically, intrafascial endoscopic radical prostatectomy is as effective as other modifications of radical prostatectomy and has the benefit of early recovery of urinary continence and erectile function

  11. Further Results on Constructions of Generalized Bent Boolean Functions

    Science.gov (United States)

    2016-03-01

    ...China; Naval Postgraduate School, Applied Mathematics Department, Monterey, CA 93943, USA; Science and Technology on Communication Security... Bent functions were introduced in 1976 as an interesting combinatorial object with the important property of having optimal nonlinearity [1]. Since bent functions have many...

  12. A 3D stylized half-core CANDU benchmark problem

    International Nuclear Information System (INIS)

    Pounders, Justin M.; Rahnema, Farzad; Serghiuta, Dumitru; Tholammakkil, John

    2011-01-01

    A 3D stylized half-core Canadian deuterium uranium (CANDU) reactor benchmark problem is presented. The benchmark problem is comprised of a heterogeneous lattice of 37-element natural uranium fuel bundles, heavy water moderated, heavy water cooled, with adjuster rods included as reactivity control devices. Furthermore, a 2-group macroscopic cross section library has been developed for the problem to increase the utility of this benchmark for full-core deterministic transport methods development. Monte Carlo results are presented for the benchmark problem in cooled, checkerboard void, and full coolant void configurations.

  13. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts to...... contribution to the discussions within the EU-sponsored BEST Thematic Network (Benchmarking European Sustainable Transport), which ran from 2000 to 2003....

  14. Power reactor pressure vessel benchmarks

    International Nuclear Information System (INIS)

    Rahn, F.J.

    1978-01-01

    A review is given of the current status of experimental and calculational benchmarks for use in understanding the radiation embrittlement effects in the pressure vessels of operating light water power reactors. The requirements of such benchmarks for application to pressure vessel dosimetry are stated. Recent developments in active and passive neutron detectors sensitive in the ranges of importance to embrittlement studies are summarized and recommendations for improvements in the benchmark are made. (author)

  15. New results on holographic three-point functions

    International Nuclear Information System (INIS)

    Bianchi, Massimo; Prisco, Maurizio; Mueck, Wolfgang

    2003-01-01

    We exploit a gauge invariant approach for the analysis of the equations governing the dynamics of active scalar fluctuations coupled to the fluctuations of the metric along holographic RG flows. In the present approach, a second order ODE for the active scalar emerges rather simply and makes it possible to use the Green's function method to deal with (quadratic) interaction terms. We thus fill a gap for active scalar operators, whose three-point functions have been inaccessible so far, and derive a general, explicitly Bose symmetric formula thereof. As an application we compute the relevant three-point function along the GPPZ flow and extract the irreducible trilinear couplings of the corresponding super glueballs by amputating the external legs on-shell. (author)

  16. Benchmarking criticality safety calculations with subcritical experiments

    International Nuclear Information System (INIS)

    Mihalczo, J.T.

    1984-06-01

    Calculation of the neutron multiplication factor at delayed criticality may be necessary for benchmarking calculations but it may not be sufficient. The use of subcritical experiments to benchmark criticality safety calculations could result in substantial savings in fuel material costs for experiments. In some cases subcritical configurations could be used to benchmark calculations where sufficient fuel to achieve delayed criticality is not available. By performing a variety of measurements with subcritical configurations, much detailed information can be obtained which can be compared directly with calculations. This paper discusses several measurements that can be performed with subcritical assemblies and presents examples that include comparisons between calculation and experiment where possible. Where not, examples from critical experiments have been used but the measurement methods could also be used for subcritical experiments

  17. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.
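
    The metric-then-benchmark workflow the guide outlines can be sketched as: compute a facility-level metric such as energy use intensity, then compare it against peer benchmarks to infer potential actions. All facility data and thresholds below are invented placeholders, not values from the Labs21 database.

```python
# Sketch of the guide's step-by-step benchmarking process.
# Every number here is an invented placeholder; real metric data would come from
# meter readings and benchmark values from sources such as the Labs21 database.

facilities = {
    "lab_a": {"annual_kwh": 3_200_000, "floor_area_sqft": 80_000},
    "lab_b": {"annual_kwh": 1_500_000, "floor_area_sqft": 25_000},
}
# Hypothetical peer thresholds for energy use intensity (kWh/sqft/yr)
benchmarks = {"good": 35.0, "typical": 55.0}

def energy_use_intensity(facility):
    """Whole-building metric: annual site energy normalized by floor area."""
    return facility["annual_kwh"] / facility["floor_area_sqft"]

for name, facility in facilities.items():
    eui = energy_use_intensity(facility)
    if eui <= benchmarks["good"]:
        action = "better than peer 'good' level; document practices"
    elif eui <= benchmarks["typical"]:
        action = "typical; drill down into system-level metrics"
    else:
        action = "above typical; likely savings opportunities"
    print(f"{name}: {eui:.1f} kWh/sqft/yr -> {action}")
```

    The same pattern extends to the guide's system-level metrics (e.g. ventilation or cooling energy per unit area), each with its own definition, benchmark values, and inferred actions.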

  18. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  19. Exact results and open questions in first principle functional RG

    International Nuclear Information System (INIS)

    Le Doussal, Pierre

    2010-01-01

Some aspects of the functional RG (FRG) approach to pinned elastic manifolds (of internal dimension d) at finite temperature T > 0 are reviewed and reexamined in this much expanded version of Le Doussal (2006). The particle limit d = 0 provides a test for the theory: there the FRG is equivalent to the decaying Burgers equation, with viscosity ν ∼ T, both being formally irrelevant. An outstanding question in FRG, namely how temperature regularizes the otherwise singular flow of the T = 0 FRG, maps onto the viscous-layer regularization of inertial-range Burgers turbulence (i.e. onto the construction of the inviscid limit). The analogy between Kolmogorov scaling and FRG cumulant scaling is discussed. First, multi-loop FRG corrections are examined and the direct loop expansion at T > 0 is shown to fail already in d = 0; a hierarchy of ERG equations is then required (introduced in Balents and Le Doussal (2005)). Next we prove that the FRG function R(u) and the higher cumulants defined from the field theory can be obtained for any d from moments of a renormalized potential defined in a sliding harmonic well. This allows one to measure the fixed-point function R(u) in numerics and experiments. In d = 0 the beta function (of the inviscid limit) is obtained from first principles to four loops. For the Sinai model (uncorrelated Burgers initial velocities) the ERG hierarchy can be solved and the exact function R(u) is obtained. Connections to exact solutions for the statistics of shocks in Burgers turbulence and to ballistic aggregation are detailed. A relation is established between the size distribution of shocks and that of droplets. A droplet solution to the ERG functional hierarchy is found for any d, and the form of R(u) in the thermal boundary layer is related to droplet probabilities. These being known for the d = 0 Sinai model, the function R(u) is obtained there at any T. Consistency of the ε = 4 - d expansion in one- and two-loop FRG is studied from first principles, and connected to shock and

  20. Benchmarking monthly homogenization algorithms

    Science.gov (United States)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
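Two of the performance metrics listed above are easy to state precisely. The sketch below (my own illustration, not the HOME benchmark code) computes a centered root mean square error against the true homogeneous series and the error in the linear trend estimate, for a toy series containing one uncorrected break.

```python
import numpy as np

def centered_rmse(homogenized, truth):
    """RMSE after removing each series' mean (a centered comparison)."""
    a = homogenized - np.mean(homogenized)
    b = truth - np.mean(truth)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def trend_error(homogenized, truth):
    """Difference in least-squares linear trend (units per time step)."""
    t = np.arange(len(truth))
    slope_h = np.polyfit(t, homogenized, 1)[0]
    slope_t = np.polyfit(t, truth, 1)[0]
    return float(slope_h - slope_t)

truth = np.linspace(0.0, 1.0, 120)   # true homogeneous series with a trend
homog = truth.copy()
homog[60:] += 0.5                    # one uncorrected break at mid-series
print(centered_rmse(homog, truth), trend_error(homog, truth))
```

An uncorrected break of size 0.5 affecting half the series leaves a centered RMSE of 0.25 and biases the fitted trend upward, which is exactly the kind of error the benchmark's metrics are designed to expose.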

  1. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  2. Visualization of the air flow behind the automotive benchmark vent

    OpenAIRE

    Pech, Ondřej; Jedelský, Jan; Caletka, Petr; Jícha, Miroslav

    2015-01-01

    Passenger comfort in cars depends on appropriate function of the cabin HVAC system. A great attention is therefore paid to the effective function of automotive vents and proper formation of the flow behind the ventilation outlet. The article deals with the visualization of air flow from the automotive benchmark vent. The visualization was made for two different shapes of the inlet channel connected to the benchmark vent. The smoke visualization with the laser knife was used. The influence of ...

  3. An isomeric reaction benchmark set to test if the performance of state-of-the-art density functionals can be regarded as independent of the external potential.

    Science.gov (United States)

    Schwabe, Tobias

    2014-07-28

Some representative density functionals are assessed for isomerization reactions in which heteroatoms are systematically substituted with heavier members of the same element group. By this, it is investigated if the functional performance depends on the elements involved, i.e. on the external potential imposed by the atomic nuclei. Special emphasis is placed on reliable theoretical reference data and the attempt to minimize basis set effects. Both issues are challenging for molecules including heavy elements. The data suggest that no general bias can be identified for the functionals under investigation except for one case - M11-L. Nevertheless, large deviations from the reference data can be found for all functional approximations in some cases. The average error range for the nine functionals in this test is 17.6 kcal mol⁻¹. These outliers depreciate the general reliability of density functional approximations.

  4. Review for session K - benchmarks

    International Nuclear Information System (INIS)

    McCracken, A.K.

    1980-01-01

    Eight of the papers to be considered in Session K are directly concerned, at least in part, with the Pool Critical Assembly (P.C.A.) benchmark at Oak Ridge. The remaining seven papers in this session, the subject of this review, are concerned with a variety of topics related to the general theme of Benchmarks and will be considered individually

  5. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  6. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth

  7. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to p...

  8. EPA's Benchmark Dose Modeling Software

    Science.gov (United States)

The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors apply benchmark dose (BMD) methods in EPA's human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...

  9. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  10. Benchmarking: A Process for Improvement.

    Science.gov (United States)

    Peischl, Thomas M.

    One problem with the outcome-based measures used in higher education is that they measure quantity but not quality. Benchmarking, or the use of some external standard of quality to measure tasks, processes, and outputs, is partially solving that difficulty. Benchmarking allows for the establishment of a systematic process to indicate if outputs…

  11. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  12. Teaching Benchmark Strategy for Fifth-Graders in Taiwan

    Science.gov (United States)

    Yang, Der-Ching; Lai, M. L.

    2013-01-01

The key purpose of this study was to examine how we taught the use of the benchmark strategy for comparing fractions to fifth-graders in Taiwan. 26 fifth-graders from a public elementary school in southern Taiwan were selected to join this study. Results of this case study showed that students made much progress in the use of the benchmark strategy when comparing fractions…

  13. WWER in-core fuel management benchmark definition

    International Nuclear Information System (INIS)

    Apostolov, T.; Alekova, G.; Prodanova, R.; Petrova, T.; Ivanov, K.

    1994-01-01

Two benchmark problems for WWER-440, including design parameters, operating conditions and measured quantities, are discussed in this paper. Some benchmark results for the effective multiplication factor K_eff, the natural boron concentration C_B, and the relative power distribution K_q, obtained by use of the code package, are presented. (authors). 5 refs., 3 tabs

  14. Benchmarking Academic Libraries: An Australian Case Study.

    Science.gov (United States)

    Robertson, Margaret; Trahn, Isabella

    1997-01-01

    Discusses experiences and outcomes of benchmarking at the Queensland University of Technology (Australia) library that compared acquisitions, cataloging, document delivery, and research support services with those of the University of New South Wales. Highlights include results as a catalyst for change, and the use of common output and performance…

  15. Benchmarking singlet and triplet excitation energies of molecular semiconductors for singlet fission: Tuning the amount of HF exchange and adjusting local correlation to obtain accurate functionals for singlet-triplet gaps

    Science.gov (United States)

    Brückner, Charlotte; Engels, Bernd

    2017-01-01

Vertical and adiabatic singlet and triplet excitation energies of molecular p-type semiconductors calculated with various DFT functionals and wave-function based approaches are benchmarked against MS-CASPT2/cc-pVTZ reference values. A special focus lies on the singlet-triplet gaps that are very important in the process of singlet fission. Singlet fission has the potential to boost device efficiencies of organic solar cells, but the scope of existing singlet-fission compounds is still limited. A computational prescreening of candidate molecules could enlarge it; yet it requires efficient methods that accurately predict singlet and triplet excitation energies. Different DFT formulations (Tamm-Dancoff approximation, linear response time-dependent DFT, Δ-SCF) and spin scaling schemes along with several ab initio methods (CC2, ADC(2)/MP2, CIS(D), CIS) are evaluated. While wave-function based methods yield rather reliable singlet-triplet gaps, many DFT functionals are shown to systematically underestimate triplet excitation energies. To gain insight, the impact of exact exchange and correlation is addressed in detail.

  16. Benchmarking of nuclear economics tools

    International Nuclear Information System (INIS)

    Moore, Megan; Korinny, Andriy; Shropshire, David; Sadhankar, Ramesh

    2017-01-01

Highlights: • INPRO and GIF economic tools exhibited good alignment in total capital cost estimation. • Subtle discrepancies in the cost result from differences in financing and the fuel cycle assumptions. • A common set of assumptions was found to reduce the discrepancies to 1% or less. • Opportunities for harmonisation of economic tools exist. - Abstract: Benchmarking of the economics methodologies developed by the Generation IV International Forum (GIF) and the International Atomic Energy Agency’s International Project on Innovative Nuclear Reactors and Fuel Cycles (INPRO) was performed for three Generation IV nuclear energy systems. The Economic Modeling Working Group of GIF developed an Excel-based spreadsheet package, G4ECONS (Generation 4 Excel-based Calculation Of Nuclear Systems), to calculate the total capital investment cost (TCIC) and the levelised unit energy cost (LUEC). G4ECONS is sufficiently generic in the sense that it can accept the types of projected input, performance and cost data that are expected to become available for Generation IV systems through various development phases and that it can model both open and closed fuel cycles. The Nuclear Energy System Assessment (NESA) Economic Support Tool (NEST) was developed to enable an economic analysis using the INPRO methodology to easily calculate outputs including the TCIC, LUEC and other financial figures of merit including internal rate of return, return on investment and net present value. NEST is also Excel based and can be used to evaluate nuclear reactor systems using the open fuel cycle, MOX (mixed oxide) fuel recycling and closed cycles. A Supercritical Water-cooled Reactor system with an open fuel cycle and two Fast Reactor systems, one with a break-even fuel cycle and another with a burner fuel cycle, were selected for the benchmarking exercise. Published data on capital and operating costs were used for economics analyses using G4ECONS and NEST tools. Both G4ECONS and
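The LUEC figure of merit that both G4ECONS and NEST report is, at heart, discounted lifetime costs divided by discounted lifetime energy output. The sketch below shows that arithmetic only; the discount rate, cost figures, and plant output are invented for illustration and are not taken from the benchmark.

```python
# Hedged sketch of a levelised unit energy cost (LUEC) calculation in the
# spirit of what G4ECONS and NEST compute; all inputs below are illustrative.

def luec(capital, annual_om, annual_fuel, annual_mwh, years, rate):
    """Levelised cost = discounted lifetime costs / discounted lifetime output."""
    disc = [(1 + rate) ** -y for y in range(1, years + 1)]
    costs = capital + sum((annual_om + annual_fuel) * d for d in disc)
    energy = sum(annual_mwh * d for d in disc)
    return costs / energy   # $/MWh

# Illustrative plant: $4B overnight cost, $80M O&M/yr, $30M fuel/yr,
# 8 TWh/yr output, 60-year life, 5% discount rate
print(round(luec(4e9, 80e6, 30e6, 8e6, 60, 0.05), 2))
```

Differences between the two tools then come down to which costs enter the numerator and how financing during construction is treated, which is exactly where the benchmark found its subtle discrepancies.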

  17. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

Benchmarking is a structured continuous collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. Recent structure function results from neutrino scattering at fermilab

    International Nuclear Information System (INIS)

    Yang, U.K.; Avvakumov, S.; Barbaro, P. de

    2001-01-01

We report on the extraction of the structure functions F_2 and ΔxF_3 = xF_3^ν − xF_3^ν̄ from CCFR ν_μ-Fe and ν̄_μ-Fe differential cross sections. The extraction is performed in a physics model independent (PMI) way. This first measurement of ΔxF_3, which is useful in testing models of heavy charm production, is higher than current theoretical predictions. The ratio of the F_2 (PMI) values measured in ν_μ and μ scattering is in agreement (within 5%) with the NLO predictions using massive charm production schemes, thus resolving the long-standing discrepancy between the two sets of data. In addition, measurements of F_L (or, equivalently, R) and 2xF_1 are reported in the kinematic region where anomalous nuclear effects in R are observed at HERMES. (author)
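For orientation, ΔxF_3 can be written out explicitly. At leading order on an isoscalar target it reduces to the strange and charm sea difference, which is why it tests models of heavy (charm) quark production; this is a standard background relation, not a result quoted from the paper:

```latex
\Delta x F_3 \equiv x F_3^{\nu} - x F_3^{\bar{\nu}}
  \simeq 2x\left[s(x) + \bar{s}(x) - c(x) - \bar{c}(x)\right]
```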

  19. Benchmarking school nursing practice: the North West Regional Benchmarking Group

    OpenAIRE

    Littler, Nadine; Mullen, Margaret; Beckett, Helen; Freshney, Alice; Pinder, Lynn

    2016-01-01

    It is essential that the quality of care is reviewed regularly through robust processes such as benchmarking to ensure all outcomes and resources are evidence-based so that children and young people’s needs are met effectively. This article provides an example of the use of benchmarking in school nursing practice. Benchmarking has been defined as a process for finding, adapting and applying best practices (Camp, 1994). This concept was first adopted in the 1970s ‘from industry where it was us...

  20. Hospital benchmarking: are U.S. eye hospitals ready?

    Science.gov (United States)

    de Korne, Dirk F; van Wijngaarden, Jeroen D H; Sol, Kees J C A; Betz, Robert; Thomas, Richard C; Schein, Oliver D; Klazinga, Niek S

    2012-01-01

    Benchmarking is increasingly considered a useful management instrument to improve quality in health care, but little is known about its applicability in hospital settings. The aims of this study were to assess the applicability of a benchmarking project in U.S. eye hospitals and compare the results with an international initiative. We evaluated multiple cases by applying an evaluation frame abstracted from the literature to five U.S. eye hospitals that used a set of 10 indicators for efficiency benchmarking. Qualitative analysis entailed 46 semistructured face-to-face interviews with stakeholders, document analyses, and questionnaires. The case studies only partially met the conditions of the evaluation frame. Although learning and quality improvement were stated as overall purposes, the benchmarking initiative was at first focused on efficiency only. No ophthalmic outcomes were included, and clinicians were skeptical about their reporting relevance and disclosure. However, in contrast with earlier findings in international eye hospitals, all U.S. hospitals worked with internal indicators that were integrated in their performance management systems and supported benchmarking. Benchmarking can support performance management in individual hospitals. Having a certain number of comparable institutes provide similar services in a noncompetitive milieu seems to lay fertile ground for benchmarking. International benchmarking is useful only when these conditions are not met nationally. Although the literature focuses on static conditions for effective benchmarking, our case studies show that it is a highly iterative and learning process. The journey of benchmarking seems to be more important than the destination. Improving patient value (health outcomes per unit of cost) requires, however, an integrative perspective where clinicians and administrators closely cooperate on both quality and efficiency issues. If these worlds do not share such a relationship, the added

  1. OWL2 benchmarking for the evaluation of knowledge based systems.

    Directory of Open Access Journals (Sweden)

    Sher Afgun Khan

OWL2 semantics are becoming increasingly popular for real-domain applications like gene engineering and health MIS. The present work identifies the research gap that negligible attention has been paid to the performance evaluation of Knowledge Base Systems (KBS) using OWL2 semantics. To fill this identified research gap, an OWL2 benchmark for the evaluation of KBS is proposed. The proposed benchmark addresses the foundational blocks of an ontology benchmark, i.e. data schema, workload and performance metrics. The proposed benchmark is tested on memory-based, file-based, relational-database and graph-based KBS for performance and scalability measures. The results show that the proposed benchmark is able to evaluate the behaviour of different state-of-the-art KBS on OWL2 semantics. On the basis of the results, end users (i.e. domain experts) would be able to select a KBS appropriate for their domain.

  2. Benchmark problems for numerical implementations of phase field models

    International Nuclear Information System (INIS)

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; Warren, J.; Heinonen, O. G.

    2016-01-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
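The simpler of the two proposed problems, spinodal decomposition, can be illustrated with a toy integration. The 1-D explicit Euler scheme below is my own minimal sketch with invented grid and material parameters, not the CHiMaD/NIST specification; it shows the two properties a phase field benchmark would check first: conservation of solute and growth of the second phase.

```python
import numpy as np

# Toy 1-D Cahn-Hilliard (spinodal decomposition) integration. Grid size,
# parameters, and the explicit Euler scheme are illustrative choices only.

rng = np.random.default_rng(0)
n, dx, dt, kappa, M = 128, 1.0, 0.01, 1.0, 1.0
c = 0.05 * rng.standard_normal(n)          # small fluctuations about c = 0

def lap(f):
    """Periodic second difference."""
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2

c0_total = c.sum()
for _ in range(2000):
    mu = c**3 - c - kappa * lap(c)         # chemical potential, f(c) = (c^2 - 1)^2 / 4
    c = c + dt * M * lap(mu)               # conserved (Cahn-Hilliard) dynamics

# Solute is conserved and composition fluctuations grow toward two phases
print(abs(c.sum() - c0_total), float(c.std()))
```

Benchmark problems of the kind proposed in the paper pin down exactly such quantitative checks (conserved quantities, interface energies, coarsening rates) so that independently developed codes can be compared on equal footing.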

  3. Heterodimerization of Msx and Dlx homeoproteins results in functional antagonism.

    Science.gov (United States)

    Zhang, H; Hu, G; Wang, H; Sciavolino, P; Iler, N; Shen, M M; Abate-Shen, C

    1997-05-01

    Protein-protein interactions are known to be essential for specifying the transcriptional activities of homeoproteins. Here we show that representative members of the Msx and Dlx homeoprotein families form homo- and heterodimeric complexes. We demonstrate that dimerization by Msx and Dlx proteins is mediated through their homeodomains and that the residues required for this interaction correspond to those necessary for DNA binding. Unlike most other known examples of homeoprotein interactions, association of Msx and Dlx proteins does not promote cooperative DNA binding; instead, dimerization and DNA binding are mutually exclusive activities. In particular, we show that Msx and Dlx proteins interact independently and noncooperatively with homeodomain DNA binding sites and that dimerization is specifically blocked by the presence of such DNA sites. We further demonstrate that the transcriptional properties of Msx and Dlx proteins display reciprocal inhibition. Specifically, Msx proteins act as transcriptional repressors and Dlx proteins act as activators, while in combination, Msx and Dlx proteins counteract each other's transcriptional activities. Finally, we show that the expression patterns of representative Msx and Dlx genes (Msx1, Msx2, Dlx2, and Dlx5) overlap in mouse embryogenesis during limb bud and craniofacial development, consistent with the potential for their protein products to interact in vivo. Based on these observations, we propose that functional antagonism through heterodimer formation provides a mechanism for regulating the transcriptional actions of Msx and Dlx homeoproteins in vivo.

  4. AER benchmark specification sheet

    International Nuclear Information System (INIS)

    Aszodi, A.; Toth, S.

    2009-01-01

In the VVER-440/213 type reactors, the core outlet temperature field is monitored with in-core thermocouples, which are installed above 210 fuel assemblies. These measured temperatures are used in the determination of the fuel assembly powers and they have an important role in the reactor power limitation. For these reasons, correct interpretation of the thermocouple signals is an important question. In order to interpret the signals in a correct way, knowledge of the coolant mixing in the assembly heads is necessary. Computational Fluid Dynamics (CFD) codes and experiments can help to better understand these mixing processes and they can provide information which can support a more adequate interpretation of the thermocouple signals. This benchmark deals with the 3D CFD modeling of the coolant mixing in the heads of the profiled fuel assemblies with 12.2 mm rod pitch. Two assemblies of the 23rd cycle of the Paks NPP's Unit 3 are investigated. One of them has a symmetrical pin power profile and the other possesses an inclined profile. (authors)

  5. AER Benchmark Specification Sheet

    International Nuclear Information System (INIS)

    Aszodi, A.; Toth, S.

    2009-01-01

In the WWER-440/213 type reactors, the core outlet temperature field is monitored with in-core thermocouples, which are installed above 210 fuel assemblies. These measured temperatures are used in the determination of the fuel assembly powers and they have an important role in the reactor power limitation. For these reasons, correct interpretation of the thermocouple signals is an important question. In order to interpret the signals in a correct way, knowledge of the coolant mixing in the assembly heads is necessary. Computational fluid dynamics codes and experiments can help to better understand these mixing processes and they can provide information which can support a more adequate interpretation of the thermocouple signals. This benchmark deals with the 3D computational fluid dynamics modeling of the coolant mixing in the heads of the profiled fuel assemblies with 12.2 mm rod pitch. Two assemblies of the twenty-third cycle of the Paks NPP's Unit 3 are investigated. One of them has a symmetrical pin power profile and the other possesses an inclined profile. (Authors)

  6. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighted most heavily: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption.

  7. Functional analysis of rare variants in mismatch repair proteins augments results from computation-based predictive methods

    Science.gov (United States)

    Arora, Sanjeevani; Huwe, Peter J.; Sikder, Rahmat; Shah, Manali; Browne, Amanda J.; Lesh, Randy; Nicolas, Emmanuelle; Deshpande, Sanat; Hall, Michael J.; Dunbrack, Roland L.; Golemis, Erica A.

    2017-01-01

    The cancer-predisposing Lynch Syndrome (LS) arises from germline mutations in DNA mismatch repair (MMR) genes, predominantly MLH1, MSH2, MSH6, and PMS2. A major challenge for clinical diagnosis of LS is the frequent identification of variants of uncertain significance (VUS) in these genes, as it is often difficult to determine variant pathogenicity, particularly for missense variants. Generic programs such as SIFT and PolyPhen-2, and MMR gene-specific programs such as PON-MMR and MAPP-MMR, are often used to predict deleterious or neutral effects of VUS in MMR genes. We evaluated the performance of multiple predictive programs in the context of functional biologic data for 15 VUS in MLH1, MSH2, and PMS2. Using cell line models, we characterized VUS predicted to range from neutral to pathogenic, assessing mRNA and protein expression, basal cellular viability, viability following treatment with a panel of DNA-damaging agents, and functionality in DNA damage response (DDR) signaling, benchmarking against wild-type MMR proteins. Our results suggest that the MMR gene-specific classifiers do not always align with the experimental phenotypes related to DDR. Our study highlights the importance of complementary experimental and computational assessment to develop future predictors for the assessment of VUS. PMID:28494185

  8. Benchmarking in academic pharmacy departments.

    Science.gov (United States)

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    This paper discusses benchmarking in academic pharmacy and offers recommendations for its potential uses in academic pharmacy departments. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately to plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is also used internally to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather these data have had limited success. We believe this information is potentially important, urge that efforts to gather it be continued, and offer suggestions to achieve full participation.

  9. Results on nucleon structure functions in quantum chromodynamics

    International Nuclear Information System (INIS)

    Martin, F.

    1979-01-01

    Gluon bremsstrahlung processes inside the nucleon are investigated using the standard renormalization-group analysis. A new method of inverting the moments is applied which leads to analytic results for the parton distributions near x = 1 and x = 0. The nucleon is considered as a bound state of three quarks subsequently "renormalized" by gluon bremsstrahlung and quark-antiquark pair production. An "unrenormalized" valence quark distribution peaked at x = 1/3, with a width related to the nucleon radius, leads to good agreement with deep-inelastic data. However, the gluon distribution obtained seems too steep near x = 0.

  10. Issues in Benchmark Metric Selection

    Science.gov (United States)

    Crolotte, Alain

    It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
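The TPC-D debate recounted above is easy to reproduce numerically. The sketch below uses invented query timings (not TPC data) to show why the choice mattered: the geometric mean credits a given speed-up factor on a fast query exactly as much as the same factor on the slowest query, while the arithmetic mean tracks total elapsed time.

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # exp of the mean of logs; equivalent to the n-th root of the product
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical single-stream timings (seconds) for three queries.
baseline      = [1.0, 2.0, 400.0]
fast_query_4x = [0.25, 2.0, 400.0]  # 4x speed-up on the fastest query
slow_query_4x = [1.0, 2.0, 100.0]   # 4x speed-up on the slowest query

# The geometric mean rewards both optimizations identically...
g_fast = geometric_mean(fast_query_4x)
g_slow = geometric_mean(slow_query_4x)
# ...while the arithmetic mean (proportional to total elapsed time)
# only moves appreciably when the dominant query improves.
a_fast = arithmetic_mean(fast_query_4x)
a_slow = arithmetic_mean(slow_query_4x)
```

Under the geometric mean, a vendor could therefore chase trivial wins on already-fast queries; the arithmetic mean ties the score to overall run time.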

  11. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While several other benchmarking tools were available to California consumers prior to the development of Cal-Arch, none were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the
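At its core, whole-building benchmarking of the Cal-Arch kind places one building's energy use intensity (EUI) within a peer distribution. A minimal sketch with invented EUI values (not Cal-Arch or CBECS data):

```python
def percentile_rank(eui, peers):
    """Fraction of peer buildings using at least as much energy per unit area.
    Lower EUI gives a rank closer to 1.0 (better than most peers)."""
    return sum(1 for p in peers if p >= eui) / len(peers)

# Hypothetical annual EUIs (kBtu/sqft) for ten peer commercial buildings.
peer_euis = [45, 52, 60, 68, 75, 80, 95, 110, 130, 150]

my_eui = 60
rank = percentile_rank(my_eui, peer_euis)  # uses less energy than most peers
```

A real tool would first segment peers by building type, size, and climate region, which is exactly where the coverage limitations mentioned above bite.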

  12. A Global Vision over Benchmarking Process: Benchmarking Based Enterprises

    OpenAIRE

    Sitnikov, Catalina; Giurca Vasilescu, Laura

    2008-01-01

    Benchmarking uses the knowledge and the experience of others to improve the enterprise. Starting from the analysis of the performance and underlying the strengths and weaknesses of the enterprise it should be assessed what must be done in order to improve its activity. Using benchmarking techniques, an enterprise looks at how processes in the value chain are performed. The approach based on the vision “from the whole towards the parts” (a fragmented image of the enterprise’s value chain) redu...

  13. Benchmarking of density functionals for a soft but accurate prediction and assignment of 1H and 13C NMR chemical shifts in organic and biological molecules.

    Science.gov (United States)

    Benassi, Enrico

    2017-01-15

    A number of programs and tools that simulate 1H and 13C nuclear magnetic resonance (NMR) chemical shifts using empirical approaches are available. These tools are user-friendly, but they provide a very rough (and sometimes misleading) estimation of the NMR properties, especially for complex systems. Rigorous and reliable ways to predict and interpret NMR properties of simple and complex systems are available in many popular computational program packages. Nevertheless, experimentalists keep relying on these "unreliable" tools in their daily work because, to have a sufficiently high accuracy, these rigorous quantum mechanical methods need high levels of theory. An alternative, efficient, semi-empirical approach has been proposed by Bally, Rablen, Tantillo, and coworkers. This idea consists of creating linear calibration models, on the basis of the application of different combinations of functionals and basis sets. Following this approach, the predictive capability of a wider range of popular functionals was systematically investigated and tested. The NMR chemical shifts were computed in solvated phase at density functional theory level, using 30 different functionals coupled with three different triple-ζ basis sets. © 2016 Wiley Periodicals, Inc.
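The linear-calibration idea credited to Bally, Rablen, Tantillo, and coworkers can be sketched as an ordinary least-squares fit of experimental shifts against computed isotropic shieldings; the fitted line then converts new computed shieldings into predicted shifts. The numbers below are invented for illustration and lie exactly on a line (δ = 180 − σ), so the fit recovers slope −1 and intercept 180.

```python
import numpy as np

# Hypothetical computed isotropic shieldings (ppm) and the matching
# experimental 13C chemical shifts (ppm) for a small calibration set.
sigma_calc = np.array([160.0, 140.0, 110.0, 60.0, 30.0])
delta_exp  = np.array([ 20.0,  40.0,  70.0, 120.0, 150.0])

# Fit delta_exp = slope * sigma_calc + intercept; for a well-behaved
# functional/basis combination the slope comes out near -1.
slope, intercept = np.polyfit(sigma_calc, delta_exp, 1)

def predict_shift(sigma):
    """Scale a newly computed shielding into a predicted chemical shift."""
    return slope * sigma + intercept
```

In practice a separate (slope, intercept) pair is fitted per nucleus and per functional/basis/solvent combination, and the residual standard deviation of the fit is what the benchmarking compares.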

  14. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking applications. The present study analyses voluntary benchmarking in a public setting that is oriented towards learning. The study contributes by showing how benchmarking can be mobilised for learning and offers evidence of the effects of such benchmarking for performance outcomes. It concludes that benchmarking can enable learning in public settings but that this requires actors to invest in ensuring that benchmark data are directed towards improvement.

  15. Developing Benchmarking Criteria for CO2 Emissions

    Energy Technology Data Exchange (ETDEWEB)

    Neelis, M.; Worrell, E.; Mueller, N.; Angelini, T. [Ecofys, Utrecht (Netherlands); Cremer, C.; Schleich, J.; Eichhammer, W. [The Fraunhofer Institute for Systems and Innovation research, Karlsruhe (Germany)

    2009-02-15

    A European Union (EU) wide greenhouse gas (GHG) allowance trading scheme (EU ETS) was implemented in the EU in 2005. In the first two trading periods of the scheme (running up to 2012), free allocation based on historical emissions was the main methodology for allocation of allowances to existing installations. For the third trading period (2013-2020), the European Commission proposed in January 2008 a greater role for auctioning of allowances rather than free allocation. (Transitional) free allocation of allowances to industrial sectors will be determined via harmonized allocation rules, where feasible based on benchmarking. In general terms, a benchmark-based method allocates allowances based on a certain amount of emissions per unit of productive output (i.e. the benchmark). This study aims to derive criteria for an allocation methodology for the EU Emission Trading Scheme based on benchmarking for the period 2013-2020. To test the feasibility of the criteria, we apply them to four example product groups: iron and steel, pulp and paper, lime and glass. The basis for this study is the Commission proposal for a revised ETS directive put forward on 23 January 2008; it does not take into account any changes to this proposal in the co-decision procedure that resulted in the adoption of the Energy and Climate change package in December 2008.

  16. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    was its software assurance practices, which seemed to rate well in comparison to the other organizational groups and also seemed to include a larger scope of activities. An unexpected benefit of the software benchmarking study was the identification of many opportunities for collaboration in areas including metrics, training, sharing of CMMI experiences and resources such as instructors and CMMI Lead Appraisers, and even sharing of assets such as documented processes. A further unexpected benefit of the study was the feedback on NASA practices that was received from some of the organizations interviewed. From that feedback, other potential areas where NASA could improve were highlighted, such as accuracy of software cost estimation and budgetary practices. The detailed report contains discussion of the practices noted in each of the topic areas, as well as a summary of observations and recommendations from each of the topic areas. The resulting 24 recommendations from the topic areas were then consolidated to eliminate duplication and culled into a set of 14 suggested actionable recommendations. This final set of actionable recommendations, listed below, are items that can be implemented to improve NASA's software engineering practices and to help address many of the items that were listed in the NASA top software engineering issues. 1. Develop and implement standard contract language for software procurements. 2. Advance accurate and trusted software cost estimates for both procured and in-house software and improve the capture of actual cost data to facilitate further improvements. 3. Establish a consistent set of objectives and expectations, specifically types of metrics at the Agency level, so key trends and models can be identified and used to continuously improve software processes and each software development effort. 4. Maintain the CMMI Maturity Level requirement for critical NASA projects and use CMMI to measure organizations developing software for NASA. 5

  17. Analytical three-dimensional neutron transport benchmarks for verification of nuclear engineering codes. Final report

    International Nuclear Information System (INIS)

    Ganapol, B.D.; Kornreich, D.E.

    1997-01-01

    Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation as modified in the second year renewal application includes the following three primary tasks. Task 1 on two dimensional neutron transport is divided into (a) single medium searchlight problem (SLP) and (b) two-adjacent half-space SLP. Task 2 on three-dimensional neutron transport covers (a) point source in arbitrary geometry, (b) single medium SLP, and (c) two-adjacent half-space SLP. Task 3 on code verification, includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade

  18. [Aging of cognitive functions. Results of a longitudinal study].

    Science.gov (United States)

    Poitrenaud, J; Barrère, H; Darcet, P; Driss, F

    1983-12-29

    This study had two purposes: to assess the age-related changes in the fluid and crystallized components of intelligence in subjects over age sixty-five, and to examine whether these age-related changes were linked to any biological, psychological and social factors. The sample was composed of 50 male subjects who were examined three times: in 1968, 1973 and 1977. At the beginning of the study, their ages ranged from 60 to 79 years and they were all in good health. On the whole, their socio-economic level was high. At each wave of the study, the subjects were given the same battery of three mental tests: a vocabulary test, selected to assess crystallized intelligence; and a perceptual test and a speeded digit coding test, both selected to assess fluid intelligence. Results show that the two components of intelligence have different aging trajectories after age sixty. On the vocabulary test, performance holds until an advanced age (about 75-80), then significantly declines. On the perceptual and digit coding tests, performance declines sharply and significantly with age, in an approximately linear fashion. Whatever the test used, individual differences in age-related changes in performance are great. On the vocabulary test, this variability is linked to two factors, independently of age: among subjects who suffered a cardio-arterial disease between wave 1 and wave 3, as well as in those who did not maintain an occupational activity, decline in performance is greater than in other subjects. On the two other tests, no factor was found to be significantly linked with change in performance between wave 1 and wave 3.

  19. Extensive regularization of the coupled cluster methods based on the generating functional formalism: Application to gas-phase benchmarks and to the SN2 reaction of CHCl3 and OH- in water

    International Nuclear Information System (INIS)

    Kowalski, Karol; Valiev, Marat

    2009-01-01

    The recently introduced energy expansion based on the use of the generating functional (GF) [K. Kowalski and P. D. Fan, J. Chem. Phys. 130, 084112 (2009)] provides a way of constructing size-consistent noniterative coupled cluster (CC) corrections in terms of moments of the CC equations. To take advantage of this expansion in a strongly interacting regime, regularization of the cluster amplitudes is required in order to counteract the effect of excessive growth of the norm of the CC wave function. Although proven to be efficient, the previously discussed form of the regularization does not lead to rigorously size-consistent corrections. In this paper we address the issue of size-consistent regularization of the GF expansion by redefining the equations for the cluster amplitudes. The performance and basic features of the proposed methodology are illustrated on several gas-phase benchmark systems. Moreover, the regularized GF approaches are combined with a quantum mechanical molecular mechanics module and applied to describe the SN2 reaction of CHCl3 and OH- in aqueous solution.

  20. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Xuhui [National Renewable Energy Laboratory (NREL), Golden, CO (United States). Transportation and Hydrogen Systems Center

    2017-10-19

    In FY16, the thermal performance of the 2014 Honda Accord Hybrid power electronics thermal management system was benchmarked. Both experiments and numerical simulation were utilized to thoroughly study the thermal resistances and temperature distribution in the power module. Experimental results obtained from the water-ethylene glycol tests provided the junction-to-liquid thermal resistance. The finite element analysis (FEA) and computational fluid dynamics (CFD) models were found to yield a good match with experimental results. Both experimental and modeling results demonstrate that the passive stack is the dominant thermal resistance for both the motor and power electronics systems. The 2014 Accord power electronics system yields steady-state thermal resistance values around 42-50 mm²·K/W, depending on the flow rate. At a typical flow rate of 10 liters per minute, the thermal resistance of the Accord system was found to be about 44 percent lower than that of the 2012 Nissan LEAF system that was benchmarked in FY15. The main reason for the difference is that the Accord power module used a metalized-ceramic substrate and eliminated the thermal interface material layers. FEA models were developed to study the transient performance of the 2012 Nissan LEAF, the 2014 Accord, and two other systems that feature conventional power module designs. The simulation results indicate that the 2012 LEAF power module has the lowest thermal impedance at time scales below one second. This is probably due to moving low-thermal-conductivity materials further away from the heat source and enhancing the heat spreading effect from the copper-molybdenum plate close to the insulated gate bipolar transistors. When approaching steady state, the Honda system shows lower thermal impedance. Measurement results for the thermal resistance of the 2015 BMW i3 power electronic system indicate that the i3 insulated gate bipolar transistor module has significantly lower junction

  1. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rate systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advantages. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards the conditions under which the market mechanism performs within organizations. This paper aims to contribute to research by providing more insight into the conditions for the use of external benchmarking as an element in performance management in organizations. Our study explores a particular type of external

  2. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    In order to learn from the best, the European Commission in 2000 initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First, it is not always straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Second, ‘sustainable transport’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Third, policies are not directly comparable across space and context. For these reasons attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which

  3. How to Advance TPC Benchmarks with Dependability Aspects

    Science.gov (United States)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact both on the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring their performances. While TPC-E measures the recovery time of some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that nowadays systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  4. Benchmarking: contexts and details matter.

    Science.gov (United States)

    Zheng, Siyuan

    2017-07-05

    Benchmarking is an essential step in the development of computational tools. We take this opportunity to pitch in our opinions on tool benchmarking, in light of two correspondence articles published in Genome Biology. Please see the related Li et al. and Newman et al. correspondence articles: www.dx.doi.org/10.1186/s13059-017-1256-5 and www.dx.doi.org/10.1186/s13059-017-1257-4.

  5. Handbook of critical experiments benchmarks

    International Nuclear Information System (INIS)

    Durst, B.M.; Bierman, S.R.; Clayton, E.D.

    1978-03-01

    Data from critical experiments have been collected together for use as benchmarks in evaluating calculational techniques and nuclear data. These benchmarks have been selected from the numerous experiments performed on homogeneous plutonium systems. No attempt has been made to reproduce all of the data that exists. The primary objective in the collection of these data is to present representative experimental data defined in a concise, standardized format that can easily be translated into computer code input

  6. Semi-Analytical Benchmarks for MCNP6

    Energy Technology Data Exchange (ETDEWEB)

    Grechanuk, Pavel Aleksandrovi [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-11-07

    Code verification is an extremely important process that involves proving or disproving the validity of code algorithms by comparing them against analytical results of the underlying physics or mathematical theory on which the code is based. Monte Carlo codes such as MCNP6 must undergo verification and testing upon every release to ensure that the codes are properly simulating nature. Specifically, MCNP6 has multiple sets of problems with known analytic solutions that are used for code verification. Monte Carlo codes primarily specify either current boundary sources or a volumetric fixed source, either of which can be very complicated functions of space, energy, direction and time. Thus, most of the challenges with modeling analytic benchmark problems in Monte Carlo codes come from identifying the correct source definition to properly simulate the correct boundary conditions. The problems included in this suite all deal with mono-energetic neutron transport without energy loss, in a homogeneous material. The variables that differ between the problems are source type (isotropic/beam), medium dimensionality (infinite/semi-infinite), etc.

  7. An improved benchmark model for the Big Ten critical assembly - 021

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    2010-01-01

    A new benchmark specification is developed for the BIG TEN uranium critical assembly. The assembly has a fast spectrum, and its core contains approximately 10 wt.% enriched uranium. Detailed specifications for the benchmark are provided, and results from the MCNP5 Monte Carlo code using a variety of nuclear-data libraries are given for this benchmark and two others. (authors)

  8. Integral parameters for the Godiva benchmark calculated by using theoretical and adjusted fission spectra of 235U

    International Nuclear Information System (INIS)

    Caldeira, A.D.

    1987-05-01

    The theoretical and adjusted Watt spectrum representations for 235U are used as weighting functions to calculate Keff and θf28/θf25 for the Godiva benchmark. The results obtained show that the values of Keff and θf28/θf25 are not affected by the change in spectrum form. (author)
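For reference, the Watt representation used here as a weighting function has the standard two-parameter form χ(E) ∝ exp(-E/a)·sinh(√(bE)). The sketch below evaluates it with the commonly quoted thermal-fission 235U parameters a ≈ 0.988 MeV and b ≈ 2.249 MeV⁻¹, which may differ from the adjusted parameters discussed in the report.

```python
import math

def watt_spectrum(E, a=0.988, b=2.249):
    """Unnormalized Watt fission spectrum chi(E), with E in MeV.
    Parameters a, b are typical 235U thermal-fission values (assumed here)."""
    return math.exp(-E / a) * math.sinh(math.sqrt(b * E))

# The spectrum rises from zero, peaks below about 1 MeV,
# and then decays roughly exponentially at high energies.
values = [watt_spectrum(E) for E in (0.1, 0.7, 2.0, 6.0)]
```

Weighting a cross section with either the theoretical or the adjusted parameter set, and comparing the resulting integral quantities, is the essence of the sensitivity test the abstract describes.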

  9. ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms

    DEFF Research Database (Denmark)

    Aumüller, Martin; Bernhardsson, Erik; Faithfull, Alexander

    2017-01-01

    This paper describes ANN-Benchmarks, a tool for evaluating the performance of in-memory approximate nearest neighbor algorithms. It provides a standard interface for measuring the performance and quality achieved by nearest neighbor algorithms on different standard data sets. It supports several ways of integrating k-NN algorithms and can visualise the results as images, plots, and websites with interactive plots. ANN-Benchmarks aims to provide a constantly updated overview of the current state of the art of k-NN algorithms. In the short term, this overview allows users to choose the correct k-NN algorithm and parameters for their similarity search task; in the longer term, algorithm designers will be able to use this overview to test and refine automatic parameter tuning. The paper gives an overview of the system, evaluates the results of the benchmark, and points out directions for future work. Interestingly, very different

  10. Construction of a Benchmark for the User Experience Questionnaire (UEQ)

    Directory of Open Access Journals (Sweden)

    Martin Schrepp

    2017-08-01

    Questionnaires are a cheap and highly efficient tool for achieving a quantitative measure of a product's user experience (UX). However, it is not always easy to decide if a questionnaire result really shows whether a product satisfies this quality aspect, so a benchmark is useful: it allows comparing the results of one product to a large set of other products. In this paper we describe a benchmark for the User Experience Questionnaire (UEQ), a widely used evaluation tool for interactive products. We also describe how the benchmark can be applied to the quality assurance process for concrete projects.
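Applying a benchmark of this kind reduces to mapping a measured scale mean onto category boundaries. A minimal sketch; the thresholds below are placeholders, not the published UEQ benchmark intervals, which are scale-specific.

```python
# Hypothetical category boundaries for one UEQ scale (scale means run -3..+3).
# The real benchmark intervals differ per scale; these values are placeholders.
BANDS = [(1.8, "excellent"), (1.5, "good"), (1.0, "above average"),
         (0.7, "below average")]

def classify(scale_mean):
    """Map a measured scale mean onto the first benchmark band it reaches."""
    for threshold, label in BANDS:
        if scale_mean >= threshold:
            return label
    return "bad"

verdict = classify(1.6)  # falls in the "good" band under these placeholder limits
```

In a quality assurance process, each UEQ scale mean is classified this way, and any scale landing below a target band flags the product for redesign.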

  11. Numisheet2005 Benchmark Analysis on Forming of an Automotive Deck Lid Inner Panel: Benchmark 1

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao Jian

    2005-01-01

    Numerical simulation of sheet metal forming processes has been a very challenging topic in industry. Many computer codes and modeling techniques exist today; however, there are many unknowns affecting prediction accuracy. Systematic benchmark tests are needed to accelerate future implementations and to serve as a reference. This report presents an international cooperative benchmark effort for an automotive deck lid inner panel. Predictions from simulations are analyzed and discussed against the corresponding experimental results, and the correlations between the accuracy of each parameter of interest are discussed in this report.

  12. A PWR Thorium Pin Cell Burnup Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.

    2000-05-01

    As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium fueled benchmark comparison was made in this study between the state-of-the-art codes MOCUP (MCNP4B + ORIGEN2) and CASMO-4 for burnup calculations. The MOCUP runs were done individually at MIT and INEEL, using the same model but with some differences in techniques and cross section libraries. Eigenvalues and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2% and the average absolute difference less than 1%. The isotope concentration comparisons are better than a set of MOX fuel benchmarks and comparable to a set of uranium fuel benchmarks reported in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code-to-code differences are analyzed and discussed.
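An eigenvalue comparison of the kind summarized above (maximum difference within 2%, average absolute difference under 1%) is plain percent-difference bookkeeping over the burnup steps. A sketch with invented eigenvalues, not the actual MOCUP/CASMO-4 results:

```python
# Hypothetical eigenvalues at successive burnup steps from two codes.
k_mocup  = [1.150, 1.100, 1.050, 1.000, 0.960]
k_casmo4 = [1.148, 1.104, 1.046, 1.005, 0.955]

# Percent difference at each burnup step, taking CASMO-4 as the reference.
pct_diff = [100.0 * abs(a - b) / b for a, b in zip(k_mocup, k_casmo4)]

max_diff = max(pct_diff)                       # worst-case disagreement
avg_abs_diff = sum(pct_diff) / len(pct_diff)   # average absolute disagreement
```

The same bookkeeping, applied nuclide by nuclide, yields the isotope concentration comparison the abstract refers to.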

  13. Optical Gaps in Pristine and Heavily Doped Silicon Nanocrystals: DFT versus Quantum Monte Carlo Benchmarks.

    Science.gov (United States)

    Derian, R; Tokár, K; Somogyi, B; Gali, Á; Štich, I

    2017-12-12

    We present a time-dependent density functional theory (TDDFT) study of the optical gaps of light-emitting nanomaterials, namely, pristine and heavily B- and P-codoped silicon crystalline nanoparticles. Twenty DFT exchange-correlation functionals sampled from the best currently available inventory such as hybrids and range-separated hybrids are benchmarked against ultra-accurate quantum Monte Carlo results on small model Si nanocrystals. Overall, the range-separated hybrids are found to perform best. The quality of the DFT gaps is correlated with the deviation from Koopmans' theorem as a possible quality guide. In addition to providing a generic test of the ability of TDDFT to describe optical properties of silicon crystalline nanoparticles, the results also open up a route to benchmark-quality DFT studies of nanoparticle sizes approaching those studied experimentally.

  14. MCNP simulation of the TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    Jeraj, R.; Glumac, B.; Maucec, M.

    1996-01-01

The complete 3D MCNP model of the TRIGA Mark II reactor is presented. It enables precise calculation of some quantities of interest in steady-state operation. Calculational results are compared with the experimental results gathered during the reactor reconstruction in 1992. Since the operating conditions were well defined at that time, the experimental results can be used as a benchmark; it may be noted that this is one of very few high-enrichment benchmarks available. In our simulations the experimental conditions were modelled in detail: the fuel elements and control rods were precisely represented, as were the entire core configuration and the vicinity of the core. The ENDF/B-VI and ENDF/B-V libraries were used. Partial results of the benchmark calculations are presented. Excellent agreement in core criticality, excess reactivity and control rod worths can be observed. (author)

  15. Regional Competitive Intelligence: Benchmarking and Policymaking

    OpenAIRE

    Huggins , Robert

    2010-01-01

    Benchmarking exercises have become increasingly popular within the sphere of regional policymaking in recent years. The aim of this paper is to analyse the concept of regional benchmarking and its links with regional policymaking processes. It develops a typology of regional benchmarking exercises and regional benchmarkers, and critically reviews the literature, both academic and policy oriented. It is argued that critics who suggest regional benchmarking is a flawed concept and technique fai...

  16. Benchmark testing calculations for 232Th

    International Nuclear Information System (INIS)

    Liu Ping

    2003-01-01

The cross sections of ²³²Th from CNDC and JENDL-3.3 were processed with the NJOY97.45 code into ACE format for the continuous-energy Monte Carlo code MCNP4C. The k_eff values and central reaction rates based on CENDL-3.0, JENDL-3.3 and ENDF/B-VI.2 were calculated using the MCNP4C code for a benchmark assembly, and comparisons with experimental results are given. (author)

  17. The surgical dilemma of 'functional inoperability' in oral and oropharyngeal cancer: current consensus on operability with regard to functional results

    NARCIS (Netherlands)

    Kreeft, A.; Tan, I. B.; van den Brekel, M. W. M.; Hilgers, F. J.; Balm, A. J. M.

    2009-01-01

    OBJECTIVES: If surgical resection of a tumour results in an unacceptable loss of function, this is defined as 'functional inoperability'. The current survey aims to define the borders of functional inoperability in oral and oropharyngeal carcinoma and evaluate its current use by obtaining opinions

  18. Criticality safety benchmarking of PASC-3 and ECNJEF1.1

    International Nuclear Information System (INIS)

    Li, J.

    1992-09-01

To validate the code system PASC-3 and the multigroup cross-section library ECNJEF1.1 for various applications, many benchmarks are required. This report presents the results of criticality safety benchmarking for five calculational and four experimental benchmarks. These benchmarks are related to transport packages for fissile materials such as spent fuel. The fissile nuclides in these benchmarks are ²³⁵U and ²³⁹Pu. The modules of PASC-3 used for the calculations are BONAMI, NITAWL and KENO.5A. The final results for the experimental benchmarks agree well with the experimental data. For the calculational benchmarks, the results presented here are in reasonable agreement with the results of other investigations. (author). 8 refs.; 20 figs.; 5 tabs

  19. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-11-01

This paper reviews the role of human resource management (HRM), which today plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much-needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  20. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  1. Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data

    Science.gov (United States)

    Murata, Isao; Ohta, Masayuki; Kusaka, Sachie; Sato, Fuminobu; Miyamaru, Hiroyuki

    2017-09-01

There are many benchmark experiments carried out so far with DT neutrons, especially aiming at fusion reactor development. These integral experiments seemed vaguely to validate the nuclear data below 14 MeV; however, no precise studies exist so far. The author's group thus started to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase, to generalize the above discussion, the energy range was expanded to the entire region. In this study, thought experiments with finer energy bins have thus been conducted to discuss how to estimate the performance of benchmark experiments in general. As a result of thought experiments with a point detector, the sensitivity to a discrepancy appearing in the benchmark analysis is due "equally" not only to the contribution conveyed directly to the detector, but also to the indirect contribution of the neutrons (named (A)) that produce the neutrons conveying that contribution, the indirect contribution of the neutrons (B) that produce the neutrons (A), and so on. From this concept, a sensitivity analysis performed in advance would make clear how well, and at which energies, nuclear data could be benchmarked with a given benchmark experiment.

  2. Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data

    Directory of Open Access Journals (Sweden)

    Murata Isao

    2017-01-01

There are many benchmark experiments carried out so far with DT neutrons, especially aiming at fusion reactor development. These integral experiments seemed vaguely to validate the nuclear data below 14 MeV; however, no precise studies exist so far. The author’s group thus started to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase, to generalize the above discussion, the energy range was expanded to the entire region. In this study, thought experiments with finer energy bins have thus been conducted to discuss how to estimate the performance of benchmark experiments in general. As a result of thought experiments with a point detector, the sensitivity to a discrepancy appearing in the benchmark analysis is due “equally” not only to the contribution conveyed directly to the detector, but also to the indirect contribution of the neutrons (named (A)) that produce the neutrons conveying that contribution, the indirect contribution of the neutrons (B) that produce the neutrons (A), and so on. From this concept, a sensitivity analysis performed in advance would make clear how well, and at which energies, nuclear data could be benchmarked with a given benchmark experiment.

  3. Status on benchmark testing of CENDL-3

    CERN Document Server

    Liu Ping

    2002-01-01

CENDL-3, the newest version of the China Evaluated Nuclear Data Library, has recently been finished and distributed for benchmark analysis. The processing was carried out using the NJOY nuclear data processing code system. The calculations and analysis of benchmarks were done with the Monte Carlo code MCNP and the reactor lattice code WIMSD5A, and the calculated results were compared with experimental results and with those based on ENDF/B-VI. In most thermal and fast uranium criticality benchmarks, the k_eff values calculated with CENDL-3 were in good agreement with experimental results. In the plutonium fast cores, the k_eff values were improved significantly with CENDL-3. This is due to the re-evaluation of the fission spectrum and the elastic angular distributions of ²³⁹Pu and ²⁴⁰Pu. CENDL-3 underestimated the k_eff values compared with other evaluated data libraries for most spherical or cylindrical assemblies of plutonium or uranium with beryllium.

  4. WIPP Benchmark calculations with the large strain SPECTROM codes

    International Nuclear Information System (INIS)

    Callahan, G.D.; DeVries, K.L.

    1995-08-01

This report provides calculational results from the updated Lagrangian structural finite-element programs SPECTROM-32 and SPECTROM-333 for the purpose of qualifying these codes to perform analyses of structural situations in the Waste Isolation Pilot Plant (WIPP). Results are presented for the Second WIPP Benchmark (Benchmark II) problems and for a simplified heated room problem used in a parallel design calculation study. The Benchmark II problems consist of an isothermal room problem and a heated room problem. The stratigraphy involves 27 distinct geologic layers, including ten clay seams of which four are modeled as frictionless sliding interfaces. The analyses of the Benchmark II problems consider a 10-year simulation period. The evaluation of nine structural codes used in the Benchmark II problems shows that inclusion of finite-strain effects is not as significant as observed for the simplified heated room problem, and a variety of finite-strain and small-strain formulations produced similar results. The simplified heated room problem provides stratigraphic complexity equivalent to the Benchmark II problems but neglects sliding along the clay seams. It does, however, provide a calculational check case in which the small-strain formulation produced room closures about 20 percent greater than those obtained using finite-strain formulations. A discussion is given of each of the solved problems, and the computational results are compared with available published results. In general, the results of the two SPECTROM large-strain codes compare favorably with results from other codes used to solve the problems.

  5. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-01-01

SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.

  6. Benchmark and Continuous Improvement of Performance

    Directory of Open Access Journals (Sweden)

    Alina Alecse Stanciu

    2017-12-01

The present economic environment challenges us to perform, to think and to re-think our personal strategies in accordance with our entities' strategies, whether we are simply employees or entrepreneurs. It is an environment characterised by Volatility, Uncertainty, Complexity and Ambiguity - a VUCA world in which entities must fight for the position they have gained in the market, disrupt new markets and new economies, and develop their client portfolios, with performance as the final goal, all under the pressure of the driving forces known as the 2030 Megatrends: Globalization 2.0, Environmental Crisis and the Scarcity of Resources, Individualism and Value Pluralism, and Demographic Change. This paper examines whether benchmarking is an opportunity to increase the competitiveness of Romanian SMEs, and the results show that benchmarking is a powerful instrument, combining a reduced negative impact on the environment with a positive impact on the economy and society.

  7. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.

  8. BENCHMARKING - PRACTICAL TOOLS IDENTIFY KEY SUCCESS FACTORS

    Directory of Open Access Journals (Sweden)

    Olga Ju. Malinina

    2016-01-01

The article gives a practical example of the application of benchmarking techniques. The object of study is the fashion store of the company «H & M Hennes & Mauritz», located in the shopping centre «Gallery», Krasnodar. The purpose of this article is to identify the best ways to develop the fashionable brand clothing store Hennes & Mauritz on the basis of benchmarking techniques. On the basis of the conducted market research, a comparative analysis of the data is carried out from different perspectives. The result of the author's study is a generalization of the findings and the development of the key success factors that will allow successful trading activities to be planned in the future, based on the best experience of competitors.

  9. KENO-IV code benchmark calculation, (6)

    International Nuclear Information System (INIS)

    Nomura, Yasushi; Naito, Yoshitaka; Yamakawa, Yasuhiro.

    1980-11-01

A series of benchmark tests has been undertaken at JAERI in order to examine the capability of JAERI's criticality safety evaluation system, consisting of the Monte Carlo calculation code KENO-IV and the newly developed multigroup constants library MGCL. The present report describes the results of a benchmark test using criticality experiments on plutonium fuel in various shapes. In all, 33 cases of experiments have been calculated for Pu(NO₃)₄ aqueous solution, Pu metal or PuO₂-polystyrene compacts in various shapes (sphere, cylinder, rectangular parallelepiped). The effective multiplication factors calculated for the 33 cases range widely, between 0.955 and 1.045, owing to the wide range of system variables. (author)

  10. Summary of ACCSIM and ORBIT Benchmarking Simulations

    CERN Document Server

    AIBA, M

    2009-01-01

    We have performed a benchmarking study of ORBIT and ACCSIM which are accelerator tracking codes having routines to evaluate space charge effects. The study is motivated by the need of predicting/understanding beam behaviour in the CERN Proton Synchrotron Booster (PSB) in which direct space charge is expected to be the dominant performance limitation. Historically at CERN, ACCSIM has been employed for space charge simulation studies. A benchmark study using ORBIT has been started to confirm the results from ACCSIM and to profit from the advantages of ORBIT such as the capability of parallel processing. We observed a fair agreement in emittance evolution in the horizontal plane but not in the vertical one. This may be partly due to the fact that the algorithm to compute the space charge field is different between the two codes.

  11. SUMMARY OF GENERAL WORKING GROUP A+B+D: CODES BENCHMARKING.

    Energy Technology Data Exchange (ETDEWEB)

    WEI, J.; SHAPOSHNIKOVA, E.; ZIMMERMANN, F.; HOFMANN, I.

    2006-05-29

    Computer simulation is an indispensable tool in assisting the design, construction, and operation of accelerators. In particular, computer simulation complements analytical theories and experimental observations in understanding beam dynamics in accelerators. The ultimate function of computer simulation is to study mechanisms that limit the performance of frontier accelerators. There are four goals for the benchmarking of computer simulation codes, namely debugging, validation, comparison and verification: (1) Debugging--codes should calculate what they are supposed to calculate; (2) Validation--results generated by the codes should agree with established analytical results for specific cases; (3) Comparison--results from two sets of codes should agree with each other if the models used are the same; and (4) Verification--results from the codes should agree with experimental measurements. This is the summary of the joint session among working groups A, B, and D of the HI32006 Workshop on computer codes benchmarking.

  12. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Science.gov (United States)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton Krylov method. A physics based preconditioning technique which can be adjusted to target varying physics is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally we show a test of hydrostatic equilibrium, in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to both reproduce behaviour from established and widely-used codes as well as results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.

  13. Boiling water reactor turbine trip (TT) benchmark

    International Nuclear Information System (INIS)

    2005-01-01

In the field of coupled neutronics/thermal-hydraulics computation there is a need to enhance scientific knowledge in order to develop advanced modelling techniques for new nuclear technologies and concepts as well as for current applications. Recently developed 'best-estimate' computer code systems for modelling 3-D coupled neutronics/thermal-hydraulics transients in nuclear cores and for coupling core phenomena and system dynamics (PWR, BWR, VVER) need to be compared against each other and validated against results from experiments. International benchmark studies have been set up for this purpose. The present report is the second in a series of four and summarises the results of the first benchmark exercise, which identifies the key parameters and important issues concerning the thermal-hydraulic system modelling of the transient, with specified core average axial power distribution and fission power time transient history. The transient addressed is a turbine trip in a boiling water reactor, involving pressurization events in which the coupling between core phenomena and system dynamics plays an important role. In addition, the data made available from experiments carried out at the Peach Bottom 2 reactor (a GE-designed BWR/4) make the present benchmark particularly valuable. (author)

  14. Benchmarking Organisational Capability using The 20 Keys

    Directory of Open Access Journals (Sweden)

    Dino Petrarolo

    2012-01-01

Organisations have over the years implemented many improvement initiatives, many of which were applied individually with no real, lasting improvement. Approaches such as quality control, team activities, setup reduction and many more seldom changed the fundamental constitution or capability of an organisation. Leading companies in the world have come to realise that an integrated approach is required which focuses on improving more than one factor at the same time - by recognising the importance of synergy between different improvement efforts and the need for commitment at all levels of the company to achieve total system-wide improvement.

The 20 Keys approach offers a way to look at the strength of organisations and to systemically improve it, one step at a time, by focusing on 20 different but interrelated aspects. One feature of the approach is the benchmarking system, which forms the main focus of this paper. The benchmarking system is introduced as an important part of the 20 Keys philosophy in measuring organisational strength. Benchmarking results from selected South African companies are provided, as well as one company's results achieved through the adoption of the 20 Keys philosophy.

  15. Atomic Energy Research benchmark activity

    International Nuclear Information System (INIS)

    Makai, M.

    1998-01-01

The test problems utilized in the validation and verification of computer programs in Atomic Energy Research are collected into a single set. This is the first step towards issuing a volume in which tests for VVER are collected, along with reference solutions and a number of solutions. The benchmarks do not include the ZR-6 experiments, because these have been published, together with a number of comparisons, in the final reports of TIC. The present collection focuses on operational and mathematical benchmarks, which cover almost the entire range of reactor calculations. (Author)

  16. Implementation and verification of global optimization benchmark problems

    Science.gov (United States)

    Posypkin, Mikhail; Usov, Alexander

    2017-12-01

The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the generation of the value of a function and its gradient at a given point, and of the interval estimates of a function and its gradient on a given box, from a single description. Based on this functionality, we have developed a collection of tests for the automatic verification of the proposed benchmarks. The verification has shown that literature sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.

  17. Implementation and verification of global optimization benchmark problems

    Directory of Open Access Journals (Sweden)

    Posypkin Mikhail

    2017-12-01

The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the generation of the value of a function and its gradient at a given point, and of the interval estimates of a function and its gradient on a given box, from a single description. Based on this functionality, we have developed a collection of tests for the automatic verification of the proposed benchmarks. The verification has shown that literature sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.
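The workflow described in these records (a single expression description yielding both the value of a function and its gradient at a point) can be sketched with forward-mode automatic differentiation. This is a hypothetical stand-in for the authors' C++ library, with names and the example function chosen by us, and it omits the interval-estimate functionality:

```python
# Minimal forward-mode automatic differentiation with dual numbers:
# one description of a function yields both its value and its gradient.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __rsub__(self, o):
        return Dual(o) - self
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__
    def __pow__(self, n):  # integer powers only, for this sketch
        return Dual(self.val ** n, n * self.val ** (n - 1) * self.dot)

def rosenbrock(x, y):
    # Example function chosen by us; not one of the paper's 150 benchmarks.
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

def value_and_grad(f, x0, y0):
    fx = f(Dual(x0, 1.0), Dual(y0, 0.0))  # seed d/dx
    fy = f(Dual(x0, 0.0), Dual(y0, 1.0))  # seed d/dy
    return fx.val, (fx.dot, fy.dot)

val, grad = value_and_grad(rosenbrock, 0.0, 0.0)
print(val, grad)  # 1.0 and (-2.0, 0.0)
```

Replacing the dual numbers with interval arithmetic over the same expression description would give the box estimates the paper verifies.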

  18. Human factors reliability benchmark exercise: a review

    International Nuclear Information System (INIS)

    Humphreys, P.

    1990-01-01

    The Human Factors Reliability Benchmark Exercise has addressed the issues of identification, analysis, representation and quantification of Human Error in order to identify the strengths and weaknesses of available techniques. Using a German PWR nuclear powerplant as the basis for the studies, fifteen teams undertook evaluations of a routine functional Test and Maintenance procedure plus an analysis of human actions during an operational transient. The techniques employed by the teams are discussed and reviewed on a comparative basis. The qualitative assessments performed by each team compare well, but at the quantification stage there is much less agreement. (author)

  19. Dynamic benchmarking methodology for Quality Function Deployment

    NARCIS (Netherlands)

    Raharjo, H.; Brombacher, A.C.; Chai, K.H.; Bergman, B.

    2008-01-01

A competitive advantage, generally, can be gained if a firm produces a product that not only addresses what the customer values most, but also performs better than its competitors in terms of quality, cost, and timeliness. However, these two factors, namely the customer needs and competitors'

  20. Comment on 'Analytical results for a Bessel function times Legendre polynomials class integrals'

    International Nuclear Information System (INIS)

    Cregg, P J; Svedlindh, P

    2007-01-01

    A result is obtained, stemming from Gegenbauer, where the products of certain Bessel functions and exponentials are expressed in terms of an infinite series of spherical Bessel functions and products of associated Legendre functions. Closed form solutions for integrals involving Bessel functions times associated Legendre functions times exponentials, recently elucidated by Neves et al (J. Phys. A: Math. Gen. 39 L293), are then shown to result directly from the orthogonality properties of the associated Legendre functions. This result offers greater flexibility in the treatment of classical Heisenberg chains and may do so in other problems such as occur in electromagnetic diffraction theory. (comment)
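The family of results referred to can be illustrated by its simplest member, the classical plane-wave (Rayleigh) expansion, a special case of Gegenbauer's formula; the general case treated in the comment additionally carries products of associated Legendre functions:

```latex
% Plane-wave expansion: an exponential expressed as an infinite series of
% spherical Bessel functions j_l and Legendre polynomials P_l.
e^{\,i z \cos\theta} \;=\; \sum_{l=0}^{\infty} (2l+1)\, i^{l}\, j_{l}(z)\, P_{l}(\cos\theta)
```

Multiplying both sides by an associated Legendre function and integrating then yields closed forms of the kind discussed, by the orthogonality noted in the abstract.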

  1. How to achieve and prove performance improvement - 15 years of experience in German wastewater benchmarking.

    Science.gov (United States)

    Bertzbach, F; Franz, T; Möller, K

    2012-01-01

This paper shows the performance improvements that have been achieved in benchmarking projects in the German wastewater industry over the last 15 years. A large number of changes in operational practice, and the annual savings achieved, can be shown, induced in particular by benchmarking at process level. Investigating this question produces some general findings on how to include performance improvement in a benchmarking project and how to communicate its results. We therefore elaborate on the concept of benchmarking at both utility and process level, a distinction that remains necessary for integrating performance improvement into our benchmarking approach. To achieve performance improvement via benchmarking, it should be made quite clear that the outcome depends, on the one hand, on a well-conducted benchmarking programme and, on the other, on the individual situation within each participating utility.

  2. A Field-Based Aquatic Life Benchmark for Conductivity in ...

    Science.gov (United States)

    This report adapts the standard U.S. EPA methodology for deriving ambient water quality criteria. Rather than use toxicity test results, the adaptation uses field data to determine the loss of 5% of genera from streams. The method is applied to derive effect benchmarks for dissolved salts as measured by conductivity in Central Appalachian streams using data from West Virginia and Kentucky. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.
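In outline, the field-based derivation described here ranks genus-level extirpation concentrations and reads off the conductivity at which 5% of genera would be lost. A toy sketch with invented numbers (not data or the exact procedure from the report):

```python
# Toy sketch of a field-based "loss of 5% of genera" benchmark (HC05):
# given genus-level extirpation conductivities (XC95, in uS/cm), the
# benchmark is the conductivity below which only 5% of genera are lost.
# All values are invented for illustration; they are not from the report.
xc95 = [180, 220, 250, 300, 340, 420, 500, 640, 800, 950,
        1100, 1300, 1500, 1800, 2100, 2500, 2900, 3400, 4000, 4700]

xc95.sort()
# Simple empirical 5th percentile over 20 genera: the 1st ranked value.
k = max(0, int(0.05 * len(xc95)) - 1)
hc05 = xc95[k]
print(hc05)
```

The report's actual method involves weighted observation data and a specific percentile estimator; this sketch only shows the rank-and-read-off idea.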

  3. DNA breathing dynamics: analytic results for distribution functions of relevant Brownian functionals.

    Science.gov (United States)

    Bandyopadhyay, Malay; Gupta, Shamik; Segal, Dvira

    2011-03-01

We investigate DNA breathing dynamics by suggesting and examining several Brownian functionals associated with bubble lifetime and reactivity. Bubble dynamics is described as an overdamped random walk in the number of broken base pairs. The walk takes place on the Poland-Scheraga free-energy landscape. We suggest several probability distribution functions that characterize the breathing process, and adopt the recently studied backward Fokker-Planck method and the path decomposition method as elegant and flexible tools for deriving these distributions. In particular, for a bubble of an initial size x₀, we derive analytical expressions for (i) the distribution P(t_f|x₀) of the first-passage time t_f, characterizing the bubble lifetime, (ii) the distribution P(A|x₀) of the area A until the first-passage time, providing information about the effective reactivity of the bubble to processes within the DNA, (iii) the distribution P(M) of the maximum bubble size M attained before the first-passage time, and (iv) the joint probability distribution P(M,t_m) of the maximum bubble size M and the time t_m of its occurrence before the first-passage time. These distributions are analyzed in the limits of small and large bubble sizes. We supplement our analytical predictions with direct numerical simulations of the related Langevin equation, and obtain very good agreement in the appropriate limits. The nontrivial scaling behavior of the various quantities analyzed here can, in principle, be explored experimentally.
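The overdamped-walk picture and the functionals t_f and A can be sampled directly from a Langevin equation. A minimal sketch on a schematic linear free-energy slope (our simplification for illustration, not the Poland-Scheraga landscape used in the paper):

```python
import random

# Overdamped Langevin sketch of bubble dynamics: the bubble size x(t)
# performs a biased random walk on a schematic linear free-energy slope
# F(x) = c*x (a stand-in for the Poland-Scheraga landscape), with an
# absorbing boundary at x = 0 (bubble closure). For each trajectory we
# record the first-passage time t_f and the area A = integral of x dt.

def first_passage(x0, c, temp, dt, rng):
    x, t, area = x0, 0.0, 0.0
    kick = (2.0 * temp * dt) ** 0.5  # Euler-Maruyama noise amplitude
    while x > 0.0:
        area += x * dt
        x += -c * dt + kick * rng.gauss(0.0, 1.0)
        t += dt
    return t, area

samples = [first_passage(x0=5.0, c=0.5, temp=1.0, dt=1e-2,
                         rng=random.Random(seed)) for seed in range(400)]
mean_tf = sum(t for t, _ in samples) / len(samples)
mean_area = sum(a for _, a in samples) / len(samples)

# For this linear landscape the exact continuum-limit means are
# x0/c = 10 for t_f and x0**2/(2*c) + temp*x0/c**2 = 45 for A;
# the sampled averages should land close to these values.
print(mean_tf, mean_area)
```

Histogramming the sampled t_f and A values approximates the distributions P(t_f|x₀) and P(A|x₀) that the paper derives analytically.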

  4. Model based energy benchmarking for glass furnace

    International Nuclear Information System (INIS)

    Sardeshpande, Vishal; Gaitonde, U.N.; Banerjee, Rangan

    2007-01-01

    Energy benchmarking of processes is important for setting energy efficiency targets and planning energy management strategies. Most approaches used for energy benchmarking are based on statistical methods, comparing with a sample of existing plants. This paper presents a model-based approach for benchmarking of energy intensive industrial processes and illustrates this approach for industrial glass furnaces. A simulation model for a glass furnace is developed using mass and energy balances, heat loss equations for the different zones, and empirical equations based on operating practices. The model is checked with field data from end-fired industrial glass furnaces in India. The simulation model enables calculation of the energy performance of a given furnace design. The model results show the potential for improvement and the impact of different operating and design parameters on specific energy consumption. A case study for a 100 TPD end-fired furnace is presented. An achievable minimum energy consumption of about 3830 kJ/kg is estimated for this furnace. The useful heat carried by glass is about 53% of the heat supplied by the fuel. Actual furnaces operating at these production scales have a potential for reduction in energy consumption of about 20-25%.
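
    The headline figures in the abstract are simple energy-balance arithmetic. A hedged sketch, where the actual-plant specific energy consumption (SEC) is an assumed illustrative number, not a value from the paper:

```python
# Figures quoted in the abstract for the model benchmark:
sec_min = 3830.0          # kJ per kg of glass, model-estimated minimum SEC
useful_fraction = 0.53    # share of fuel heat carried away by the glass

heat_in_glass = sec_min * useful_fraction   # kJ/kg delivered to the glass

# Assumed SEC of an actual plant (illustrative, not from the study):
sec_actual = 5000.0

savings = 1.0 - sec_min / sec_actual        # fraction of energy saveable
print(f"heat in glass: {heat_in_glass:.0f} kJ/kg")
print(f"reduction potential vs. this plant: {savings:.0%}")
```

    With the assumed plant SEC the reduction potential comes out in the 20-25% band reported in the abstract.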

  5. The Benchmarking of Integrated Business Structures

    Directory of Open Access Journals (Sweden)

    Nifatova Olena M.

    2017-12-01

    Full Text Available The aim of the article is to study the role of benchmarking in the process of integration of business structures in the aspect of knowledge sharing. The results of studying the essential content of the concept “integrated business structure” and its semantic analysis made it possible to form our own understanding of this category, with an emphasis on the need to consider it in the plane of three projections: legal, economic and organizational. The economic projection of the essential content of integration associations of business units is supported by the organizational projection, which is expressed through such essential aspects as the existence of a single center that makes key decisions; understanding integration as knowledge sharing; and using benchmarking as exchange of experience on key business processes. Understanding the process of integration of business units in the aspect of knowledge sharing involves obtaining certain information benefits. Using benchmarking as exchange of experience on key business processes in integrated business structures will help improve the basic production processes and increase the efficiency of activity of both the individual business unit and the IBS as a whole.

  6. Boiling water reactor turbine trip (TT) benchmark

    International Nuclear Information System (INIS)

    2001-06-01

    In the field of coupled neutronics/thermal-hydraulics computation there is a need to enhance scientific knowledge in order to develop advanced modelling techniques for new nuclear technologies and concepts, as well as for current nuclear applications. Recently developed 'best-estimate' computer code systems for modelling 3-D coupled neutronics/thermal-hydraulics transients in nuclear cores and for the coupling of core phenomena and system dynamics (PWR, BWR, VVER) need to be compared against each other and validated against results from experiments. International benchmark studies have been set up for this purpose. The present volume describes the specification of such a benchmark. The transient addressed is a turbine trip (TT) in a BWR involving pressurization events in which the coupling between core phenomena and system dynamics plays an important role. In addition, the data made available from experiments carried out at the plant make the present benchmark very valuable. The data used are from events at the Peach Bottom 2 reactor (a GE-designed BWR/4). (authors)

  7. Benchmarking for On-Scalp MEG Sensors.

    Science.gov (United States)

    Xie, Minshu; Schneiderman, Justin F; Chukharkin, Maxim L; Kalabukhov, Alexei; Riaz, Bushra; Lundqvist, Daniel; Whitmarsh, Stephen; Hamalainen, Matti; Jousmaki, Veikko; Oostenveld, Robert; Winkler, Dag

    2017-06-01

    We present a benchmarking protocol for quantitatively comparing emerging on-scalp magnetoencephalography (MEG) sensor technologies to their counterparts in state-of-the-art MEG systems. As a means of validation, we compare a high-critical-temperature superconducting quantum interference device (high-Tc SQUID) with the low-Tc SQUIDs of an Elekta Neuromag TRIUX system in MEG recordings of auditory and somatosensory evoked fields (SEFs) on one human subject. We measure the expected signal gain for the auditory-evoked fields (deeper sources) and notice some unfamiliar features in the on-scalp sensor-based recordings of SEFs (shallower sources). The experimental results serve as a proof of principle for the benchmarking protocol. This approach is straightforward, general to various on-scalp MEG sensors, and convenient to use on human subjects. The unexpected features in the SEFs suggest on-scalp MEG sensors may reveal information about neuromagnetic sources that is otherwise difficult to extract from state-of-the-art MEG recordings. As the first systematically established on-scalp MEG benchmarking protocol, magnetic sensor developers can employ this method to prove the utility of their technology in MEG recordings. Further exploration of the SEFs with on-scalp MEG sensors may reveal unique information about their sources.

  8. Perspective: Recommendations for benchmarking pre-clinical studies of nanomedicines

    Science.gov (United States)

    Dawidczyk, Charlene M.; Russell, Luisa M.; Searson, Peter C.

    2015-01-01

    Nanoparticle-based delivery systems provide new opportunities to overcome the limitations associated with traditional small molecule drug therapy for cancer, and to achieve both therapeutic and diagnostic functions in the same platform. Pre-clinical trials are generally designed to assess therapeutic potential and not to optimize the design of the delivery platform. Consequently, progress in developing design rules for cancer nanomedicines has been slow, hindering progress in the field. Despite the large number of pre-clinical trials, several factors restrict comparison and benchmarking of different platforms, including variability in experimental design, reporting of results, and the lack of quantitative data. To solve this problem, we review the variables involved in the design of pre-clinical trials and propose a protocol for benchmarking that we recommend be included in in vivo pre-clinical studies of drug delivery platforms for cancer therapy. This strategy will contribute to building the scientific knowledge base that enables development of design rules and accelerates the translation of new technologies. PMID:26249177

  9. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.; Tyhurst, Janis

    2015-01-01

    This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes comparison of these websites against a list of criterion and presents a list of services that are most commonly deployed by the selected websites. In addition to that, the investigators proposed a list of services that could be provided via the KAUST library website.

  10. Prismatic Core Coupled Transient Benchmark

    International Nuclear Information System (INIS)

    Ortensi, J.; Pope, M.A.; Strydom, G.; Sen, R.S.; DeHart, M.D.; Gougar, H.D.; Ellis, C.; Baxter, A.; Seker, V.; Downar, T.J.; Vierow, K.; Ivanov, K.

    2011-01-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  11. Development of common user data model for APOLLO3 and MARBLE and application to benchmark problems

    International Nuclear Information System (INIS)

    Yokoyama, Kenji

    2009-07-01

    A Common User Data Model, CUDM, has been developed for the purpose of benchmark calculations between the APOLLO3 and MARBLE code systems. The current version of CUDM was designed for core calculation benchmark problems with 3-dimensional Cartesian (3-D XYZ) geometry. CUDM is able to manage all input/output data such as 3-D XYZ geometry, effective macroscopic cross section, effective multiplication factor and neutron flux. In addition, visualization tools for geometry and neutron flux were included. CUDM was designed by the object-oriented technique and implemented using the Python programming language. Based on CUDM, a prototype system for benchmark calculations, CUDM-benchmark, was also developed. The CUDM-benchmark supports input/output data conversion for the IDT solver in APOLLO3, and the TRITAC and SNT solvers in MARBLE. In order to evaluate the pertinence of CUDM, the CUDM-benchmark was applied to benchmark problems proposed by T. Takeda, G. Chiba and I. Zmijarevic. It was verified that the CUDM-benchmark successfully reproduced the results calculated with reference input data files, and provided consistent results among all the solvers by using one common input data set defined by CUDM. In addition, a detailed benchmark calculation for the Chiba benchmark was performed using the CUDM-benchmark. The Chiba benchmark is a neutron transport benchmark problem for a fast criticality assembly without homogenization. This benchmark problem consists of 4 core configurations which have different sodium void regions, and each core configuration is defined by more than 5,000 fuel/material cells. In this application, it was found that the results by the IDT and SNT solvers agreed well with the reference results by a Monte Carlo code. In addition, model effects such as the quadrature set effect, S_n order effect and mesh size effect were systematically evaluated and summarized in this report. (author)

  12. Visualization of the air flow behind the automotive benchmark vent

    Science.gov (United States)

    Pech, Ondrej; Jedelsky, Jan; Caletka, Petr; Jicha, Miroslav

    2015-05-01

    Passenger comfort in cars depends on the appropriate function of the cabin HVAC system. Great attention is therefore paid to the effective function of automotive vents and the proper formation of the flow behind the ventilation outlet. The article deals with the visualization of air flow from the automotive benchmark vent. The visualization was made for two different shapes of the inlet channel connected to the benchmark vent. Smoke visualization with a laser knife was used. The influence of the inlet channel shape on the airflow direction, its spreading and the position of the airflow axis was investigated.

  13. Visualization of the air flow behind the automotive benchmark vent

    Directory of Open Access Journals (Sweden)

    Pech Ondrej

    2015-01-01

    Full Text Available Passenger comfort in cars depends on the appropriate function of the cabin HVAC system. Great attention is therefore paid to the effective function of automotive vents and the proper formation of the flow behind the ventilation outlet. The article deals with the visualization of air flow from the automotive benchmark vent. The visualization was made for two different shapes of the inlet channel connected to the benchmark vent. Smoke visualization with a laser knife was used. The influence of the inlet channel shape on the airflow direction, its spreading and the position of the airflow axis was investigated.

  14. Neonatal thyroid screening results are related to gestational maternal thyroid function

    NARCIS (Netherlands)

    Kuppens, S.M.I.; Kooistra, L.; Wijnen, H.A.; Vader, H.L.; Hasaart, T.H.M.; Oei, S.G.; Vulsma, T.; Pop, V.J.

    2011-01-01

    Objective To study the relationship between maternal thyroid function at each pregnancy trimester and neonatal screening results. Background Overt maternal thyroid dysfunction during gestation is associated with poor neonatal thyroid function. However, research on the relationship between suboptimal

  15. Analysis of an OECD/NEA high-temperature reactor benchmark

    International Nuclear Information System (INIS)

    Hosking, J. G.; Newton, T. D.; Koeberl, O.; Morris, P.; Goluoglu, S.; Tombakoglu, T.; Colak, U.; Sartori, E.

    2006-01-01

    This paper describes analyses of the OECD/NEA HTR benchmark organized by the 'Working Party on the Scientific Issues of Reactor Systems (WPRS)', formerly the 'Working Party on the Physics of Plutonium Fuels and Innovative Fuel Cycles'. The benchmark was specifically designed to provide inter-comparisons for plutonium and thorium fuels when used in HTR systems. Calculations considering uranium fuel have also been included in the benchmark, in order to identify any increased uncertainties when using plutonium or thorium fuels. The benchmark consists of five phases, which include cell and whole-core calculations. Analysis of the benchmark has been performed by a number of international participants, who have used a range of deterministic and Monte Carlo code schemes. For each of the benchmark phases, neutronics parameters have been evaluated. Comparisons are made between the results of the benchmark participants, as well as comparisons between the predictions of the deterministic calculations and those from detailed Monte Carlo calculations. (authors)

  16. The International Criticality Safety Benchmark Evaluation Project

    International Nuclear Information System (INIS)

    Briggs, B. J.; Dean, V. F.; Pesic, M. P.

    2001-01-01

    In order to properly manage the risk of a nuclear criticality accident, it is important to establish the conditions for which such an accident becomes possible for any activity involving fissile material. Only when this information is known is it possible to establish the likelihood of actually achieving such conditions. It is therefore important that criticality safety analysts have confidence in the accuracy of their calculations. Confidence in analytical results can only be gained through comparison of those results with experimental data. The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the US Department of Energy. The project was managed through the Idaho National Engineering and Environmental Laboratory (INEEL), but involved nationally known criticality safety experts from Los Alamos National Laboratory, Lawrence Livermore National Laboratory, Savannah River Technology Center, Oak Ridge National Laboratory and the Y-12 Plant, Hanford, Argonne National Laboratory, and the Rocky Flats Plant. An International Criticality Safety Data Exchange component was added to the project during 1994, and the project became what is currently known as the International Criticality Safety Benchmark Evaluation Project (ICSBEP). Representatives from the United Kingdom, France, Japan, the Russian Federation, Hungary, Kazakhstan, Korea, Slovenia, Yugoslavia, Spain, and Israel are now participating in the project. In December of 1994, the ICSBEP became an official activity of the Organization for Economic Cooperation and Development - Nuclear Energy Agency's (OECD-NEA) Nuclear Science Committee. The United States currently remains the lead country, providing most of the administrative support. The purpose of the ICSBEP is to: (1) identify and evaluate a comprehensive set of critical benchmark data; (2) verify the data, to the extent possible, by reviewing original and subsequently revised documentation, and by talking with the

  17. MCNP: Photon benchmark problems

    International Nuclear Information System (INIS)

    Whalen, D.J.; Hollowell, D.E.; Hendricks, J.S.

    1991-09-01

    The recent widespread, markedly increased use of radiation transport codes has produced greater user and institutional demand for assurance that such codes give correct results. Responding to these pressing requirements for code validation, the general purpose Monte Carlo transport code MCNP has been tested on six different photon problem families. MCNP was used to simulate these six sets numerically. Results for each were compared to the set's analytical or experimental data. MCNP successfully predicted the analytical or experimental results of all six families within the statistical uncertainty inherent in the Monte Carlo method. From this we conclude that MCNP can accurately model a broad spectrum of photon transport problems. 8 refs., 30 figs., 5 tabs
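
    The validation logic (a Monte Carlo estimate must agree with the analytic answer within its statistical uncertainty) can be illustrated on the simplest kind of photon problem: uncollided transmission through a slab. This toy sketch is not one of the six MCNP problem families; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Uncollided photon transmission through a slab: Monte Carlo estimate
# versus the analytic answer exp(-mu * d).
mu, d, n = 0.2, 5.0, 200_000        # attenuation coeff (1/cm), thickness (cm)

path = rng.exponential(1.0 / mu, n)  # sampled free-flight distances
mc = np.mean(path > d)               # fraction crossing without a collision
err = np.sqrt(mc * (1 - mc) / n)     # one-sigma statistical uncertainty

analytic = np.exp(-mu * d)
print(f"MC {mc:.4f} +/- {err:.4f}  vs analytic {analytic:.4f}")
```

    Agreement within a few standard deviations is the same acceptance criterion applied, problem by problem, in the benchmark report.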

  18. Adaptive unified continuum FEM modeling of a 3D FSI benchmark problem.

    Science.gov (United States)

    Jansson, Johan; Degirmenci, Niyazi Cem; Hoffman, Johan

    2017-09-01

    In this paper, we address a 3D fluid-structure interaction benchmark problem that represents important characteristics of biomedical modeling. We present a goal-oriented adaptive finite element methodology for incompressible fluid-structure interaction based on a streamline diffusion-type stabilization of the balance equations for mass and momentum for the entire continuum in the domain, which is implemented in the Unicorn/FEniCS software framework. A phase marker function and its corresponding transport equation are introduced to select the constitutive law, where the mesh tracks the discontinuous fluid-structure interface. This results in a unified simulation method for fluids and structures. We present detailed results for the benchmark problem compared with experiments, together with a mesh convergence study. Copyright © 2016 John Wiley & Sons, Ltd.

  19. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

    Full Text Available Tourism development has an irreplaceable role in the regional policy of almost all countries. This is due to its undeniable benefits for the local population with regard to the economic, social and environmental sphere. Tourist destinations compete for visitors in the tourism market and subsequently get into a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and final strategies is a key factor of competitiveness. Even though the tourism sector is not a typical field where benchmarking methods are widely used, such approaches can be successfully applied. The paper focuses on a key phase of the benchmarking process, the search for suitable referencing partners. The partners are selected to meet general requirements that ensure the quality of strategies. Following from this, some specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies of regions in the Czech Republic, Slovakia and Great Britain. In this way it validates the selected criteria in the frame of the international environment. Hence, it makes it possible to find strengths and weaknesses of selected strategies and at the same time facilitates the discovery of suitable benchmarking partners.

  20. BONFIRE: benchmarking computers and computer networks

    OpenAIRE

    Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker

    2011-01-01

    The benchmarking concept is not new in the field of computing or computer networking. With “benchmarking tools”, one usually refers to a program or set of programs, used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Servi...

  1. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used in services across the National Health Service (NHS) using various benchmarking programs. Clinical photography services do not have a program in place and services have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  2. MoMaS reactive transport benchmark using PFLOTRAN

    Science.gov (United States)

    Park, H.

    2017-12-01

    The MoMaS benchmark was developed to enhance numerical simulation capability for reactive transport modeling in porous media. The benchmark was published in late September of 2009; its tests are not taken from a real chemical system, but are realistic and numerically challenging. PFLOTRAN is a state-of-the-art massively parallel subsurface flow and reactive transport code that is being used in multiple nuclear waste repository projects at Sandia National Laboratories, including the Waste Isolation Pilot Plant and Used Fuel Disposition. The MoMaS benchmark has three independent tests with easy, medium, and hard chemical complexity. This paper demonstrates how PFLOTRAN is applied to this benchmark exercise and shows results of the easy benchmark test case, which includes mixing of aqueous components and surface complexation. Surface complexations consist of monodentate and bidentate reactions, which introduce difficulty in defining the selectivity coefficient if the reaction applies to a bulk reference volume. The selectivity coefficient becomes porosity dependent for the bidentate reaction in heterogeneous porous media. The benchmark is solved by PFLOTRAN with minimal modification to address the issue, and unit conversions were made to suit PFLOTRAN.

  3. Calculations of different transmutation concepts. An international benchmark exercise

    International Nuclear Information System (INIS)

    2000-01-01

    In April 1996, the NEA Nuclear Science Committee (NSC) Expert Group on Physics Aspects of Different Transmutation Concepts launched a benchmark exercise to compare different transmutation concepts based on pressurised water reactors (PWRs), fast reactors, and an accelerator-driven system. The aim was to investigate the physics of complex fuel cycles involving reprocessing of spent PWR reactor fuel and its subsequent reuse in different reactor types. The objective was also to compare the calculated activities for individual isotopes as a function of time for different plutonium and minor actinide transmutation scenarios in different reactor systems. This report gives the analysis of results of the 15 solutions provided by the participants: six for the PWRs, six for the fast reactor and three for the accelerator case. Various computer codes and nuclear data libraries were applied. (author)

  4. How Benchmarking and Higher Education Came Together

    Science.gov (United States)

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  5. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking...

  6. Ab initio and DFT benchmarking of tungsten nanoclusters and tungsten hydrides

    International Nuclear Information System (INIS)

    Skoviera, J.; Novotny, M.; Cernusak, I.; Oda, T.; Louis, F.

    2015-01-01

    We present several benchmark calculations comparing wave-function based methods and density functional theory for model systems containing tungsten. They include the W₄ cluster as well as the W₂, WH and WH₂ molecules. (authors)

  7. Some exact results for the two-point function of an integrable quantum field theory

    International Nuclear Information System (INIS)

    Creamer, D.B.; Thacker, H.B.; Wilkinson, D.

    1981-02-01

    The two-point correlation function for the quantum nonlinear Schroedinger (delta-function gas) model is studied. An infinite series representation for this function is derived using the quantum inverse scattering formalism. For the case of zero temperature, the infinite coupling (c → infinity) result of Jimbo, Miwa, Mori and Sato is extended to give an exact expression for the order 1/c correction to the two-point function in terms of a Painlevé transcendent of the fifth kind

  8. Defect assessment benchmark studies

    International Nuclear Information System (INIS)

    Hooton, D.G.; Sharples, J.K.

    1995-01-01

    Assessments of the resistance to fast fracture of the beltline region of a PWR vessel subjected to a pressurized thermal shock (PTS) transient have been carried out using the procedures of French (RCC-M) and German (KTA) design codes, and comparisons made with results obtained using the R6 procedure as applied for Sizewell B. The example chosen for these comparisons is of a generic nature, and is taken as the PTS identified by the Hirsch addendum to the Second Marshall report (1987) as the most severe transient with regard to vessel integrity. All assessment methods show the beltline region of the vessel to be safe from the risk of fast fracture, but by varying factors of safety. These factors are discussed in terms of margins between limiting and reference defect sizes, fracture toughness and stress intensity factor, and material temperature and temperature at the onset of upper-shelf materials behaviour. Based on these studies, consideration is given to issues involved in the harmonization of those sections of the design codes which are concerned with methods for the demonstration of the avoidance of the risk of failure by fast fracture. (author)
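
    As a flavour of how such a fast-fracture assessment proceeds, the sketch below evaluates a single assessment point against the R6 Option 1 failure assessment curve; the load, toughness and stress numbers are invented for illustration and are not the vessel data of the study:

```python
import math

def fad_option1(Lr):
    """R6 Option 1 failure assessment curve Kr(Lr)."""
    return (1 - 0.14 * Lr**2) * (0.3 + 0.7 * math.exp(-0.65 * Lr**6))

# Illustrative inputs (not from the PTS study):
K_I, K_mat = 45.0, 110.0            # MPa*sqrt(m): applied SIF, toughness
sigma_ref, sigma_y = 120.0, 400.0   # MPa: reference stress, yield stress

Kr = K_I / K_mat                    # fracture ratio
Lr = sigma_ref / sigma_y            # plastic collapse ratio
safe = Kr < fad_option1(Lr)         # point below the curve -> no fast fracture
print(f"Kr={Kr:.2f}, Lr={Lr:.2f}, inside FAD: {safe}")
```

    The differing safety factors quoted in the study correspond to how far such an assessment point sits inside the failure locus under each code's rules.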

  9. Closed-loop neuromorphic benchmarks

    CSIR Research Space (South Africa)

    Stewart, TC

    2015-11-01

    Full Text Available Benchmarks   Terrence C. Stewart 1* , Travis DeWolf 1 , Ashley Kleinhans 2 , Chris Eliasmith 1   1 University of Waterloo, Canada, 2 Council for Scientific and Industrial Research, South Africa   Submitted to Journal:   Frontiers in Neuroscience   Specialty... Eliasmith 1 1Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada 2Mobile Intelligent Autonomous Systems group, Council for Scientific and Industrial Research, Pretoria, South Africa Correspondence*: Terrence C. Stewart Centre...

  10. Investible benchmarks & hedge fund liquidity

    OpenAIRE

    Freed, Marc S; McMillan, Ben

    2011-01-01

    A lack of commonly accepted benchmarks for hedge fund performance has permitted hedge fund managers to attribute to skill returns that may actually accrue from market risk factors and illiquidity. Recent innovations in hedge fund replication permit us to estimate the extent of this misattribution. Using an option-based model, we find evidence that the value of liquidity options that investors implicitly grant managers when they invest may account for part or even all of hedge fund returns. C...

  11. Benchmarking organic mixed conductors for transistors

    KAUST Repository

    Inal, Sahika; Malliaras, George G.; Rivnay, Jonathan

    2017-01-01

    Organic mixed conductors have garnered significant attention in applications from bioelectronics to energy storage/generation. Their implementation in organic transistors has led to enhanced biosensing, neuromorphic function, and specialized circuits. While a narrow class of conducting polymers continues to excel in these new applications, materials design efforts have accelerated as researchers target new functionality, processability, and improved performance/stability. Materials for organic electrochemical transistors (OECTs) require both efficient electronic transport and facile ion injection in order to sustain high capacity. In this work, we show that the product of the electronic mobility and volumetric charge storage capacity (µC*) is the materials/system figure of merit; we use this framework to benchmark and compare the steady-state OECT performance of ten previously reported materials. This product can be independently verified and decoupled to guide materials design and processing. OECTs can therefore be used as a tool for understanding and designing new organic mixed conductors.
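
    In the standard Bernards-Malliaras OECT model the figure of merit enters the saturation transconductance as g_m = (W·d/L)·µ·C*·(V_th − V_g), which is why µC* benchmarks materials independently of device geometry. A sketch with invented but plausible device numbers:

```python
# Illustrative OECT numbers (made up, roughly PEDOT:PSS-like):
W, L, d = 100e-6, 10e-6, 100e-9   # channel width, length, thickness (m)
mu = 1e-4                         # electronic mobility, m^2/(V s) (~1 cm^2/Vs)
C_star = 40e6                     # volumetric capacitance, F/m^3 (~40 F/cm^3)
Vth, Vg = 0.6, 0.0                # threshold and gate voltage (V)

uC = mu * C_star                  # the materials figure of merit, F/(m V s)

# Saturation transconductance in the Bernards-Malliaras model:
gm = (W * d / L) * uC * (Vth - Vg)
print(f"uC* = {uC:.0f} F/(m V s), g_m = {gm * 1e3:.2f} mS")
```

    Doubling µC* doubles g_m for a fixed geometry and bias, so the product alone ranks materials for steady-state OECT performance.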

  12. Benchmarking organic mixed conductors for transistors

    KAUST Repository

    Inal, Sahika

    2017-11-20

    Organic mixed conductors have garnered significant attention in applications from bioelectronics to energy storage/generation. Their implementation in organic transistors has led to enhanced biosensing, neuromorphic function, and specialized circuits. While a narrow class of conducting polymers continues to excel in these new applications, materials design efforts have accelerated as researchers target new functionality, processability, and improved performance/stability. Materials for organic electrochemical transistors (OECTs) require both efficient electronic transport and facile ion injection in order to sustain high capacity. In this work, we show that the product of the electronic mobility and volumetric charge storage capacity (µC*) is the materials/system figure of merit; we use this framework to benchmark and compare the steady-state OECT performance of ten previously reported materials. This product can be independently verified and decoupled to guide materials design and processing. OECTs can therefore be used as a tool for understanding and designing new organic mixed conductors.

  13. Space network scheduling benchmark: A proof-of-concept process for technology transfer

    Science.gov (United States)

    Moe, Karen; Happell, Nadine; Hayden, B. J.; Barclay, Cathy

    1993-01-01

    This paper describes a detailed proof-of-concept activity to evaluate flexible scheduling technology as implemented in the Request Oriented Scheduling Engine (ROSE) and applied to Space Network (SN) scheduling. The criteria developed for an operational evaluation of a reusable scheduling system is addressed including a methodology to prove that the proposed system performs at least as well as the current system in function and performance. The improvement of the new technology must be demonstrated and evaluated against the cost of making changes. Finally, there is a need to show significant improvement in SN operational procedures. Successful completion of a proof-of-concept would eventually lead to an operational concept and implementation transition plan, which is outside the scope of this paper. However, a high-fidelity benchmark using actual SN scheduling requests has been designed to test the ROSE scheduling tool. The benchmark evaluation methodology, scheduling data, and preliminary results are described.

  14. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties — i.e. visual composition — while smoothly moving through the environment and avoiding obstacles. A large number of different....... For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of scenarios...

  15. Benchmarks for multicomponent diffusion and electrochemical migration

    DEFF Research Database (Denmark)

    Rasouli, Pejman; Steefel, Carl I.; Mayer, K. Ulrich

    2015-01-01

    In multicomponent electrolyte solutions, the tendency of ions to diffuse at different rates results in a charge imbalance that is counteracted by the electrostatic coupling between charged species leading to a process called “electrochemical migration” or “electromigration.” Although not commonly...... not been published to date. This contribution provides a set of three benchmark problems that demonstrate the effect of electric coupling during multicomponent diffusion and electrochemical migration and at the same time facilitate the intercomparison of solutions from existing reactive transport codes...

  16. Benchmark calculation programme concerning typical LMFBR structures

    International Nuclear Information System (INIS)

    Donea, J.; Ferrari, G.; Grossetie, J.C.; Terzaghi, A.

    1982-01-01

    This programme, which is part of a comprehensive activity aimed at resolving difficulties encountered in using design procedures based on ASME Code Case N-47, should allow to get confidence in computer codes which are supposed to provide a realistic prediction of the LMFBR component behaviour. The calculations started on static analysis of typical structures made of non linear materials stressed by cyclic loads. The fluid structure interaction analysis is also being considered. Reasons and details of the different benchmark calculations are described, results obtained are commented and future computational exercise indicated

  17. 2D 1/2 graphical benchmark

    International Nuclear Information System (INIS)

    Brochard, P.; Colin De Verdiere, G.; Nomine, J.P.; Perros, J.P.

    1993-01-01

    Within the framework of the development of a new version of the Psyche software, the author reports a benchmark study of different graphical libraries and systems and of the Psyche application. The author outlines the current context of graphical tool development, which still lacks standardisation; this makes the comparison somewhat limited and ultimately tied to the envisaged applications. The author presents the various systems and libraries, the test principles, and the characteristics of the machines. Results and interpretations are then presented with reference to the problems encountered

  18. Benchmark comparisons of evaluated nuclear data files

    International Nuclear Information System (INIS)

    Resler, D.A.; Howerton, R.J.; White, R.M.

    1994-05-01

    With the availability and maturity of several evaluated nuclear data files, it is timely to compare the results of integral tests with calculations using these different files. We discuss here our progress in making integral benchmark tests of the following nuclear data files: ENDL-94, ENDF/B-V and -VI, JENDL-3, JEF-2, and BROND-2. The methods used to process these evaluated libraries in a consistent way into applications files for use in Monte Carlo calculations are presented. Using these libraries, we are calculating and comparing to experiment k_eff for 68 fast critical assemblies of 233U, 235U and 239Pu with reflectors of various materials and thicknesses
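
    The integral test described above boils down to computing calculated-to-experimental (C/E) ratios of k_eff per assembly and per library, where a ratio of 1.0 means perfect agreement. A minimal sketch, with placeholder assembly names and k_eff values rather than actual ENDL/ENDF/JENDL results:

```python
# Sketch of a C/E comparison for criticality benchmarks. All numbers below are
# illustrative placeholders, not results from the evaluated-library comparison.

experiments = {"assembly_1": 1.0000, "assembly_2": 1.0000}  # benchmark k_eff ~ 1

calculated = {
    "library_X": {"assembly_1": 0.9987, "assembly_2": 1.0021},
    "library_Y": {"assembly_1": 1.0005, "assembly_2": 0.9978},
}

def ce_ratios(calc, expt):
    """C/E ratio per assembly; 1.0 means perfect agreement with experiment."""
    return {a: calc[a] / expt[a] for a in expt}

for lib, k in calculated.items():
    ratios = ce_ratios(k, experiments)
    worst = max(ratios.values(), key=lambda r: abs(r - 1.0))
    print(lib, ratios, "largest deviation from unity:", abs(worst - 1.0))
```

    Tabulating the largest deviation per library is one simple way such comparisons flag a suspect cross-section set for re-evaluation.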

  19. Benchmarking in TESOL: A Study of the Malaysia Education Blueprint 2013

    Science.gov (United States)

    Jawaid, Arif

    2014-01-01

    Benchmarking is a very common real-life activity that occurs unnoticed at every moment. It has travelled from industry to education like other quality disciplines. Initially, benchmarking was used in higher education. Now it is diffusing into other areas, including TESOL (Teaching English to Speakers of Other Languages), which has yet to devise a…

  20. International benchmarking of electricity transmission by regulators: A contrast between theory and practice?

    International Nuclear Information System (INIS)

    Haney, Aoife Brophy; Pollitt, Michael G.

    2013-01-01

    Benchmarking of electricity networks has a key role in sharing the benefits of efficiency improvements with consumers and ensuring regulated companies earn a fair return on their investments. This paper analyses and contrasts the theory and practice of international benchmarking of electricity transmission by regulators. We examine the literature relevant to electricity transmission benchmarking and discuss the results of a survey of 25 national electricity regulators. While new panel data techniques aimed at dealing with unobserved heterogeneity and the validity of the comparator group look intellectually promising, our survey suggests that they are in their infancy for regulatory purposes. In electricity transmission, relative to electricity distribution, choosing variables is particularly difficult, because of the large number of potential variables to choose from. Failure to apply benchmarking appropriately may negatively affect investors’ willingness to invest in the future. While few of our surveyed regulators acknowledge that regulatory risk is currently an issue in transmission benchmarking, many more concede it might be. In the meantime new regulatory approaches – such as those based on tendering, negotiated settlements, a wider range of outputs or longer term grid planning – are emerging and will necessarily involve a reduced role for benchmarking. -- Highlights: •We discuss how to benchmark electricity transmission. •We report survey results from 25 national energy regulators. •Electricity transmission benchmarking is more challenging than benchmarking distribution. •Many regulators concede benchmarking may raise capital costs. •Many regulators are considering new regulatory approaches

  1. The CMSSW benchmarking suite: Using HEP code to measure CPU performance

    International Nuclear Information System (INIS)

    Benelli, G

    2010-01-01

    The demanding computing needs of the CMS experiment require thoughtful planning and management of its computing infrastructure. A key factor in this process is the use of realistic benchmarks when assessing the computing power of the different architectures available. In recent years a discrepancy has been observed between the CPU performance estimates given by the reference benchmark for HEP computing (SPECint) and the actual performance of HEP code. Making use of the CPU performance tools from the CMSSW performance suite, comparative CPU performance studies have been carried out on several architectures. A benchmarking suite has been developed and integrated in the CMSSW framework, to allow computing centers and interested third parties to benchmark architectures directly with CMSSW. The CMSSW benchmarking suite can be used out of the box to test and compare several machines in terms of CPU performance and to report the different benchmarking scores (e.g. by processing step) and results at the desired level of detail. In this talk we briefly describe the CMSSW software performance suite, and in detail the CMSSW benchmarking suite client/server design, the performance data analysis and the available CMSSW benchmark scores. The experience in the use of HEP code for benchmarking will be discussed and CMSSW benchmark results presented.

  2. (Invited) Microreactors for Characterization and Benchmarking of Photocatalysts

    DEFF Research Database (Denmark)

    Vesborg, Peter Christian Kjærgaard; Dionigi, Fabio; Trimarco, Daniel Bøndergaard

    2015-01-01

    In the field of photocatalysis the batch-nature of the typical benchmarking experiment makes it very laborious to obtain good kinetic data as a function of parameters such as illumination wavelength, irradiance, catalyst temperature, reactant composition, etc. Microreactors with on-line mass...

  3. Method of characteristics - Based sensitivity calculations for international PWR benchmark

    International Nuclear Information System (INIS)

    Suslov, I. R.; Tormyshev, I. V.; Komlev, O. G.

    2013-01-01

    Method to calculate sensitivity of fractional-linear neutron flux functionals to transport equation coefficients is proposed. Implementation of the method on the basis of MOC code MCCG3D is developed. Sensitivity calculations for fission intensity for international PWR benchmark are performed. (authors)

  4. Benchmarking urban energy efficiency in the UK

    International Nuclear Information System (INIS)

    Keirstead, James

    2013-01-01

    This study asks what is the ‘best’ way to measure urban energy efficiency. There has been recent interest in identifying efficient cities so that best practices can be shared, a process known as benchmarking. Previous studies have used relatively simple metrics that provide limited insight on the complexity of urban energy efficiency and arguably fail to provide a ‘fair’ measure of urban performance. Using a data set of 198 urban UK local administrative units, three methods are compared: ratio measures, regression residuals, and data envelopment analysis. The results show that each method has its own strengths and weaknesses regarding the ease of interpretation, ability to identify outliers and provide consistent rankings. Efficient areas are diverse but are notably found in low income areas of large conurbations such as London, whereas industrial areas are consistently ranked as inefficient. The results highlight the shortcomings of the underlying production-based energy accounts. Ideally urban energy efficiency benchmarks would be built on consumption-based accounts, but interim recommendations are made regarding the use of efficiency measures that improve upon current practice and facilitate wider conversations about what it means for a specific city to be energy-efficient within an interconnected economy. - Highlights: • Benchmarking is a potentially valuable method for improving urban energy performance. • Three different measures of urban energy efficiency are presented for UK cities. • Most efficient areas are diverse but include low-income areas of large conurbations. • Least efficient areas perform industrial activities of national importance. • Improve current practice with grouped per capita metrics or regression residuals
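
    Two of the three methods compared above can be sketched in a few lines: a per-capita ratio measure, and a regression-residual measure in which energy use is regressed on a driver such as population and areas are ranked by their residual (negative residual means using less energy than the fit predicts). The four areas and their numbers below are invented; the study used 198 UK local administrative units.

```python
# Sketch of two urban energy-efficiency benchmarking methods: ratio measures
# and ordinary-least-squares regression residuals. Data are hypothetical.

population = [100_000, 250_000, 50_000, 500_000]
energy_gwh = [900.0, 2000.0, 600.0, 3800.0]

# Method 1: ratio measure -- energy per capita; lower means "more efficient".
ratio = [e / p for e, p in zip(energy_gwh, population)]

# Method 2: regression residuals -- fit energy = a + b * population, then rank
# by residual; a negative residual means the area uses less energy than its
# population alone would predict.
n = len(population)
mx = sum(population) / n
my = sum(energy_gwh) / n
b = sum((x - mx) * (y - my) for x, y in zip(population, energy_gwh)) / \
    sum((x - mx) ** 2 for x in population)
a = my - b * mx
residual = [y - (a + b * x) for x, y in zip(population, energy_gwh)]

print("per-capita ratios:", [round(r, 4) for r in ratio])
print("OLS residuals:    ", [round(r, 1) for r in residual])
```

    The two methods need not agree: the ratio measure penalizes small areas with high fixed energy demands, while the residual measure compares each area only against what the fitted relationship predicts for its size, which is one reason the study finds each method has distinct strengths and weaknesses.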

  5. Model-Based Engineering and Manufacturing CAD/CAM Benchmark

    International Nuclear Information System (INIS)

    Domm, T.D.; Underwood, R.S.

    1999-01-01

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to seek out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) and Work For Others into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The method for obtaining the desired information in these areas centered on the creation of a benchmark questionnaire. The questionnaire was used throughout each of the visits as the basis for information gathering. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were using both 3-D solid models and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system.

  6. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement of the original benchmark book, which was first published in February, 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December, 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February, 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.

  7. Benchmarks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The National Flood Hazard Layer (NFHL) data incorporates all Digital Flood Insurance Rate Map(DFIRM) databases published by FEMA, and any Letters Of Map Revision...

  8. Existence Results for Some Nonlinear Functional-Integral Equations in Banach Algebra with Applications

    Directory of Open Access Journals (Sweden)

    Lakshmi Narayan Mishra

    2016-04-01

    In the present manuscript, we prove some results concerning the existence of solutions for some nonlinear functional-integral equations, a class that contains various integral and functional equations considered in nonlinear analysis and its applications. Utilizing techniques of measures of noncompactness, we apply fixed point theorems, such as Darbo's theorem, in Banach algebra to obtain estimates on the solutions. The results obtained in this paper extend and improve essentially some known results from the recent literature. We also provide an example of a nonlinear functional-integral equation to show the applicability of our main result.

  9. Defining a methodology for benchmarking spectrum unfolding codes

    International Nuclear Information System (INIS)

    Meyer, W.; Kirmser, P.G.; Miller, W.H.; Hu, K.K.

    1976-01-01

    It has long been recognized that different neutron spectrum unfolding codes will produce significantly different results when unfolding the same measured data. In reviewing the results of such analyses it has been difficult to determine which result, if any, is the best representation of what was measured by the spectrometer detector. A proposal to develop a benchmarking procedure for spectrum unfolding codes is presented. The objective of the procedure is to begin to develop a methodology and a set of data, with a well-established and documented result, that could be used to benchmark and standardize the various unfolding methods and codes. It is further recognized that development of such a benchmark must involve a consensus of the technical community interested in neutron spectrum unfolding

  10. Ultracool dwarf benchmarks with Gaia primaries

    Science.gov (United States)

    Marocco, F.; Pinfield, D. J.; Cook, N. J.; Zapatero Osorio, M. R.; Montes, D.; Caballero, J. A.; Gálvez-Ortiz, M. C.; Gromadzki, M.; Jones, H. R. A.; Kurtev, R.; Smart, R. L.; Zhang, Z.; Cabrera Lavers, A. L.; García Álvarez, D.; Qi, Z. X.; Rickard, M. J.; Dover, L.

    2017-10-01

    We explore the potential of Gaia for the field of benchmark ultracool/brown dwarf companions, and present the results of an initial search for metal-rich/metal-poor systems. A simulated population of resolved ultracool dwarf companions to Gaia primary stars is generated and assessed. Of the order of ˜24 000 companions should be identifiable outside of the Galactic plane (|b| > 10 deg) with large-scale ground- and space-based surveys including late M, L, T and Y types. Our simulated companion parameter space covers 0.02 ≤ M/M⊙ ≤ 0.1, 0.1 ≤ age/Gyr ≤ 14 and -2.5 ≤ [Fe/H] ≤ 0.5, with systems required to have a low false alarm probability. Based on this methodology and simulations, our initial search uses the UKIRT Infrared Deep Sky Survey and the Sloan Digital Sky Survey to select secondaries, with the parameters of primaries taken from Tycho-2, the Radial Velocity Experiment, the Large sky Area Multi-Object fibre Spectroscopic Telescope and the Tycho-Gaia Astrometric Solution. We identify and follow up 13 new benchmarks. These include M8-L2 companions, with metallicity constraints ranging in quality, but robust in the range -0.39 ≤ [Fe/H] ≤ +0.36, and with projected physical separation in the range 0.6 < s/kau < 76. Going forward, Gaia offers a very high yield of benchmark systems, from which diverse subsamples may be able to calibrate a range of foundational ultracool/sub-stellar theory and observation.

  11. Burn-up TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    Persic, A.; Ravnik, M.; Zagar, T.

    1998-01-01

    Different reactor codes are used for calculations of reactor parameters. The accuracy of the programs is tested through comparison of the calculated values with experimental results. Well-defined and accurately measured benchmarks are required. The experimental results of reactivity measurements, fuel element reactivity worth distribution and burn-up measurements are presented in this paper. The experiments were performed with a partly burnt reactor core. The experimental conditions were well defined, so that the results can be used as a burn-up benchmark test case for TRIGA Mark II reactor calculations. (author)

  12. Criticality benchmark comparisons leading to cross-section upgrades

    International Nuclear Information System (INIS)

    Alesso, H.P.; Annese, C.E.; Heinrichs, D.P.; Lloyd, W.R.; Lent, E.M.

    1993-01-01

    For several years, criticality benchmark calculations have been performed with COG, a point-wise Monte Carlo code developed at Lawrence Livermore National Laboratory (LLNL). It solves the Boltzmann equation for the transport of neutrons and photons. The principal consideration in developing COG was that the resulting calculation would be as accurate as the point-wise cross-section data, since no physics computational approximations were used. The objective of this paper is to report COG results for criticality benchmark experiments, in concert with MCNP comparisons, which are resulting in corrections and upgrades to the point-wise ENDL cross-section data libraries. Benchmarking discrepancies reported here indicated difficulties in the Evaluated Nuclear Data Livermore (ENDL) cross sections for U-238 at thermal neutron energies. This led to a re-evaluation and selection of the appropriate cross-section values from the several cross-section sets available (ENDL, ENDF/B-V). Further cross-section upgrades are anticipated

  13. REVISED STREAM CODE AND WASP5 BENCHMARK

    International Nuclear Information System (INIS)

    Chen, K

    2005-01-01

    STREAM is an emergency response code that predicts downstream pollutant concentrations for releases from the SRS area to the Savannah River. The STREAM code uses an algebraic equation to approximate the solution of the one-dimensional advective transport differential equation. This approach generates spurious oscillations in the concentration profile when modeling long duration releases. To improve the capability of the STREAM code to model long-term releases, its calculation module was replaced by the WASP5 code. WASP5 is a US EPA water quality analysis program that simulates one-dimensional pollutant transport through surface water. Test cases were performed to compare the revised version of STREAM with the existing version. For continuous releases, results predicted by the revised STREAM code agree with physical expectations. The WASP5 code was benchmarked with the US EPA 1990 and 1991 dye tracer studies, in which the transport of the dye was measured from its release at the New Savannah Bluff Lock and Dam downstream to Savannah. The peak concentrations predicted by the WASP5 agreed with the measurements within ±20.0%. The transport times of the dye concentration peak predicted by the WASP5 agreed with the measurements within ±3.6%. These benchmarking results demonstrate that STREAM should be capable of accurately modeling releases from SRS outfalls
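
    The underlying equation both codes approximate is the one-dimensional advective transport equation, dC/dt + u dC/dx = 0. The sketch below solves it with a first-order upwind scheme to show a slug release moving downstream; it is an illustration of the equation only, not the STREAM or WASP5 implementation, and the grid, velocity, and release values are arbitrary.

```python
# Minimal 1-D advection sketch: dC/dt + u * dC/dx = 0, first-order upwind.
# Hypothetical parameters; not the STREAM or WASP5 numerical scheme.

nx, dx, dt, u = 200, 100.0, 50.0, 1.0   # cells, m, s, m/s
steps = 100
c = [0.0] * nx
c[0:5] = [10.0] * 5                      # short slug release at the upstream end

courant = u * dt / dx                    # must be <= 1 for explicit stability
assert courant <= 1.0

for _ in range(steps):
    new = c[:]
    for i in range(1, nx):
        # Upwind difference: information travels downstream with the flow.
        new[i] = c[i] - courant * (c[i] - c[i - 1])
    new[0] = 0.0                         # clean water enters upstream
    c = new

peak_cell = max(range(nx), key=lambda i: c[i])
print("peak concentration", round(c[peak_cell], 3), "at x =", peak_cell * dx, "m")
```

    After 100 steps the slug's centre has travelled u*dt*steps = 5000 m (about 50 cells) and the peak has been smeared out by numerical diffusion, the same class of discretization artifact that motivates careful scheme selection for long-duration releases.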

  14. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information (1) We did see potential solutions to some of our "top 10" issues (2) We have an assessment of where NASA stands with relation to other aerospace/defense groups We formed new contacts and potential collaborations (1) Several organizations sent us examples of their templates, processes (2) Many of the organizations were interested in future collaboration: sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/ partners (1) Desires to participate in our training; provide feedback on procedures (2) Welcomed opportunity to provide feedback on working with NASA

  15. NEACRP thermal fission product benchmark

    International Nuclear Information System (INIS)

    Halsall, M.J.; Taubman, C.J.

    1989-09-01

    The objective of the thermal fission product benchmark was to compare the range of fission product data in use at the present time. A simple homogeneous problem was set with 200 atoms H/1 atom U235, to be burnt up to 1000 days and then decay for 1000 days. The problem was repeated with 200 atoms H/1 atom Pu239, 20 atoms H/1 atom U235 and 20 atoms H/1 atom Pu239. There were ten participants and the submissions received are detailed in this report. (author)

  16. A Web-Based System for Bayesian Benchmark Dose Estimation.

    Science.gov (United States)

    Shao, Kan; Shapiro, Andrew J

    2018-01-11

    Benchmark dose (BMD) modeling is an important step in human health risk assessment and is used as the default approach to identify the point of departure for risk assessment. A probabilistic framework for dose-response assessment has been proposed and advocated by various institutions and organizations; therefore, a reliable tool is needed to provide distributional estimates for BMD and other important quantities in dose-response assessment. We developed an online system for Bayesian BMD (BBMD) estimation and compared results from this software with U.S. Environmental Protection Agency's (EPA's) Benchmark Dose Software (BMDS). The system is built on a Bayesian framework featuring the application of Markov chain Monte Carlo (MCMC) sampling for model parameter estimation and BMD calculation, which makes the BBMD system fundamentally different from the currently prevailing BMD software packages. In addition to estimating the traditional BMDs for dichotomous and continuous data, the developed system is also capable of computing model-averaged BMD estimates. A total of 518 dichotomous and 108 continuous data sets extracted from the U.S. EPA's Integrated Risk Information System (IRIS) database (and similar databases) were used as testing data to compare the estimates from the BBMD and BMDS programs. The results suggest that the BBMD system may outperform the BMDS program in a number of aspects, including fewer failed BMD and BMDL calculations and estimates. The BBMD system is a useful alternative tool for estimating BMD with additional functionalities for BMD analysis based on most recent research. Most importantly, the BBMD has the potential to incorporate prior information to make dose-response modeling more reliable and can provide distributional estimates for important quantities in dose-response assessment, which greatly facilitates the current trend for probabilistic risk assessment. https://doi.org/10.1289/EHP1289.
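
    The core quantity the abstract refers to can be stated compactly: for a fitted dichotomous dose-response model P(d), the BMD is the dose at which the extra risk [P(d) - P(0)] / [1 - P(0)] reaches the benchmark response (BMR, commonly 10%). The sketch below finds the BMD for a log-logistic model with arbitrary placeholder parameters; a Bayesian tool like BBMD would instead produce a posterior distribution of BMDs, one per MCMC sample.

```python
# Sketch of benchmark-dose computation for a dichotomous log-logistic model.
# The background rate and slope parameters are hypothetical placeholders.
import math

def p_response(dose, background=0.05, a=-3.0, b=1.2):
    """Log-logistic model: bg + (1 - bg) / (1 + exp(-a - b * ln(dose)))."""
    if dose <= 0.0:
        return background
    return background + (1 - background) / (1 + math.exp(-a - b * math.log(dose)))

def extra_risk(dose):
    p0 = p_response(0.0)
    return (p_response(dose) - p0) / (1 - p0)

def bmd(bmr=0.10, lo=1e-6, hi=1e6):
    """Bisection (in log-dose) for the dose where extra risk equals the BMR."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)          # geometric mean = midpoint in log space
        if extra_risk(mid) < bmr:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

print("BMD at 10% extra risk:", round(bmd(), 3))
```

    In a Bayesian framework the same calculation is simply repeated for each posterior parameter sample, which is how distributional BMD estimates (and hence BMDL-like lower bounds) fall out naturally.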

  17. Benchmarking Simulation of Long Term Station Blackout Events

    International Nuclear Information System (INIS)

    Kim, Sung Kyum; Lee, John C.; Fynan, Douglas A.; Lee, John C.

    2013-01-01

    The importance of passive cooling systems has emerged since the station blackout (SBO) events. The turbine-driven auxiliary feedwater (TD-AFW) system is the only passive cooling system for steam generators (SGs) in current PWRs. During SBO events, all alternating current (AC) and direct current (DC) power is lost and the water levels of the steam generators rise. In this case, turbine blades can be degraded and can no longer cool down the SGs. To prevent this kind of degradation, an improved TD-AFW system should be installed in current PWRs, especially OPR 1000 plants. A long-term station blackout (LTSBO) scenario based on the improved TD-AFW system has been benchmarked as a reference input file. The following task is a safety analysis to find important parameters that cause the peak cladding temperature (PCT) to vary. This task has been initiated with the benchmarked input deck, applying the State-of-the-Art Reactor Consequence Analyses (SOARCA) Report. The point of the improved TD-AFW is to control the water level of the SG by using an auxiliary battery charged by a generator connected to the auxiliary turbine. However, this battery could also be disconnected from the generator. To analyze the uncertainties of the failure of the auxiliary battery, a simulation of the time-dependent failure of the TD-AFW has been performed. In addition to the cases simulated in the paper, some valves (e.g., the pressurizer safety valve) that remain available during SBO events could be important parameters for assessing uncertainties in the estimated PCTs. The results for these parameters will be included in a future study, in addition to the results for the leakage of the RCP seals. After the simulation of several transient cases, the alternating conditional expectation (ACE) algorithm will be used to derive functional relationships between the PCT and several system parameters

  18. Analysis of a multigroup stylized CANDU half-core benchmark

    International Nuclear Information System (INIS)

    Pounders, Justin M.; Rahnema, Farzad; Serghiuta, Dumitru

    2011-01-01

    Highlights: → This paper provides a benchmark that is a stylized model problem in more than two energy groups that is realistic with respect to the underlying physics. → An 8-group cross section library is provided to augment a previously published 2-group 3D stylized half-core CANDU benchmark problem. → Reference eigenvalues and selected pin and bundle fission rates are included. → 2-, 4- and 47-group Monte Carlo solutions are compared to analyze homogenization-free transport approximations that result from energy condensation. - Abstract: An 8-group cross section library is provided to augment a previously published 2-group 3D stylized half-core Canadian deuterium uranium (CANDU) reactor benchmark problem. Reference eigenvalues and selected pin and bundle fission rates are also included. This benchmark is intended to provide computational reactor physicists and methods developers with a stylized model problem in more than two energy groups that is realistic with respect to the underlying physics. In addition to transport theory code verification, the 8-group energy structure provides reactor physicist with an ideal problem for examining cross section homogenization and collapsing effects in a full-core environment. To this end, additional 2-, 4- and 47-group full-core Monte Carlo benchmark solutions are compared to analyze homogenization-free transport approximations incurred as a result of energy group condensation.
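
    The energy condensation the benchmark is designed to probe is, in its simplest form, a flux-weighted collapse: a coarse-group cross section is sigma_G = sum_g(sigma_g * phi_g) / sum_g(phi_g) over the fine groups g assigned to coarse group G. The sketch below collapses an 8-group set to 2 groups; the cross sections, fluxes, and group mapping are illustrative, not the benchmark library.

```python
# Sketch of flux-weighted energy-group condensation (8 groups -> 2 groups).
# All values and the group mapping are hypothetical placeholders.

sigma = [1.2, 1.5, 2.0, 3.1, 5.5, 9.0, 20.0, 45.0]        # fine-group sigma (1/cm)
phi = [0.30, 0.25, 0.15, 0.10, 0.08, 0.05, 0.04, 0.03]    # fine-group fluxes
coarse_map = [0, 0, 0, 0, 1, 1, 1, 1]                     # fine group -> coarse group

def collapse(sigma, phi, coarse_map, n_coarse=2):
    """Flux-weighted collapse: sigma_G = sum(sigma_g * phi_g) / sum(phi_g)."""
    num = [0.0] * n_coarse
    den = [0.0] * n_coarse
    for s, f, G in zip(sigma, phi, coarse_map):
        num[G] += s * f
        den[G] += f
    return [n / d for n, d in zip(num, den)]

print("collapsed 2-group sigma:", collapse(sigma, phi, coarse_map))
```

    The approximation error introduced here is exactly what comparing 2-, 4- and 47-group solutions against a fine-group reference exposes: the collapsed set reproduces reaction rates only to the extent that the weighting flux matches the true flux in the full-core problem.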

  19. A large-scale benchmark of gene prioritization methods.

    Science.gov (United States)

    Guala, Dimitri; Sonnhammer, Erik L L

    2017-04-21

    In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for identification and diagnostics of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually only provide internal validations. The Gene Ontology (GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized to construct robust benchmarks that are objective with respect to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools, utilizing the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion (NetRank and two implementations of Random Walk with Restart), and MaxLink, which utilizes the network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand, and helping to guide the development of more accurate methods.
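
    The retrospective strategy described above can be sketched concretely: genes annotated to the same GO term form a positive set; in leave-one-out cross-validation each gene is held out in turn and re-ranked by a network method, here a MaxLink-style neighbour count (links to the remaining positives). The toy network and gene set below are invented for illustration and are unrelated to FunCoup data.

```python
# Sketch of leave-one-out benchmarking of a network-based gene prioritization
# method using a GO-term gene set. Network and gene names are hypothetical.

network = {
    "g1": {"g2", "g3", "g9"},
    "g2": {"g1", "g3"},
    "g3": {"g1", "g2", "g4"},
    "g4": {"g3", "g5"},
    "g5": {"g4"},
    "g9": {"g1"},
}
go_term_genes = {"g1", "g2", "g3"}   # positives: genes sharing one GO term
all_genes = sorted(network)

def rank_of_held_out(held_out):
    """Hold out one positive, score all non-seed genes by links to the seeds,
    and return the held-out gene's rank (1 = best possible)."""
    seeds = go_term_genes - {held_out}
    score = {g: len(network[g] & seeds) for g in all_genes if g not in seeds}
    ordered = sorted(score, key=lambda g: (-score[g], g))
    return ordered.index(held_out) + 1

ranks = {g: rank_of_held_out(g) for g in go_term_genes}
print(ranks)
```

    Aggregating such ranks over many GO terms (e.g. into ROC or precision-recall curves) gives the kind of objective, domain-independent performance measure the benchmark suite is built on.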

  20. Developing a benchmark for emotional analysis of music.

    Science.gov (United States)

    Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field has expanded rapidly in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures, and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent-neural-network-based approaches combined with large feature sets work best for dynamic MER.