WorldWideScience

Sample records for benchmark test cases

  1. Shielding benchmark test

    International Nuclear Information System (INIS)

    Kawai, Masayoshi

    1984-01-01

Iron data in JENDL-2 have been tested by analyzing shielding benchmark experiments on neutron transmission through an iron block, performed at KFK using a 252Cf neutron source and at ORNL using a collimated neutron beam from a reactor. The analyses were made with the shielding analysis code system RADHEAT-V4 developed at JAERI. The calculated results are compared with the measured data. For the KFK experiments, the C/E values are about 1.1. For the ORNL experiments, the calculated values agree with the measured data within an accuracy of 33% for the off-center geometry. The D-T neutron transmission measurements through a carbon sphere made at LLNL are also analyzed preliminarily, using the revised JENDL data for fusion neutronics calculations. (author)
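The C/E (calculated-over-experimental) ratio quoted above is the standard figure of merit in these benchmark analyses; a value near 1.0 indicates good agreement between transport calculation and measurement. A minimal sketch of the computation, using invented sample numbers rather than the KFK or ORNL data:

```python
# Hedged illustration: C/E (calculated / experimental) ratios as used in
# shielding benchmark comparisons. The sample values below are invented
# for demonstration and are not taken from any experiment in this listing.

def c_over_e(calculated, measured):
    """Return the C/E ratio for each paired calculated/measured value."""
    return [c / e for c, e in zip(calculated, measured)]

# Invented transmission values (arbitrary units).
calc = [1.10, 0.95, 1.05]
meas = [1.00, 1.00, 1.00]
print(c_over_e(calc, meas))  # ratios near 1.0 mean the calculation tracks the data
```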

  2. Benchmark tests of JENDL-1

    International Nuclear Information System (INIS)

    Kikuchi, Yasuyuki; Hasegawa, Akira; Takano, Hideki; Kamei, Takanobu; Hojuyama, Takeshi; Sasaki, Makoto; Seki, Yuji; Zukeran, Atsushi; Otake, Iwao.

    1982-02-01

Various benchmark tests were made on JENDL-1. At the first stage, various core center characteristics were tested for many critical assemblies with a one-dimensional model. At the second stage, the applicability of JENDL-1 was further tested on more sophisticated problems for the MOZART and ZPPR-3 assemblies with a two-dimensional model. It was proved that JENDL-1 predicted various quantities of fast reactors satisfactorily as a whole. However, the following problems were pointed out: 1) There is a discrepancy of 0.9% in the k-eff values between the Pu and U cores. 2) The fission rate ratio of 239Pu to 235U is underestimated by 3%. 3) The Doppler reactivity coefficients are overestimated by about 10%. 4) The control rod worths are underestimated by 4%. 5) The fission rates of 235U and 239Pu are underestimated considerably in the outer core and radial blanket regions. 6) The negative sodium void reactivities are overestimated when the sodium is removed from the outer core. As a whole, most of the problems of JENDL-1 seem to be related to the neutron leakage and the neutron spectrum. It was found through further study that most of these problems came from too small diffusion coefficients and too large elastic removal cross sections above 100 keV, probably caused by overestimation of the total and elastic scattering cross sections for structural materials in the unresolved resonance region up to several MeV. (author)

  3. Experimental and computational benchmark tests

    International Nuclear Information System (INIS)

    Gilliam, D.M.; Briesmeister, J.F.

    1994-01-01

A program involving principally NIST, LANL, and ORNL has been in progress for about four years to establish a series of benchmark measurements and calculations related to the moderation and leakage of 252Cf neutrons from a source surrounded by spherical aqueous moderators of various thicknesses and compositions. The motivation for these studies comes from problems in criticality calculations concerning arrays of multiplying components, where the leakage from one component acts as a source for the other components. This talk compares experimental and calculated values for the fission rates of four nuclides - 235U, 239Pu, 238U, and 237Np - in the leakage spectrum from moderator spheres of diameters 76.2 mm, 101.6 mm, and 127.0 mm, with either pure water or enriched B-10 solutions as the moderator. Very detailed Monte Carlo calculations were done with the MCNP code, using a "light water" S(α,β) scattering kernel.

  4. Standard Guide for Benchmark Testing of Light Water Reactor Calculations

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well-controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant-specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...

  5. Reactor group constants and benchmark test

    Energy Technology Data Exchange (ETDEWEB)

    Takano, Hideki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-08-01

The evaluated nuclear data files such as JENDL, ENDF/B-VI and JEF-2 are validated by analyzing critical mock-up experiments for various reactor types and assessing their applicability to nuclear characteristics such as criticality, reaction rates, reactivities, etc. This is called benchmark testing. In nuclear calculations, the diffusion and transport codes use the group constant library, which is generated by processing the nuclear data files. In this paper, the calculation methods for the reactor group constants and the benchmark test are described. Finally, a new group constants scheme is proposed. (author)

  6. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    Fritsch, Daniel S.; Raghavan, Suraj; Boxwala, Aziz; Earnhart, Jon; Tracton, Gregg; Cullip, Timothy; Chaney, Edward L.

    1997-01-01

    Purpose: The purpose of this investigation was to develop methods and software for computing realistic digitally reconstructed electronic portal images with known setup errors for use as benchmark test cases for evaluation and intercomparison of computer-based methods for image matching and detecting setup errors in electronic portal images. Methods and Materials: An existing software tool for computing digitally reconstructed radiographs was modified to compute simulated megavoltage images. An interface was added to allow the user to specify which setup parameter(s) will contain computer-induced random and systematic errors in a reference beam created during virtual simulation. Other software features include options for adding random and structured noise, Gaussian blurring to simulate geometric unsharpness, histogram matching with a 'typical' electronic portal image, specifying individual preferences for the appearance of the 'gold standard' image, and specifying the number of images generated. The visible male computed tomography data set from the National Library of Medicine was used as the planning image. Results: Digitally reconstructed electronic portal images with known setup errors have been generated and used to evaluate our methods for automatic image matching and error detection. Any number of different sets of test cases can be generated to investigate setup errors involving selected setup parameters and anatomic volumes. This approach has proved to be invaluable for determination of error detection sensitivity under ideal (rigid body) conditions and for guiding further development of image matching and error detection methods. Example images have been successfully exported for similar use at other sites. Conclusions: Because absolute truth is known, digitally reconstructed electronic portal images with known setup errors are well suited for evaluation of computer-aided image matching and error detection methods. 
High-quality planning images, such as

  7. Benchmarking

    OpenAIRE

    Meylianti S., Brigita

    1999-01-01

Benchmarking has different meanings to different people. There are five types of benchmarking, namely internal benchmarking, competitive benchmarking, industry/functional benchmarking, process/generic benchmarking and collaborative benchmarking. Each type of benchmarking has its own advantages as well as disadvantages. Therefore it is important to know what kind of benchmarking is suitable for a specific application. This paper will discuss those five types of benchmarking in detail, includ...

  8. The Benchmark Test Results of QNX RTOS

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jang Yeol; Lee, Young Jun; Cheon, Se Woo; Lee, Jang Soo; Kwon, Kee Choon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2010-10-15

A Real-Time Operating System (RTOS) is an Operating System (OS) intended for real-time applications. A benchmark is a point of reference by which something can be measured. QNX is an RTOS developed by QSSL (QNX Software Systems Ltd.) in Canada. ELMSYS is the brand name of a commercially available Personal Computer (PC) for applications such as the Cabinet Operator Module (COM) of the Digital Plant Protection System (DPPS) and the COM of the Digital Engineered Safety Features Actuation System (DESFAS). The ELMSYS PC hardware is being qualified by KTL (Korea Testing Lab.) for use as a Cabinet Operator Module (COM). The QNX RTOS is being dedicated by the Korea Atomic Energy Research Institute (KAERI). This paper describes the outline and benchmark test results on context switching, message passing, synchronization and deadline violation of the QNX RTOS on the ELMSYS PC platform.

  9. The Benchmark Test Results of QNX RTOS

    International Nuclear Information System (INIS)

    Kim, Jang Yeol; Lee, Young Jun; Cheon, Se Woo; Lee, Jang Soo; Kwon, Kee Choon

    2010-01-01

A Real-Time Operating System (RTOS) is an Operating System (OS) intended for real-time applications. A benchmark is a point of reference by which something can be measured. QNX is an RTOS developed by QSSL (QNX Software Systems Ltd.) in Canada. ELMSYS is the brand name of a commercially available Personal Computer (PC) for applications such as the Cabinet Operator Module (COM) of the Digital Plant Protection System (DPPS) and the COM of the Digital Engineered Safety Features Actuation System (DESFAS). The ELMSYS PC hardware is being qualified by KTL (Korea Testing Lab.) for use as a Cabinet Operator Module (COM). The QNX RTOS is being dedicated by the Korea Atomic Energy Research Institute (KAERI). This paper describes the outline and benchmark test results on context switching, message passing, synchronization and deadline violation of the QNX RTOS on the ELMSYS PC platform.
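The quantities benchmarked in the QNX records above (context switching, message passing, synchronization latency) are typically measured with micro-benchmarks that time many repetitions of a single primitive and report the mean. The following generic sketch is not the KAERI/QNX test procedure; it merely illustrates the idea for message passing with a plain Python queue, and all names in it are illustrative.

```python
# Generic message-passing latency micro-benchmark (illustrative only;
# not QNX-specific). Times n send/receive round trips through a queue
# and returns the mean cost per pair.
import time
import queue

def message_passing_benchmark(n_messages=10000):
    q = queue.Queue()
    start = time.perf_counter()
    for i in range(n_messages):
        q.put(i)   # "send"
        q.get()    # "receive"
    elapsed = time.perf_counter() - start
    return elapsed / n_messages  # mean seconds per send/receive pair

print(f"mean latency: {message_passing_benchmark() * 1e6:.2f} us")
```

A real RTOS benchmark would use the OS's native IPC primitives and a high-resolution hardware timer, but the loop-and-average structure is the same.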

  10. Ansys Benchmark of the Single Heater Test

    International Nuclear Information System (INIS)

    H.M. Wade; H. Marr; M.J. Anderson

    2006-01-01

The Single Heater Test (SHT) is the first of three in-situ thermal tests included in the site characterization program for the potential nuclear waste monitored geologic repository at Yucca Mountain. The heating phase of the SHT started in August 1996 and was concluded in May 1997 after 9 months of heating. Cooling continued until January 1998, at which time post-test characterization of the test block commenced. Numerous thermal, hydrological, mechanical, and chemical sensors monitored the coupled processes in the unsaturated fractured rock mass around the heater (CRWMS M&O 1999). The objective of this calculation is to benchmark a numerical simulation of the rock mass thermal behavior against the extensive data set that is available from the thermal test. The scope is limited to three-dimensional (3-D) numerical simulations of the computational domain of the Single Heater Test and surrounding rock mass. This calculation supports the waste package thermal design methodology, and is developed by the Waste Package Department (WPD) under Office of Civilian Radioactive Waste Management (OCRWM) procedure AP-3.12Q, Revision 0, ICN 3, BSCN 1, Calculations

  11. The IAEA Coordinated Research Program on HTGR Reactor Physics, Thermal-hydraulics and Depletion Uncertainty Analysis: Description of the Benchmark Test Cases and Phases

    Energy Technology Data Exchange (ETDEWEB)

    Frederik Reitsma; Gerhard Strydom; Bismark Tyobeka; Kostadin Ivanov

    2012-10-01

The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of design and safety features with reliable high-fidelity physics models and robust, efficient, and accurate codes. The uncertainties in the HTR analysis tools are today typically assessed with sensitivity analysis, and then a few important input uncertainties (typically based on a PIRT process) are varied in the analysis to find a spread in the parameter of importance. However, one wishes to apply a more fundamental approach to determine the predictive capability and accuracy of the coupled neutronics/thermal-hydraulics and depletion simulations used for reactor design and safety assessment. Today there is broader acceptance of the use of uncertainty analysis, even in safety studies, and it has in some cases been accepted by regulators to replace the traditional conservative analysis. Finally, there is also a renewed focus on supplying reliable covariance data (nuclear data uncertainties) that can then be used in uncertainty methods. Uncertainty and sensitivity studies are therefore becoming an essential component of any significant effort in data and simulation improvement. In order to address uncertainty in analysis and methods in the HTGR community, the IAEA launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling early in 2012. The project is built on the experience of the OECD/NEA Light Water Reactor (LWR) Uncertainty Analysis in Best-Estimate Modelling (UAM) benchmark activity, but focuses specifically on the peculiarities of HTGR designs and their simulation requirements. Two benchmark problems were defined: the prismatic type is represented by the MHTGR-350 design from General Atomics (GA), while a 250 MW modular pebble bed design, similar to the INET (China) and indirect-cycle PBMR (South Africa) designs, is also included. In the paper more detail on the benchmark cases, the different specific phases and tasks and the latest

  12. Pescara benchmark: overview of modelling, testing and identification

    International Nuclear Information System (INIS)

    Bellino, A; Garibaldi, L; Marchesiello, S; Brancaleoni, F; Gabriele, S; Spina, D; Bregant, L; Carminelli, A; Catania, G; Sorrentino, S; Di Evangelista, A; Valente, C; Zuccarino, L

    2011-01-01

The 'Pescara benchmark' is part of the national research project 'BriViDi' (BRIdge VIbrations and DIagnosis) supported by the Italian Ministero dell'Universita e Ricerca. The project is aimed at developing an integrated methodology for the structural health evaluation of railway r/c and p/c bridges. The methodology should provide for applicability in operating conditions, easy data acquisition through common industrial instrumentation, and robustness and reliability against structural and environmental uncertainties. The Pescara benchmark consisted of lab tests to obtain a consistent and large experimental database, and subsequent data processing. Special tests were devised to simulate train transit effects in actual field conditions. Prestressed concrete beams of current industrial production, both sound and damaged at various corrosion severity levels, were tested. The results were collected both in a deterministic setting and in a form suitable for dealing with experimental uncertainties. Damage identification was split into two approaches: with or without a reference model. In the first case, finite element models were used in conjunction with non-conventional updating techniques. In the second case, specialized output-only identification techniques capable of dealing with time-variant and possibly nonlinear systems were developed. The lab tests allowed validation of the above approaches and of the performance of classical modal-based damage indicators.

  13. Shielding benchmark tests of JENDL-3

    International Nuclear Information System (INIS)

    Kawai, Masayoshi; Hasegawa, Akira; Ueki, Kohtaro; Yamano, Naoki; Sasaki, Kenji; Matsumoto, Yoshihiro; Takemura, Morio; Ohtani, Nobuo; Sakurai, Kiyoshi.

    1994-03-01

The integral test of neutron cross sections for major shielding materials in JENDL-3 has been performed by analyzing various shielding benchmark experiments. For the fission-like neutron source problem, the following experiments are analyzed: (1) ORNL Broomstick experiments for oxygen, iron and sodium, (2) ASPIS deep penetration experiments for iron, (3) ORNL neutron transmission experiments for iron, stainless steel, sodium and graphite, (4) KfK leakage spectrum measurements from iron spheres, (5) RPI angular neutron spectrum measurements in a graphite block. For the D-T neutron source problem, the following two experiments are analyzed: (6) LLNL leakage spectrum measurements from spheres of iron and graphite, and (7) JAERI-FNS angular neutron spectrum measurements on beryllium and graphite slabs. Analyses have been performed using the radiation transport codes ANISN (1-D Sn), DIAC (1-D Sn), DOT3.5 (2-D Sn) and MCNP (3-D point Monte Carlo). The group cross sections for Sn transport calculations are generated with the code systems PROF-GROUCH-G/B and RADHEAT-V4. The point-wise cross sections for MCNP are produced with NJOY. For comparison, analyses with JENDL-2 and ENDF/B-IV have also been carried out. The calculations using JENDL-3 show overall agreement with the experimental data, as do those with ENDF/B-IV. In particular, JENDL-3 gives better results than JENDL-2 and ENDF/B-IV for sodium. It is concluded that JENDL-3 is well suited for fission and fusion reactor shielding analyses. (author)

  14. Reactor calculation benchmark PCA blind test results

    International Nuclear Information System (INIS)

    Kam, F.B.K.; Stallmann, F.W.

    1980-01-01

    Further improvement in calculational procedures or a combination of calculations and measurements is necessary to attain 10 to 15% (1 sigma) accuracy for neutron exposure parameters (flux greater than 0.1 MeV, flux greater than 1.0 MeV, and dpa). The calculational modeling of power reactors should be benchmarked in an actual LWR plant to provide final uncertainty estimates for end-of-life predictions and limitations for plant operations. 26 references, 14 figures, 6 tables

  15. Reactor calculation benchmark PCA blind test results

    Energy Technology Data Exchange (ETDEWEB)

    Kam, F.B.K.; Stallmann, F.W.

    1980-01-01

    Further improvement in calculational procedures or a combination of calculations and measurements is necessary to attain 10 to 15% (1 sigma) accuracy for neutron exposure parameters (flux greater than 0.1 MeV, flux greater than 1.0 MeV, and dpa). The calculational modeling of power reactors should be benchmarked in an actual LWR plant to provide final uncertainty estimates for end-of-life predictions and limitations for plant operations. 26 references, 14 figures, 6 tables.

  16. Benchmark testing calculations for 232Th

    International Nuclear Information System (INIS)

    Liu Ping

    2003-01-01

The cross sections of 232Th from CNDC and JENDL-3.3 were processed with the NJOY97.45 code into the ACE format for the continuous-energy Monte Carlo code MCNP4C. The k-eff values and central reaction rates based on CENDL-3.0, JENDL-3.3 and ENDF/B-VI.2 were calculated with the MCNP4C code for the benchmark assembly, and comparisons with experimental results are given. (author)

  17. Status on benchmark testing of CENDL-3

    CERN Document Server

    Liu Ping

    2002-01-01

CENDL-3, the newest version of the China Evaluated Nuclear Data Library, has been finished and recently distributed for benchmark analysis. The processing was carried out using the NJOY nuclear data processing code system. The calculations and analysis of benchmarks were done with the Monte Carlo code MCNP and the reactor lattice code WIMSD5A. The calculated results were compared with the experimental results and with those based on ENDF/B-VI. In most thermal and fast uranium criticality benchmarks, the k-eff values calculated with CENDL-3 were in good agreement with experimental results. In the plutonium fast cores, the k-eff values were improved significantly with CENDL-3. This is due to the reevaluation of the fission spectrum and elastic angular distributions of 239Pu and 240Pu. CENDL-3 underestimated the k-eff values compared with other evaluated data libraries for most spherical or cylindrical assemblies of plutonium or uranium with beryllium

  18. Summary of the benchmark test on artificial noise data

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J.E.; Ciftcioglu, O.; Dam, H. van

    1988-01-01

A survey is given of the SMORN-V artificial noise benchmark test for checking autoregressive modelling of noise and testing anomaly detection methods. A detailed description of the system used to generate the signals is given. Contributions from 7 participants have been received. Not all participants executed both the tests on the stationary data and those on the anomaly data. Comparison of plots of transfer functions, noise contribution ratios and the spectrum of a noise source obtained from AR analysis shows partly satisfactory agreement (except for normalization) and partly distinct disagreement. This was also the case for the several parameters to be determined numerically. The covariance matrices of the intrinsic noise sources showed considerable differences. Participants dealing with the anomaly data used very different methods for anomaly detection. Two of them detected both anomalies present in the signals; one participant detected only the first anomaly, and another only the second.

  19. Summary of the benchmark test on artificial noise data

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.; Ciftcioglu, O.; Dam, H. van

    1988-01-01

A survey is given of the SMORN-V artificial noise benchmark test for checking autoregressive modelling of noise and testing anomaly detection methods. A detailed description of the system used to generate the signals is given. Contributions from 7 participants have been received. Not all participants executed both the tests on the stationary data and those on the anomaly data. Comparison of plots of transfer functions, noise contribution ratios and the spectrum of a noise source obtained from AR analysis shows partly satisfactory agreement (except for normalization) and partly distinct disagreement. This was also the case for the several parameters to be determined numerically. The covariance matrices of the intrinsic noise sources showed considerable differences. Participants dealing with the anomaly data used very different methods for anomaly detection. Two of them detected both anomalies present in the signals; one participant detected only the first anomaly, and another only the second. (author)

  20. Benchmarking Academic Libraries: An Australian Case Study.

    Science.gov (United States)

    Robertson, Margaret; Trahn, Isabella

    1997-01-01

    Discusses experiences and outcomes of benchmarking at the Queensland University of Technology (Australia) library that compared acquisitions, cataloging, document delivery, and research support services with those of the University of New South Wales. Highlights include results as a catalyst for change, and the use of common output and performance…

  1. Thermal reactor benchmark tests on JENDL-2

    International Nuclear Information System (INIS)

    Takano, Hideki; Tsuchihashi, Keichiro; Yamane, Tsuyoshi; Akino, Fujiyoshi; Ishiguro, Yukio; Ido, Masaru.

    1983-11-01

A group constant library for the thermal reactor standard nuclear design code system SRAC was produced using the evaluated nuclear data of JENDL-2. In addition, the group constants for 235U were also calculated from ENDF/B-V. Thermal reactor benchmark calculations were performed using the produced group constant library. The selected benchmark cores are two water-moderated lattices (TRX-1 and 2), two heavy-water-moderated cores (DCA and ETA-1), two graphite-moderated cores (SHE-8 and 13) and eight critical experiments for criticality safety. The effective multiplication factors and lattice cell parameters were calculated and compared with the experimental values. The results are summarized as follows. (1) Effective multiplication factors: The results with JENDL-2 are considerably improved in comparison with those with ENDF/B-IV. The best agreement is obtained using JENDL-2 and ENDF/B-V (235U only) data. (2) Lattice cell parameters: For rho-28 (the ratio of epithermal to thermal 238U captures) and C* (the ratio of 238U captures to 235U fissions), the values calculated with JENDL-2 are in good agreement with the experimental values. The delta-28 (the ratio of 238U to 235U fissions) values are overestimated, as also found for the fast reactor benchmarks. The rho-02 (the ratio of epithermal to thermal 232Th captures) values calculated with JENDL-2 or ENDF/B-IV are considerably underestimated. The functions of the SRAC system have continued to be extended according to the needs of its users. A brief description of the extended parts of the SRAC system, together with the input specification, is given in Appendix B. (author)
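The lattice cell parameters named in this abstract are conventionally defined as ratios of reaction rates; assuming the standard TRX-style definitions (an assumption, since the abstract does not spell them out), they can be written as:

```latex
\rho^{28} = \frac{\text{epithermal }{}^{238}\text{U captures}}{\text{thermal }{}^{238}\text{U captures}}, \qquad
\delta^{28} = \frac{{}^{238}\text{U fissions}}{{}^{235}\text{U fissions}}, \qquad
C^{*} = \frac{{}^{238}\text{U captures}}{{}^{235}\text{U fissions}}
```

The rho-02 parameter mentioned for the thorium cores is the analogous epithermal-to-thermal capture ratio for 232Th.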

  2. SMORN-III benchmark test on reactor noise analysis methods

    International Nuclear Information System (INIS)

    Shinohara, Yoshikuni; Hirota, Jitsuya

    1984-02-01

    A computational benchmark test was performed in conjunction with the Third Specialists Meeting on Reactor Noise (SMORN-III) which was held in Tokyo, Japan in October 1981. This report summarizes the results of the test as well as the works made for preparation of the test. (author)

  3. Benchmark Calculations on Halden IFA-650 LOCA Test Results

    International Nuclear Information System (INIS)

    Ek, Mirkka; Kekkonen, Laura; Kelppe, Seppo; Stengaard, J.O.; Josek, Radomir; Wiesenack, Wolfgang; Aounallah, Yacine; Wallin, Hannu; Grandjean, Claude; Herb, Joachim; Lerchl, Georg; Trambauer, Klaus; Sonnenburg, Heinz-Guenther; Nakajima, Tetsuo; Spykman, Gerold; Struzik, Christine

    2010-01-01

The assessment of the consequences of a loss-of-coolant accident (LOCA) is to a large extent based on calculations carried out with codes especially developed for addressing the phenomena occurring during the transient. Since the time of the first LOCA experiments, which were largely conducted with fresh fuel, changes in fuel design, the introduction of new cladding materials and in particular the move to high burnup have not only generated a need to re-examine the LOCA safety criteria and to verify their continued validity, but also to confirm that codes perform appropriately, especially with respect to high-burnup phenomena influencing LOCA fuel behaviour. As part of international efforts, the OECD Halden Reactor Project program implemented a test series to address particular LOCA issues. Based on recommendations of a group of experts from the US NRC, EPRI, EDF, FRAMATOME-ANP and GNF, the primary objectives of the experiments were defined as: 1. Measure the extent of fuel (fragment) relocation into the ballooned region and evaluate its possible effect on cladding temperature and oxidation. 2. Investigate the extent (if any) of 'secondary transient hydriding' on the inner side of the cladding above and below the burst region. The Halden LOCA series, using high-burnup fuel segments, contains test cases well suited for checking the ability of LOCA analysis codes to predict or reproduce the measurements and to provide clues as to where the codes need to be improved. The NEA Working Group on Fuel Safety (WGFS) therefore decided to conduct a code benchmark based on the Halden LOCA test series. Emphasis was on the codes' ability to predict or reproduce the thermal and mechanical response of fuel and cladding. Before starting the benchmark, participants were given the opportunity to tune their codes to the experimental system applied in the Halden LOCA tests. To this end, the data from the two commissioning runs were made available. The first of these runs went

  4. Shear Strength Measurement Benchmarking Tests for K Basin Sludge Simulants

    Energy Technology Data Exchange (ETDEWEB)

    Burns, Carolyn A.; Daniel, Richard C.; Enderlin, Carl W.; Luna, Maria; Schmidt, Andrew J.

    2009-06-10

    Equipment development and demonstration testing for sludge retrieval is being conducted by the K Basin Sludge Treatment Project (STP) at the MASF (Maintenance and Storage Facility) using sludge simulants. In testing performed at the Pacific Northwest National Laboratory (under contract with the CH2M Hill Plateau Remediation Company), the performance of the Geovane instrument was successfully benchmarked against the M5 Haake rheometer using a series of simulants with shear strengths (τ) ranging from about 700 to 22,000 Pa (shaft corrected). Operating steps for obtaining consistent shear strength measurements with the Geovane instrument during the benchmark testing were refined and documented.

  5. Benchmarking

    OpenAIRE

    Beretta Sergio; Dossi Andrea; Grove Hugh

    2000-01-01

    Due to their particular nature, the benchmarking methodologies tend to exceed the boundaries of management techniques, and to enter the territories of managerial culture. A culture that is also destined to break into the accounting area not only strongly supporting the possibility of fixing targets, and measuring and comparing the performance (an aspect that is already innovative and that is worthy of attention), but also questioning one of the principles (or taboos) of the accounting or...

  6. International benchmark tests of the FENDL-1 Nuclear Data Library

    International Nuclear Information System (INIS)

    Fischer, U.

    1997-01-01

An international benchmark validation task has been conducted to validate the fusion evaluated nuclear data library FENDL-1 through data tests against integral 14 MeV neutron experiments. The main objective of this task was to qualify the FENDL-1 working libraries for fusion applications and to elaborate recommendations for further data improvements. Several laboratories and institutions from the European Union, Japan, the Russian Federation and the US have contributed to the benchmark task. A large variety of existing integral 14 MeV benchmark experiments was analysed with the FENDL-1 working libraries for continuous-energy Monte Carlo and multigroup discrete ordinates calculations. Results of the benchmark analyses have been collected, discussed and evaluated. The major findings, conclusions and recommendations are presented in this paper. With regard to data quality, it is summarised that fusion nuclear data have reached a high confidence level with the available FENDL-1 data library. With few exceptions, this holds for the materials of highest importance for fusion reactor applications. As a result of the performed benchmark analyses, some existing deficiencies and discrepancies have been identified that are recommended for removal in the forthcoming FENDL-2 data file. (orig.)

  7. Benchmarking and testing the “Sea Level Equation”

    DEFF Research Database (Denmark)

    Spada, G.; Barletta, Valentina Roberta; Klemann, V.

    2012-01-01

through which the methods may be validated. Following the example of the mantle convection community and our recent successful Benchmark for Post Glacial Rebound codes (Spada et al., 2011), here we present the results of a benchmark study of independently developed codes designed to solve the SLE....... This study has taken place within a collaboration facilitated through the European Cooperation in Science and Technology (COST) Action ES0701. The tests involve predictions of past and current sea level variations, and 3D deformations of the Earth surface. In spite of the significant differences...

  8. Benchmark and physics testing of LIFE-4C. Summary

    International Nuclear Information System (INIS)

    Liu, Y.Y.

    1984-06-01

LIFE-4C is a steady-state/transient analysis code developed for performance evaluation of carbide [(U,Pu)C and UC] fuel elements in advanced LMFBRs. This paper summarizes selected results obtained during a crucial step in the development of LIFE-4C: benchmark and physics testing.

  9. JENDL-3.3 thermal reactor benchmark test

    International Nuclear Information System (INIS)

    Akie, Hiroshi

    2001-01-01

    Integral tests of JENDL-3.2 nuclear data library have been carried out by Reactor Integral Test WG of Japanese Nuclear Data Committee. The most important problem in the thermal reactor benchmark testing was the overestimation of the multiplication factor of the U fueled cores. With several revisions of the data of 235 U and the other nuclides, JENDL-3.3 data library gives a good estimation of multiplication factors both for U and Pu fueled thermal reactors. (author)

  10. Calculations of IAEA-CRP-6 Benchmark Case 1 through 7 for a TRISO-Coated Fuel Particle

    International Nuclear Information System (INIS)

    Kim, Young Min; Lee, Y. W.; Chang, J. H.

    2005-01-01

IAEA-CRP-6 is a coordinated research program of the IAEA on advances in HTGR fuel technology. The CRP examines aspects of HTGR fuel technology, ranging from design and fabrication to characterization, irradiation testing, performance modeling, as well as licensing and quality control issues. The benchmark section of the program treats simple analytical cases, pyrocarbon layer behavior, single TRISO-coated fuel particle behavior, and benchmark calculations of some irradiation experiments performed and planned. There are seventeen benchmark cases in the program in total. Member countries are participating in the benchmark calculations of the CRP with their own fuel performance analysis computer codes. Korea is also taking part in the benchmark calculations using a fuel performance analysis code, COPA (COated PArticle), which is being developed at the Korea Atomic Energy Research Institute. This study presents the calculational results of IAEA-CRP-6 benchmark cases 1 through 7, which describe the structural behavior of a single fuel particle.

  11. International benchmark on the natural convection test in Phenix reactor

    International Nuclear Information System (INIS)

    Tenchine, D.; Pialla, D.; Fanning, T.H.; Thomas, J.W.; Chellapandi, P.; Shvetsov, Y.; Maas, L.; Jeong, H.-Y.; Mikityuk, K.; Chenu, A.; Mochizuki, H.; Monti, S.

    2013-01-01

    Highlights: ► Phenix main characteristics, instrumentation and natural convection test are described. ► “Blind” calculations and post-test calculations from all the participants to the benchmark are compared to reactor data. ► Lessons learned from the natural convection test and the associated calculations are discussed. -- Abstract: The French Phenix sodium cooled fast reactor (SFR) started operation in 1973 and was stopped in 2009. Before the reactor was definitively shutdown, several final tests were planned and performed, including a natural convection test in the primary circuit. During this natural convection test, the heat rejection provided by the steam generators was disabled, followed several minutes later by reactor scram and coast-down of the primary pumps. The International Atomic Energy Agency (IAEA) launched a Coordinated Research Project (CRP) named “control rod withdrawal and sodium natural circulation tests performed during the Phenix end-of-life experiments”. The overall purpose of the CRP was to improve the Member States’ analytical capabilities in the field of SFR safety. An international benchmark on the natural convection test was organized with “blind” calculations in a first step, then “post-test” calculations and sensitivity studies compared with reactor measurements. Eight organizations from seven Member States took part in the benchmark: ANL (USA), CEA (France), IGCAR (India), IPPE (Russian Federation), IRSN (France), KAERI (Korea), PSI (Switzerland) and University of Fukui (Japan). Each organization performed computations and contributed to the analysis and global recommendations. This paper summarizes the findings of the CRP benchmark exercise associated with the Phenix natural convection test, including blind calculations, post-test calculations and comparisons with measured data. General comments and recommendations are pointed out to improve future simulations of natural convection in SFRs

  12. Cross sections, benchmarks, etc.: What is data testing all about

    International Nuclear Information System (INIS)

    Wagschal, J.; Yeivin, Y.

    1985-01-01

In order to determine the consistency of two distinct measurements of a physical quantity, the discrepancy d between the two should be compared with its own standard deviation, σ = √(σ₁² + σ₂²). To properly test a given cross-section library by a set of benchmark (integral) measurements, the quantity corresponding to (d/σ)² is the quadratic form d†C⁻¹d. Here d is the vector whose components are the discrepancies between the calculated values of the integral parameters and their corresponding measured values, and C is the uncertainty matrix of these discrepancies. This quadratic form is the only true measure of the joint consistency of the library and benchmarks. On the other hand, the very matrix C is essentially all one needs to adjust the library by the benchmarks. Therefore, any argument against adjustment simultaneously disqualifies all serious attempts to test cross-section libraries against integral benchmarks
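The two quantities described in the abstract can be put side by side in a short numerical sketch; the measurement values, discrepancy vector and covariance matrix below are invented for illustration only, not taken from the paper.

```python
import numpy as np

# Two distinct measurements of the same quantity: compare the discrepancy d
# with its own standard deviation sigma = sqrt(sigma1^2 + sigma2^2).
x1, s1 = 1.020, 0.010
x2, s2 = 0.995, 0.020
d_scalar = x1 - x2
sigma = np.sqrt(s1**2 + s2**2)
z2 = (d_scalar / sigma) ** 2          # (d/sigma)^2, ~chi-squared with 1 dof

# Library-vs-benchmarks generalization: chi2 = d^T C^-1 d, where d holds the
# calculated-minus-measured discrepancies and C is their covariance matrix.
d = np.array([0.011, -0.008, 0.004])
C = np.array([[4.0e-4, 1.0e-4, 0.0e-4],
              [1.0e-4, 9.0e-4, 0.5e-4],
              [0.0e-4, 0.5e-4, 2.5e-4]])
chi2 = d @ np.linalg.solve(C, d)      # avoids forming C^-1 explicitly

print(z2, chi2)
```

For a consistent library and benchmark set, chi2 should be comparable to the number of integral parameters (its degrees of freedom); a much larger value signals inconsistency.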

  13. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    textabstractBenchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  14. Melcor benchmarking against integral severe fuel damage tests

    Energy Technology Data Exchange (ETDEWEB)

    Madni, I.K. [Brookhaven National Lab., Upton, NY (United States)

    1995-09-01

MELCOR is a fully integrated computer code that models all phases of the progression of severe accidents in light water reactor nuclear power plants, and is being developed for the U.S. Nuclear Regulatory Commission (NRC) by Sandia National Laboratories (SNL). Brookhaven National Laboratory (BNL) has a program with the NRC to provide independent assessment of MELCOR, and a very important part of this program is to benchmark MELCOR against experimental data from integral severe fuel damage tests and predictions of that data from more mechanistic codes such as SCDAP or SCDAP/RELAP5. Benchmarking analyses with MELCOR have been carried out at BNL for five integral severe fuel damage tests, including PBF SFD 1-1, SFD 1-4, and NRU FLHT-2. This paper presents those analyses and their role in identifying areas of modeling strengths and weaknesses in MELCOR.

  15. Benchmark enclosure fire suppression experiments - phase 1 test report.

    Energy Technology Data Exchange (ETDEWEB)

    Figueroa, Victor G.; Nichols, Robert Thomas; Blanchat, Thomas K.

    2007-06-01

A series of fire benchmark water suppression tests were performed that may provide guidance for dispersal systems for the protection of high value assets. The test results provide boundary and temporal data necessary for water spray suppression model development and validation. A review of fire suppression is presented for both gaseous suppression and water mist fire suppression. The experimental setup and procedure for gathering water suppression performance data are shown. Characteristics of the nozzles used in the testing are presented. Results of the experiments are discussed.

  16. International Benchmark on Pressurised Water Reactor Sub-channel and Bundle Tests. Volume II: Benchmark Results of Phase I: Void Distribution

    International Nuclear Information System (INIS)

    Rubin, Adam; Avramova, Maria; Velazquez-Lozada, Alexander

    2016-03-01

This report summarises the first phase of the Nuclear Energy Agency (NEA) and US Nuclear Regulatory Commission (NRC) benchmark based on NUPEC PWR Sub-channel and Bundle Tests (PSBT), which was intended to provide data for the verification of void distribution models in participants' codes. This phase was composed of four exercises. Exercise 1: steady-state single sub-channel benchmark; Exercise 2: steady-state rod bundle benchmark; Exercise 3: transient rod bundle benchmark; and Exercise 4: a pressure drop benchmark. The experimental data provided to the participants of this benchmark are from a series of void measurement tests using full-size mock-ups for both Boiling Water Reactors (BWRs) and Pressurised Water Reactors (PWRs). These tests were performed from 1987 to 1995 by the Nuclear Power Engineering Corporation (NUPEC) in Japan and made available by the Japan Nuclear Energy Safety Organisation (JNES) for the purposes of this benchmark, which was organised by Pennsylvania State University. Twenty-one institutions from nine countries participated in this benchmark. Seventeen different computer codes were used in Exercises 1, 2, 3 and 4, among them porous-media, sub-channel, system thermal-hydraulics and Computational Fluid Dynamics (CFD) codes. It was observed that the codes tended to overpredict the thermal equilibrium quality at lower elevations and underpredict it at higher elevations. There was also a tendency to overpredict void fraction at lower elevations and underpredict it at higher elevations for the bundle test cases. The overprediction of void fraction at low elevations is likely caused by the x-ray densitometer measurement method used. Under sub-cooled boiling conditions, the voids accumulate at heated surfaces (and are therefore not seen in the centre of the sub-channel, where the measurements are being taken), so the experimentally-determined void fractions will be lower than the actual void fraction. Some of the best...

  17. Benchmarking to Identify Practice Variation in Test Ordering: A Potential Tool for Utilization Management.

    Science.gov (United States)

    Signorelli, Heather; Straseski, Joely A; Genzen, Jonathan R; Walker, Brandon S; Jackson, Brian R; Schmidt, Robert L

    2015-01-01

    Appropriate test utilization is usually evaluated by adherence to published guidelines. In many cases, medical guidelines are not available. Benchmarking has been proposed as a method to identify practice variations that may represent inappropriate testing. This study investigated the use of benchmarking to identify sites with inappropriate utilization of testing for a particular analyte. We used a Web-based survey to compare 2 measures of vitamin D utilization: overall testing intensity (ratio of total vitamin D orders to blood-count orders) and relative testing intensity (ratio of 1,25(OH)2D to 25(OH)D test orders). A total of 81 facilities contributed data. The average overall testing intensity index was 0.165, or approximately 1 vitamin D test for every 6 blood-count tests. The average relative testing intensity index was 0.055, or one 1,25(OH)2D test for every 18 of the 25(OH)D tests. Both indexes varied considerably. Benchmarking can be used as a screening tool to identify outliers that may be associated with inappropriate test utilization. Copyright© by the American Society for Clinical Pathology (ASCP).
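The two utilization indexes defined in the abstract are simple order-count ratios; a minimal sketch follows, with hypothetical order counts for one facility (not data from the study).

```python
# Hypothetical monthly test-order counts for one facility (illustrative only)
orders = {
    "vitamin_d_total": 1650,   # all vitamin D orders (25(OH)D + 1,25(OH)2D)
    "blood_count": 10000,      # complete-blood-count orders, used as a proxy
    "d_125": 90,               # 1,25(OH)2D orders
    "d_25": 1560,              # 25(OH)D orders
}

# Overall testing intensity: vitamin D orders per blood-count order
overall_intensity = orders["vitamin_d_total"] / orders["blood_count"]

# Relative testing intensity: 1,25(OH)2D orders per 25(OH)D order
relative_intensity = orders["d_125"] / orders["d_25"]

print(round(overall_intensity, 3))   # ~1 vitamin D test per 6 blood counts
print(round(relative_intensity, 3))  # ~one 1,25(OH)2D per 17-18 25(OH)D
```

In a benchmarking screen, a facility whose indexes sit far from the peer-group averages (0.165 and 0.055 in the study) would be flagged for utilization review.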

  18. Benchmarking of industrial control systems via case-based reasoning

    International Nuclear Information System (INIS)

    Hadjiiski, M.; Boshnakov, K.; Georgiev, Z.

    2013-01-01

Full text: The recent development of information and communication technologies enables the establishment of virtual consultation centers for the control of specific processes; since the location of the installations has no influence on the results, such centers can serve plants worldwide. The centers can provide consultations regarding the quality of process control and overall enterprise management, taking into account correction factors such as weather conditions, the product or service and its associated technology, production level, quality of the feedstock used, and others. The benchmarking technique is chosen as a tool for analyzing and comparing the quality of the assessed control systems in individual plants. Benchmarking is a process of gathering, analyzing and comparing data on the characteristics of comparable units in order to assess and compare these characteristics and improve the performance of the particular process, enterprise or organization. By comparing the different processes and adopting the best practices, energy efficiency can be improved and hence the competitiveness of the participating organizations will increase. In the presented work, an algorithm for benchmarking and parametric optimization of a given control system is developed by applying the approaches of Case-Based Reasoning (CBR) and Data Envelopment Analysis (DEA). Expert knowledge and approaches for optimal tuning of control systems are combined. Two of the most common systems for automatic control of different variables in biological wastewater treatment are presented and discussed. Based on an analysis of the processes, different cases are defined. Using DEA, the relative efficiencies of 10 systems for automatic control of dissolved oxygen are estimated. The CBR and DEA approaches designed and implemented in the current work are applicable for the purposes of virtual consultation centers. Key words: benchmarking technique, energy efficiency, Case-Based Reasoning (CBR)

  19. Results of the Monte Carlo 'simple case' benchmark exercise

    International Nuclear Information System (INIS)

    2003-11-01

A new 'simple case' benchmark intercomparison exercise was launched, intended to study the importance of the fundamental nuclear data constants, physics treatments and geometry model approximations employed by Monte Carlo codes in common use. The exercise was also directed at determining the level of agreement which can be expected between measured and calculated quantities, using current state-of-the-art modelling codes and techniques. To this end, measurements and Monte Carlo calculations of the total (or gross) neutron count rates have been performed using a simple moderated 3 He cylindrical proportional counter array, or 'slab monitor', counting geometry; a very simple geometry was deliberately selected for this exercise.

  20. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    Science.gov (United States)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.

  1. WLUP benchmarks

    International Nuclear Information System (INIS)

    Leszczynski, Francisco

    2002-01-01

The IAEA-WIMS Library Update Project (WLUP) is in its final stage; the final library will be released in 2002. It is the result of research and development by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. The organization, naming conventions, contents and documentation of the WLUP benchmarks are presented, together with an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE for analysis and plotting of results is described, and some examples are given. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)

  2. Probabilistic fracture mechanics applied for lbb case study: international benchmark

    International Nuclear Information System (INIS)

    Radu, V.

    2015-01-01

An application of probabilistic fracture mechanics to evaluate the structural integrity of a case study chosen from the experimental mock-ups of the FP7 STYLE project is described. The reliability model for probabilistic structural integrity, focused on the assessment of a through-wall crack (TWC) in the pipe weld under complex loading (bending moment and residual stress), has been set up. The basic model is the model of fracture for a through-wall cracked pipe under elastic-plastic conditions. The corresponding structural reliability approach is developed with the probabilities of failure associated with the maximum load for crack initiation and net-section collapse, as well as the evaluation of instability loads. The probabilities of failure for a through-wall crack in a pipe subject to pure bending are evaluated by using crude Monte Carlo simulations. The results from the international benchmark are presented for the mentioned case in the context of ageing and lifetime management of pressure boundary/pressure circuit components. (authors)
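A crude Monte Carlo estimate of a failure probability, of the kind the abstract refers to, can be sketched as follows. The load and capacity distributions and their parameters are invented for illustration and do not reflect the STYLE mock-up data.

```python
import random

random.seed(1)  # fixed seed for reproducibility

# Crude Monte Carlo: failure occurs when the applied bending moment
# exceeds the (uncertain) limit moment of the cracked pipe section.
N = 100_000
failures = 0
for _ in range(N):
    applied = random.gauss(mu=100.0, sigma=15.0)  # applied moment, kN*m
    limit = random.gauss(mu=160.0, sigma=20.0)    # limit moment, kN*m
    if applied > limit:
        failures += 1

p_f = failures / N  # estimated probability of failure
print(p_f)
```

The standard error of such an estimator is sqrt(p_f(1 - p_f)/N), so rare-event problems need either very large N or variance-reduction techniques (importance sampling, subset simulation).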

  3. Test One to Test Many: A Unified Approach to Quantum Benchmarks

    Science.gov (United States)

    Bai, Ge; Chiribella, Giulio

    2018-04-01

    Quantum benchmarks are routinely used to validate the experimental demonstration of quantum information protocols. Many relevant protocols, however, involve an infinite set of input states, of which only a finite subset can be used to test the quality of the implementation. This is a problem, because the benchmark for the finitely many states used in the test can be higher than the original benchmark calculated for infinitely many states. This situation arises in the teleportation and storage of coherent states, for which the benchmark of 50% fidelity is commonly used in experiments, although finite sets of coherent states normally lead to higher benchmarks. Here, we show that the average fidelity over all coherent states can be indirectly probed with a single setup, requiring only two-mode squeezing, a 50-50 beam splitter, and homodyne detection. Our setup enables a rigorous experimental validation of quantum teleportation, storage, amplification, attenuation, and purification of noisy coherent states. More generally, we prove that every quantum benchmark can be tested by preparing a single entangled state and measuring a single observable.
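The 50% figure cited above is the well-known classical benchmark for coherent states: under the standard assumption of a flat prior over the complex amplitude α, the best average fidelity achievable by any classical measure-and-prepare strategy is

```latex
\bar{F}_{\mathrm{classical}}
  \;=\; \max_{\text{measure-and-prepare}}
    \int \mathrm{d}^2\alpha \; p(\alpha)\,
    \langle \alpha \,|\, \rho_{\mathrm{out}}(\alpha) \,|\, \alpha \rangle
  \;=\; \tfrac{1}{2},
```

so an experimentally measured average fidelity above 1/2 certifies genuinely quantum teleportation or storage. (The bound and its flat-prior assumption are standard background, not derived in this abstract.)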

  4. Structural Benchmark Testing for Stirling Convertor Heater Heads

    Science.gov (United States)

    Krause, David L.; Kalluri, Sreeramesh; Bowman, Randy R.

    2007-01-01

The National Aeronautics and Space Administration (NASA) has identified high efficiency Stirling technology for potential use on long duration Space Science missions such as Mars rovers, deep space missions, and lunar applications. For the long lifetimes required, a structurally significant design limit for the Stirling convertor heater head is creep deformation induced even under relatively low stress levels at high material temperatures. Conventional investigations of creep behavior adequately rely on experimental results from uniaxial creep specimens, and much creep data is available for the proposed Inconel-718 (IN-718) and MarM-247 nickel-based superalloy materials of construction. However, very little experimental creep information is available that directly applies to the atypical thin walls, the specific microstructures, and the low stress levels. In addition, the geometry and loading conditions apply multiaxial stress states on the heater head components, far from the conditions of uniaxial testing. For these reasons, experimental benchmark testing is underway to aid in accurately assessing the durability of Stirling heater heads. The investigation supplements uniaxial creep testing with pneumatic testing of heater head test articles at elevated temperatures and with stress levels ranging from one to seven times design stresses. This paper presents experimental methods, results, post-test microstructural analyses, and conclusions for both accelerated and non-accelerated tests. The Stirling projects use the results to calibrate deterministic and probabilistic analytical creep models of the heater heads to predict their lifetimes.

  5. Simulation of Benchmark Cases with the Terminal Area Simulation System (TASS)

    Science.gov (United States)

    Ahmad, Nash'at; Proctor, Fred

    2011-01-01

The hydrodynamic core of the Terminal Area Simulation System (TASS) is evaluated against different benchmark cases. In the absence of closed form solutions for the equations governing atmospheric flows, the models are usually evaluated against idealized test cases. Over the years, various authors have suggested a suite of these idealized cases which have become standards for testing and evaluating the dynamics and thermodynamics of atmospheric flow models. In this paper, simulations of three such cases are described. In addition, the TASS model is evaluated against a test case that uses an exact solution of the Navier-Stokes equations. The TASS results are compared against previously reported simulations of these benchmark cases in the literature. It is demonstrated that the TASS model is highly accurate, stable and robust.

  6. The benchmark testing of 9Be of CENDL-3

    International Nuclear Information System (INIS)

    Liu Ping

    2002-01-01

    CENDL-3, the latest version of China Evaluated Nuclear Data Library was finished. The data of 9 Be were updated, and distributed for benchmark analysis recently. The calculated results were presented, and compared with the experimental data and the results based on other evaluated nuclear data libraries. The results show that CENDL-3 is better than others for most benchmarks

  7. Benchmark Testing of the Largest Titanium Aluminide Sheet Subelement Conducted

    Science.gov (United States)

    Bartolotta, Paul A.; Krause, David L.

    2000-01-01

    To evaluate wrought titanium aluminide (gamma TiAl) as a viable candidate material for the High-Speed Civil Transport (HSCT) exhaust nozzle, an international team led by the NASA Glenn Research Center at Lewis Field successfully fabricated and tested the largest gamma TiAl sheet structure ever manufactured. The gamma TiAl sheet structure, a 56-percent subscale divergent flap subelement, was fabricated for benchmark testing in three-point bending. Overall, the subelement was 84-cm (33-in.) long by 13-cm (5-in.) wide by 8-cm (3-in.) deep. Incorporated into the subelement were features that might be used in the fabrication of a full-scale divergent flap. These features include the use of: (1) gamma TiAl shear clips to join together sections of corrugations, (2) multiple gamma TiAl face sheets, (3) double hot-formed gamma TiAl corrugations, and (4) brazed joints. The structural integrity of the gamma TiAl sheet subelement was evaluated by conducting a room-temperature three-point static bend test.

  8. Review of benchmark tests on JENDL-2B library

    International Nuclear Information System (INIS)

    Kikuchi, Yasuyuki

    1982-01-01

JENDL-2B is a mixed library consisting of JENDL-2 for the most important nuclides, i.e., 235 U, 238 U, 239 Pu, 240 Pu, 241 Pu, Cr, Fe and Ni, and of JENDL-1 for other nuclides. Complete reevaluation work was made for these eight nuclides. The simultaneous evaluation method was adopted in the evaluation for the five heavy nuclides. The resonance structure was carefully studied for the structural materials in the unresolved resonance region up to several MeV. Benchmark tests have been made on JENDL-2B. Various core center characteristics were tested with a one-dimensional model for a total of 27 assemblies. Satisfactory results were obtained as a whole. The results of spectrum indices, however, suggested some inconsistent spectrum prediction. Moreover, the reactivity worths were overestimated for most materials, and apparent C/E discrepancies were observed between the Pu and U cores. Applicability of JENDL-2B was further tested on more sophisticated problems for the MOZART and ZPPR-3 assemblies. The reaction rate distributions were better predicted with JENDL-2B than with JENDL-1. The positive sodium void reactivity worth was much overestimated with JENDL-2B due to too large moderation components. The control rod worths were well predicted in MOZART, but were considerably underpredicted in ZPPR-3. (author)

  9. Testing New Programming Paradigms with NAS Parallel Benchmarks

    Science.gov (United States)

    Jin, H.; Frumkin, M.; Schultz, M.; Yan, J.

    2000-01-01

Over the past decade, high performance computing has evolved rapidly, not only in hardware architectures but also with increasing complexity of real applications. Technologies have been developed that aim at scaling up to thousands of processors on both distributed and shared memory systems. Development of parallel programs on these computers is always a challenging task. Today, writing parallel programs with message passing (e.g. MPI) is the most popular way of achieving scalability and high performance. However, writing message passing programs is difficult and error prone. In recent years, new effort has been made in defining new parallel programming paradigms. The best examples are HPF (based on data parallelism) and OpenMP (based on shared memory parallelism). Both provide simple and clear extensions to sequential programs, thus greatly simplifying the tedious tasks encountered in writing message passing programs. HPF is independent of the memory hierarchy; however, due to the immaturity of compiler technology its performance is still questionable. Although the use of parallel compiler directives is not new, OpenMP offers a portable solution in the shared-memory domain. Another important development involves the tremendous progress in the internet and its associated technology. Although still in its infancy, Java promises portability in a heterogeneous environment and offers the possibility to "compile once and run anywhere." In light of testing these new technologies, we implemented new parallel versions of the NAS Parallel Benchmarks (NPBs) with HPF and OpenMP directives, and extended the work with Java and Java-threads. The purpose of this study is to examine the effectiveness of alternative programming paradigms. NPBs consist of five kernels and three simulated applications that mimic the computation and data movement of large scale computational fluid dynamics (CFD) applications. We started with the serial version included in NPB2.3. Optimization of memory and cache usage ...

  10. The PAC-MAN model: Benchmark case for linear acoustics in computational physics

    Science.gov (United States)

    Ziegelwanger, Harald; Reiter, Paul

    2017-10-01

Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma causes a need for analytical sound field formulations of complex acoustic problems. A well known example of such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which describes the three-dimensional sound field radiated from a sphere with a missing octant analytically. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut-out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.
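The modal formulation mentioned above is not spelled out in the abstract; for a 2D exterior acoustics problem it typically takes the form of a cylindrical-harmonic expansion (a generic sketch, with symbols assumed rather than taken from the paper):

```latex
p_s(r,\varphi) \;=\; \sum_{n=-\infty}^{\infty} c_n \, H_n^{(2)}(kr)\, e^{\mathrm{i} n \varphi},
\qquad r \ge a,
```

where H_n^{(2)} is the Hankel function of the second kind (outgoing waves for an e^{iωt} time convention), k the acoustic wavenumber, a the cylinder radius, and the coefficients c_n follow from the boundary conditions on the cut cylinder surface. Truncating the sum reduces the 2D field evaluation to a 1D sum over modes, which is the source of the low computational cost noted in the abstract.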

  11. Present status and benchmark tests of JENDL-2

    International Nuclear Information System (INIS)

    Kikuchi, Y.

    1983-01-01

The second version of the Japanese Evaluated Nuclear Data Library (JENDL-2) consists of the evaluated data from 10 -5 eV to 20 MeV for 176 nuclides including 99 fission product nuclei. Complete reevaluation has been made of the heavy actinide, fission product and main structural material nuclides. Benchmark tests have been made on JENDL-2 for fast reactor application. Various characteristics in the core center have been tested with a one-dimensional model for a total of 27 assemblies, and more sophisticated problems have been examined for MOZART and ZPPR-3. Furthermore, analyses of the JUPITER project give useful information. Satisfactory results have been obtained as a whole. However, the spectrum is a little underestimated above a few hundred keV and below a few keV. The positive sodium void reactivity worth is much overestimated. As to the latter, the sensitivity analysis with the generalized perturbation method suggests that the fission cross section of 239 Pu below a few keV has an important role. (Auth.)

  12. Case mix classification and a benchmark set for surgery scheduling

    NARCIS (Netherlands)

    Leeftink, Gréanne; Hans, Erwin W.

    Numerous benchmark sets exist for combinatorial optimization problems. However, in healthcare scheduling, only a few benchmark sets are known, mainly focused on nurse rostering. One of the most studied topics in the healthcare scheduling literature is surgery scheduling, for which there is no widely

  13. A Machine-to-Machine protocol benchmark for eHealth applications - Use case: Respiratory rehabilitation.

    Science.gov (United States)

    Talaminos-Barroso, Alejandro; Estudillo-Valderrama, Miguel A; Roa, Laura M; Reina-Tosina, Javier; Ortega-Ruiz, Francisco

    2016-06-01

M2M (Machine-to-Machine) communications represent one of the main pillars of the new paradigm of the Internet of Things (IoT), and are making possible new opportunities for the eHealth business. Nevertheless, the large number of M2M protocols currently available hinders the selection of a suitable solution that satisfies the requirements that eHealth applications can demand. The objectives were, in the first place, to develop a tool that provides a benchmarking analysis in order to objectively select among the most relevant M2M protocols for eHealth solutions; in the second place, to validate the tool with a particular use case: respiratory rehabilitation. A software tool, called Distributed Computing Framework (DFC), has been designed and developed to execute the benchmarking tests and facilitate the deployment in environments with a large number of machines, independently of the protocol and performance metrics selected. The DDS, MQTT, CoAP, JMS, AMQP and XMPP protocols were evaluated considering different specific performance metrics, including CPU usage, memory usage, bandwidth consumption, latency and jitter. The results obtained allowed the validation of a use case: respiratory rehabilitation of chronic obstructive pulmonary disease (COPD) patients in two scenarios with different types of requirements: Home-Based and Ambulatory. The results of the benchmark comparison can guide eHealth developers in the choice of M2M technologies. In this regard, the framework presented is a simple and powerful tool for the deployment of benchmark tests under specific environments and conditions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
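Two of the metrics named above, latency and jitter, can be computed from per-message latency samples in a few lines; the samples below are invented for illustration, and jitter is taken here as the mean absolute difference between consecutive samples (an assumption in the spirit of RFC 3550, since the abstract does not define it).

```python
import statistics

# Hypothetical per-message one-way latency samples (milliseconds)
# collected during a benchmark run
latencies_ms = [12.1, 11.8, 12.5, 13.0, 11.9, 12.2, 12.8, 12.0]

# Mean latency over the run
mean_latency = statistics.mean(latencies_ms)

# Jitter: mean absolute difference between consecutive latency samples
jitter = statistics.mean(
    abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])
)

print(round(mean_latency, 2))  # average delay seen by the application
print(round(jitter, 2))        # variability of that delay
```

Aggregating such metrics per protocol, together with CPU, memory and bandwidth counters sampled on each machine, is what allows the objective side-by-side comparison the abstract describes.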

  14. International Benchmark based on Pressurised Water Reactor Sub-channel and Bundle Tests. Volume III: Departure from Nucleate Boiling

    International Nuclear Information System (INIS)

    Rubin, Adam; Avramova, Maria; Velazquez-Lozada, Alexander

    2016-03-01

    This report summarises the second phase of the Nuclear Energy Agency (NEA) and the Nuclear Regulatory Commission (NRC) Benchmark Based on NUPEC PWR Sub-channel and Bundle Tests (PSBT), which was intended to provide data for the verification of Departure from Nucleate Boiling (DNB) prediction in existing thermal-hydraulics codes and to provide direction for the development of future methods. This phase was composed of three exercises: Exercise 1, fluid temperature benchmark; Exercise 2, steady-state rod bundle benchmark; and Exercise 3, transient rod bundle benchmark. The experimental data provided to the participants of this benchmark come from a series of void measurement tests using full-size mock-ups for both BWRs and PWRs. These tests were performed from 1987 to 1995 by the Nuclear Power Engineering Corporation (NUPEC) in Japan and were made available by the Japan Nuclear Energy Safety Organisation (JNES) for the purposes of this benchmark, which was organised by Pennsylvania State University. Nine institutions from seven countries participated in this benchmark. Nine different computer codes were used in Exercises 1, 2 and 3, among them porous-media, sub-channel and systems thermal-hydraulic codes. The improvement of FLICA-OVAP (sub-channel) over FLICA (sub-channel) was noticeable. The main difference between the two is that FLICA-OVAP implicitly assigns the flow regime based on a drift-flux model, while FLICA assumes single-phase flow. In Exercises 2 and 3, the codes were generally able to predict the Departure from Nucleate Boiling (DNB) power as well as the axial location of the onset of DNB (for the steady-state cases) and the time of DNB (for the transient cases). It was noted that the codes that used the Electric Power Research Institute (EPRI) Critical Heat Flux (CHF) correlation had the lowest mean error in Exercise 2 for the predicted DNB power

  15. Benchmark tests for fast and thermal reactor applications

    International Nuclear Information System (INIS)

    Seki, Yuji

    1984-01-01

    Integral tests of the JENDL-2 library for fast and thermal reactor applications are reviewed, including relevant analyses of the JUPITER experiments. Criticality and core-center characteristics were tested with one-dimensional models for a total of 27 fast critical assemblies. More sophisticated problems, such as reaction rate distributions, control rod worths and sodium void reactivities, were tested using two-dimensional models for the MOZART and ZPPR-3 assemblies. Main observations from the fast core benchmark tests are as follows. 1) The criticality is well predicted; the average C/E value is 0.999+-0.008 for uranium cores and 0.997+-0.005 for plutonium cores. 2) The calculation underpredicts the reaction rate ratio 239Pu(fis)/235U(fis) by 3% and overpredicts 238U(cap)/239Pu(fis) by 6%. These results are consistent with those of the JUPITER analyses. 3) The reaction rate distributions in cores of prototype size are well predicted, within +-3%. In the larger JUPITER cores, however, the C/E value increases with radial distance from the core center, up to 6% at the outer core edge. 4) The prediction of control rod worths is satisfactory; C/E values lie in the range 0.92 to 0.97 with no apparent dependence on 10B enrichment or the number of control rods inserted. Spatial dependence of C/E is also observed in the JUPITER cores. 5) The sodium void reactivity is overpredicted by 30% to 50% toward the positive side. Main observations from the thermal core benchmark tests are as follows. 1) The criticality is well predicted, as in the fast core tests; the average C/E is 0.997+-0.003. 2) The calculation overpredicts 238U(fis)/235U(fis) by 3% to 6%, showing the same tendency as in the small and medium size fast assemblies. The 238U(cap)/235U(fis) ratio is well predicted in the thermal cores. The calculated reaction rate ratios of 232Th deviate from the measurements by 10% to 15%. (author)

  16. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes (''Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks,'' R.E. Glass, Sandia National Laboratories, 1985; ''Sample Problem Manual for Benchmarking of Cask Analysis Codes,'' R.E. Glass, Sandia National Laboratories, 1988; ''Standard Thermal Problem Set for the Evaluation of Heat Transfer Codes Used in the Assessment of Transportation Packages,'' R.E. Glass, et al., Sandia National Laboratories, 1988) used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in ''Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks,'' R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem. 6 refs., 5 figs

  17. BENCHMARKING MOBILE LASER SCANNING SYSTEMS USING A PERMANENT TEST FIELD

    Directory of Open Access Journals (Sweden)

    H. Kaartinen

    2012-07-01

    Full Text Available The objective of the study was to benchmark the geometric accuracy of mobile laser scanning (MLS) systems using a permanent test field under good GNSS coverage. Mobile laser scanning, also called mobile terrestrial laser scanning, is currently a rapidly developing area of laser scanning in which laser scanners, GNSS and an IMU are mounted onboard a moving vehicle. MLS can be considered to fill the gap between airborne and terrestrial laser scanning. Data provided by MLS systems can be characterized by the following technical parameters: (a) a point density in the range of 100-1000 points per m2 at 10 m distance, (b) a distance measurement accuracy of 2-5 cm, and (c) an operational scanning range from 1 to 100 m. Several commercial systems (e.g. from Riegl and Optech) and some research mobile laser scanning systems surveyed the test field using predefined driving speeds and directions. The acquired georeferenced point clouds were delivered for analysis. The geometric accuracy of the point clouds was determined using the reference targets that could be identified and measured from the point cloud. Results show that in good GNSS conditions most systems can reach an accuracy of 2 cm both in plane and in elevation. The accuracy of a low-cost system, whose price is less than a tenth of that of the other systems, appears to be within a few centimetres, at least for ground elevation determination. Inaccuracies in the relative orientation of the instruments lead to systematic errors and, when several scanners are used, to multiple reproductions of the objects. Mobile laser scanning systems can collect high-density point cloud data with high accuracy. A permanent test field is well suited for verifying and comparing the performance of different mobile laser scanning systems. The accuracy of the relative orientation between the mapping instruments needs more attention. For example, if the object is seen double in the point cloud due to imperfect boresight calibration between two

  18. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks, R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem

  19. Issues in Institutional Benchmarking of Student Learning Outcomes Using Case Examples

    Science.gov (United States)

    Judd, Thomas P.; Pondish, Christopher; Secolsky, Charles

    2013-01-01

    Benchmarking is a process that can take place at both the inter-institutional and intra-institutional level. This paper focuses on benchmarking intra-institutional student learning outcomes using case examples. The findings of the study illustrate the point that when the outcomes statements associated with the mission of the institution are…

  20. National benchmarking against GLOBALGAP : Case studies of Good Agricultural Practices in Kenya, Malaysia, Mexico and Chile

    NARCIS (Netherlands)

    Valk, van der O.M.C.; Roest, van der J.G.

    2009-01-01

    This desk study examines the experiences and lessons learned from four case studies of countries aiming at the GLOBALGAP benchmarking procedure for national Good Agricultural Practices, namely Chile, Kenya, Malaysia, and Mexico. Aspects that determine the origin and character of the benchmarking

  1. Homogeneous fast reactor benchmark testing of CENDL-2 and ENDF/B-6

    International Nuclear Information System (INIS)

    Liu Guisheng

    1995-11-01

    How to choose a correct weighting spectrum for producing multigroup constants for fast reactor benchmark calculations has been studied. A correct weighting option yields satisfactory results for k eff and central reaction rate ratios in the benchmark tests of CENDL-2 and ENDF/B-6 against nine fast reactor assemblies. (author). 8 refs, 2 figs, 4 tabs

  2. Homogeneous fast reactor benchmark testing of CENDL-2 and ENDF/B-6

    International Nuclear Information System (INIS)

    Liu Guisheng

    1995-01-01

    How to choose a correct weighting spectrum for producing multigroup constants for fast reactor benchmark calculations has been studied. A correct weighting option yields satisfactory results for k eff and central reaction rate ratios in the benchmark tests of CENDL-2 and ENDF/B-6 against nine fast reactor assemblies. (4 tabs., 2 figs.)

  3. Review of recent benchmark experiments on integral test for high energy nuclear data evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Nakashima, Hiroshi; Tanaka, Susumu; Konno, Chikara; Fukahori, Tokio; Hayashi, Katsumi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-11-01

    A survey of recent benchmark experiments on integral tests for high energy nuclear data evaluation was carried out as part of the work of the Task Force on JENDL High Energy File Integral Evaluation (JHEFIE). In this paper the results are compiled and the status of recent benchmark experiments is described. (author)

  4. Adventure Tourism Benchmark – Analyzing the Case of Suesca, Cundinamarca

    Directory of Open Access Journals (Sweden)

    Juan Felipe Tsao Borrero

    2012-11-01

    Full Text Available Adventure tourism is a growing sector within the tourism industry and understanding its dynamics is fundamental for adventure tourism destinations and their local authorities. Destination benchmarking is a strong tool to identify the performance of tourism services offered at the destination in order to design appropriate policies to improve its competitiveness. The benchmarking study of Suesca, an adventure tourism destination in Colombia, helps to identify the gaps compared with successful adventure tourism destinations around the world, and provides valuable information to local policy-makers on the features to be improved. The lack of available information to tourists and financial facilities hinders the capability of Suesca to improve its competitiveness.

  5. Benchmark test of evaluated nuclear data files for fast reactor neutronics application

    International Nuclear Information System (INIS)

    Chiba, Go; Hazama, Taira; Iwai, Takehiko; Numata, Kazuyuki

    2007-07-01

    A benchmark test of the latest evaluated nuclear data files, JENDL-3.3, JEFF-3.1 and ENDF/B-VII.0, has been carried out for fast reactor neutronics applications. For this benchmark test, experimental data obtained at fast critical assemblies and fast power reactors are utilized. In addition to comparing the numerical solutions with the experimental data, we have used sensitivity analyses to extract several cross sections for which differences between the three nuclear data files significantly affect the numerical solutions. This benchmark test concludes that ENDF/B-VII.0 predicts the neutronics characteristics of fast neutron systems better than the other nuclear data files. (author)

  6. Benchmark Analysis of EBR-II Shutdown Heat Removal Tests

    International Nuclear Information System (INIS)

    2017-08-01

    This publication presents the results and main achievements of an IAEA coordinated research project to verify and validate the system and safety codes used in analyses of liquid metal thermal hydraulics and neutronics phenomena in sodium cooled fast reactors. The publication will be of use to researchers and professionals currently working on relevant fast reactor programmes. In addition, it is intended to support the training of the next generation of analysts and designers through international benchmark exercises

  7. OECD/NEA Sandia Fuel Project phase I: Benchmark of the ignition testing

    Energy Technology Data Exchange (ETDEWEB)

    Adorni, Martina, E-mail: martina_adorni@hotmail.it [UNIPI (Italy); Herranz, Luis E. [CIEMAT (Spain); Hollands, Thorsten [GRS (Germany); Ahn, Kwang-II [KAERI (Korea, Republic of); Bals, Christine [GRS (Germany); D' Auria, Francesco [UNIPI (Italy); Horvath, Gabor L. [NUBIKI (Hungary); Jaeckel, Bernd S. [PSI (Switzerland); Kim, Han-Chul; Lee, Jung-Jae [KINS (Korea, Republic of); Ogino, Masao [JNES (Japan); Techy, Zsolt [NUBIKI (Hungary); Velazquez-Lozad, Alexander; Zigh, Abdelghani [USNRC (United States); Rehacek, Radomir [OECD/NEA (France)

    2016-10-15

    Highlights: • A unique PWR spent fuel pool experimental project is analytically investigated. • Predictability of fuel clad ignition in case of a complete loss of coolant in SFPs is assessed. • Computer codes reasonably estimate peak cladding temperature and time of ignition. - Abstract: The OECD/NEA Sandia Fuel Project provided unique thermal-hydraulic experimental data associated with Spent Fuel Pool (SFP) complete drain down. The study conducted at Sandia National Laboratories (SNL) was successfully completed (July 2009 to February 2013). The accident conditions of interest for the SFP were simulated in a full scale prototypic fashion (electrically heated, prototypic assemblies in a prototypic SFP rack) so that the experimental results closely represent actual fuel assembly responses. A major impetus for this work was to facilitate severe accident code validation and to reduce modeling uncertainties within the codes. Phase I focused on axial heating and burn propagation in a single PWR 17 × 17 assembly (i.e. “hot neighbors” configuration). Phase II addressed axial and radial heating and zirconium fire propagation including effects of fuel rod ballooning in a 1 × 4 assembly configuration (i.e. single, hot center assembly and four, “cooler neighbors”). This paper summarizes the comparative analysis regarding the final destructive ignition test of the phase I of the project. The objective of the benchmark is to evaluate and compare the predictive capabilities of computer codes concerning the ignition testing of PWR fuel assemblies. Nine institutions from eight different countries were involved in the benchmark calculations. The time to ignition and the maximum temperature are adequately captured by the calculations. It is believed that the benchmark constitutes an enlargement of the validation range for the codes to the conditions tested, thus enhancing the code applicability to other fuel assembly designs and configurations. The comparison of

  8. Sensitivity Analysis of OECD Benchmark Tests in BISON

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
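The correlation-based measures named in this abstract are straightforward to compute from sampled input/response pairs. A minimal sketch, using synthetic data rather than any BISON output (the `ranks` helper below ignores ties, which a production implementation would average):

```python
import math

def pearson(x, y):
    """Pearson linear correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ranks(v):
    """Integer ranks of the values (no tie handling; sketch only)."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson on the ranks."""
    return pearson(ranks(x), ranks(y))

# Synthetic monotone-but-nonlinear relation: Pearson is below 1,
# while the rank-based Spearman coefficient sees it as perfect
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [x ** 3 for x in xs]
print(round(pearson(xs, ys), 3), round(spearman(xs, ys), 3))
```

This contrast (Pearson 0.943 vs. Spearman 1.0 on a cubic relation) is why sensitivity studies like the one above report both coefficients.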

  9. The IEA Annex 20 Two-Dimensional Benchmark Test for CFD Predictions

    DEFF Research Database (Denmark)

    Nielsen, Peter V.; Rong, Li; Cortes, Ines Olmedo

    2010-01-01

    predictions both for isothermal flow and for nonisothermal flow. The benchmark is defined on a web page, which also shows about 50 different benchmark tests with studies of e.g. grid dependence, numerical schemes, different source codes, different turbulence models, RANS or LES, different turbulence levels in a supply opening, study of local emission and study of airborne chemical reactions. Therefore the web page is also a collection of information which describes the importance of the different elements of a CFD procedure. The benchmark was originally developed for testing two-dimensional flow, but the paper

  10. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres

    Science.gov (United States)

    2010-01-01

    Background Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the benchmarking process and the success factors of benchmarking in international specialized cancer centres. Methods Three independent international benchmarking studies on operations management in cancer centres were conducted. The first study included three comprehensive cancer centres (CCC), three chemotherapy day units (CDU) were involved in the second study, and four radiotherapy departments were included in the final study. For each multiple case study a research protocol was used to structure the benchmarking process. After reviewing the multiple case studies, the resulting description was used to study the research objectives. Results We adapted and evaluated existing benchmarking processes by formalizing stakeholder involvement and verifying the comparability of the partners. We also devised a framework to structure the indicators so as to produce a coherent indicator set and better improvement suggestions. Evaluating the feasibility of benchmarking as a tool to improve hospital processes led to mixed results. Case study 1 resulted in general recommendations for the organizations involved. In case study 2, the combination of benchmarking and lean management led in one CDU to a 24% increase in bed utilization and a 12% increase in productivity. Three radiotherapy departments of case study 3 were considering implementing the recommendations. Additionally, success factors were found, such as a well-defined and small project scope, partner selection based on clear criteria, stakeholder involvement, simple and well-structured indicators, analysis of both the process and its results, and adaptation of the identified better working methods to one's own setting.
Conclusions The improved

  11. Thermal and fast reactor benchmark testing of ENDF/B-6.4

    International Nuclear Information System (INIS)

    Liu Guisheng

    1999-01-01

    The benchmark testing of B-6.4 was done with the same benchmark experiments and calculation method as for B-6.2. The effective multiplication factors k eff, central reaction rate ratios of fast assemblies and lattice cell reaction rate ratios of thermal lattice cell assemblies were calculated and compared with the testing results for B-6.2 and CENDL-2. It is obvious that the 238U data files are most important for the calculations of large fast reactors and thermal lattice reactors. However, the 238U data in the new version of ENDF/B-6 have not been renewed; only the data for 235U, 27Al, 14N and 2D have been renewed in ENDF/B-6.4. It will therefore be shown that the thermal reactor benchmark testing results are remarkably improved while the fast reactor benchmark testing results are not improved

  12. Thermal reactor benchmark testing of 69 group library

    International Nuclear Information System (INIS)

    Liu Guisheng; Wang Yaoqing; Liu Ping; Zhang Baocheng

    1994-01-01

    Using the code system NSLINK, AMPX master libraries in the WIMS 69-group structure are generated from the nuclides of 4 of the newest evaluated nuclear data libraries. Some integral parameters of 10 thermal reactor benchmark assemblies recommended by the U.S. CSEWG are calculated using the rectified PASC-1 code system and compared with foreign results; the authors' results are in good agreement with the others. 69-group libraries of the evaluated data bases in the TPFAP interface file are generated with the NJOY code system. The k ∞ values of 6 cell lattice assemblies are calculated by the code CBM. The calculated results are analysed and compared

  13. Uranium-fuel thermal reactor benchmark testing of CENDL-3

    International Nuclear Information System (INIS)

    Liu Ping

    2001-01-01

    CENDL-3, the new version of the China Evaluated Nuclear Data Library, has recently been processed and distributed for thermal reactor benchmark analysis. The processing was carried out using the NJOY nuclear data processing system. The calculations and analyses of the uranium-fuel thermal assemblies TRX-1,2, BAPL-1,2,3 and ZEEP-1,2,3 were done with the lattice code WIMSD5A. The results were compared with the experimental results, the results of the '1986' WIMS library and the results based on ENDF/B-VI. (author)

  14. Testing of cross section libraries for TRIGA criticality benchmark

    International Nuclear Information System (INIS)

    Snoj, L.; Trkov, A.; Ravnik, M.

    2007-01-01

    The influence of various up-to-date cross section libraries on the multiplication factor of the TRIGA benchmark, as well as the influence of fuel composition on the multiplication factor of a system composed of various types of TRIGA fuel elements, was investigated. It was observed that keff calculated using the ENDF/B-VII cross section library is systematically higher than that calculated using the ENDF/B-VI cross section library. The main contributions (∼220 pcm) are from 235U and Zr. (author)

  15. Integral benchmark test of JENDL-4.0 for U-233 systems with ICSBEP handbook

    International Nuclear Information System (INIS)

    Kuwagaki, Kazuki; Nagaya, Yasunobu

    2017-03-01

    An integral benchmark test of JENDL-4.0 for U-233 systems was conducted using the continuous-energy Monte Carlo code MVP. The previous benchmark test covered only the U-233 thermal solution and fast metallic systems in the ICSBEP handbook. In this study, MVP input files were prepared for previously uninvestigated benchmark problems in the handbook, including compound thermal systems (mainly lattice systems), and the integral benchmark test was performed. The prediction accuracy of JENDL-4.0 was evaluated for the effective multiplication factors (k eff's) of the U-233 systems. As a result, a trend of underestimation was observed for all categories of U-233 systems. In the benchmark test of ENDF/B-VII.1 for U-233 systems with the ICSBEP handbook, a decreasing trend of calculated k eff values with the parameter ATFF (Above-Thermal Fission Fraction) has been reported. The ATFF values were also calculated in this benchmark test of JENDL-4.0, and the same trend as for ENDF/B-VII.1 was observed. A CD-ROM is attached as an appendix. (J.P.N.)

  16. Experimental Test for Benchmark 1--Deck Lid Inner Panel

    International Nuclear Information System (INIS)

    Xu Siguang; Lanker, Terry; Zhang, Jimmy; Wang Chuantao

    2005-01-01

    The Benchmark 1 deck lid inner panel is designed for both aluminum and steel, based on a current General Motors Corporation vehicle product. The die is constructed from a soft tool material. The die successfully produced aluminum and steel panels without splits or wrinkles. Detailed surface strain and thickness measurements were made at selected sections to cover a wide range of deformation patterns, from the uniaxial tension mode to the bi-axial tension mode. The springback measurements were made using a CMM machine along the part's hem edge, which is critical to correct dimensional accuracy. It is expected that the data obtained will provide a useful source for forming and springback studies on future automotive panels

  17. The level 1 and 2 specification for parallel benchmark and a benchmark test of scalar-parallel computer SP2 based on the specifications

    International Nuclear Information System (INIS)

    Orii, Shigeo

    1998-06-01

    A benchmark specification for the performance evaluation of parallel computers for numerical analysis is proposed. The Level 1 benchmark, a conventional benchmark based on processing time, measures the performance of computers running a code. The Level 2 benchmark proposed in this report explains the reasons for that performance. As an example, the scalar-parallel computer SP2 is evaluated with this benchmark specification for the case of a molecular dynamics code. As a result, the main causes suppressing parallel performance are the maximum bandwidth and the start-up time of communication between nodes. In particular, the start-up time is proportional not only to the number of processors but also to the number of particles. (author)
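The start-up-time observation in this abstract fits the usual linear communication-cost model, with the start-up term growing in both the processor count and the particle count. A hedged sketch of such a model; the function `comm_time` and all coefficient values are illustrative assumptions, not measured SP2 parameters:

```python
def comm_time(p, n, message_bytes, bandwidth,
              alpha=1e-5, beta=1e-9, gamma=1e-10):
    """Estimated communication time per step, in seconds.

    Start-up term: alpha + beta*p + gamma*n, proportional to both the
    number of processors p and the number of particles n, as observed
    in the report. Transfer term: message_bytes / bandwidth.
    All coefficients here are made-up placeholders.
    """
    startup = alpha + beta * p + gamma * n
    transfer = message_bytes / bandwidth
    return startup + transfer

# Doubling processors and particles raises the start-up component
# while the transfer term stays fixed
t1 = comm_time(p=8, n=100_000, message_bytes=1_000_000, bandwidth=40e6)
t2 = comm_time(p=16, n=200_000, message_bytes=1_000_000, bandwidth=40e6)
print(f"{t1 * 1e3:.3f} ms -> {t2 * 1e3:.3f} ms")
```

A Level-2-style analysis would fit the coefficients to timings measured at several (p, n) points and read off which term dominates.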

  18. Quality benchmarking methodology: Case study of finance and culture industries in Latvia

    Directory of Open Access Journals (Sweden)

    Ieva Zemīte

    2011-01-01

    Full Text Available Political, socio-economic and cultural changes that have taken place in the world during recent years have influenced all spheres. Constant improvement is necessary to survive in competitive and shrinking markets. This sets high quality standards for the service industries. It is therefore important to compare quality criteria to ascertain which practices achieve superior performance levels. At present, companies in Latvia do not carry out mutual benchmarking, and as a result do not know how they rank against their peers in terms of quality; nor do they see the benefits of sharing information and of benchmarking. The purpose of this paper is to determine the criteria of qualitative benchmarking, and to investigate the use of benchmarking quality in service industries, particularly the finance and culture sectors in Latvia, in order to determine the key driving factors of quality, to explore internal and foreign benchmarks, and to reveal the full potential of input reduction and efficiency growth for the aforementioned industries. Case studies and other tools are used to define the readiness of a company for benchmarking. Certain key factors are examined for their impact on quality criteria. The results are based on research conducted in professional associations in the defined fields (insurance and theatre). Originality/value: this is the first study that adopts benchmarking models for measuring quality criteria and readiness for mutual comparison in the insurance and theatre industries in Latvia.

  19. High-Strength Composite Fabric Tested at Structural Benchmark Test Facility

    Science.gov (United States)

    Krause, David L.

    2002-01-01

    Large sheets of ultrahigh-strength fabric were put to the test at NASA Glenn Research Center's Structural Benchmark Test Facility. The material was stretched like a snare drum head until the last ounce of strength was reached, when it burst with a cacophonous release of tension. Along the way, the 3-ft square samples were also pulled, warped, tweaked, pinched, and yanked to predict the material's physical reactions to the many loads that it will experience during its proposed use. The material tested was a unique multi-ply composite fabric, reinforced with fibers that had a tensile strength eight times that of common carbon steel. The fiber plies were oriented at 0° and 90° to provide great membrane stiffness, and at 45° to provide an unusually high resistance to shear distortion. The fabric's heritage is in astronaut space suits and other NASA programs.

  20. Comparative Test Case Specification

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    This document includes a definition of the comparative test cases DSF200_3 and DSF200_4, which were previously described in the comparative test case specification for the test cases DSF100_3 and DSF200_3 [Ref.1].

  1. Reactor physics tests and benchmark analyses of STACY

    International Nuclear Information System (INIS)

    Miyoshi, Yoshinori; Umano, Takuya

    1996-01-01

    The Static Experiment Critical Facility (STACY) in the Nuclear Fuel Cycle Safety Engineering Research Facility (NUCEF) is a solution-type critical facility for accumulating fundamental criticality data on uranyl nitrate solution, plutonium nitrate solution and their mixtures. A series of critical experiments has been performed for 10 wt% enriched uranyl nitrate solution using a cylindrical core tank. In these experiments, systematic data on the critical height, differential reactivity of the fuel solution, kinetic parameters and reactor power were measured while changing the uranium concentration of the fuel solution from 313 gU/l to 225 gU/l. Critical data from the first series of experiments for the basic core are reported in this paper for evaluating the accuracy of criticality safety calculation codes. Benchmark calculations of the neutron multiplication factor k eff for the critical condition were made using the neutron transport code TWOTRAN in the SRAC system and the continuous-energy Monte Carlo code MCNP 4A with the Japanese evaluated nuclear data library JENDL 3.2. (J.P.N.)

  2. Testing and Benchmarking a 2014 GM Silverado 6L80 Six Speed Automatic Transmission

    Science.gov (United States)

    Describes the method and test results of EPA’s partial transmission benchmarking process, which involves installing both the engine and transmission in an engine dynamometer test cell, with the engine wire harness tethered to its vehicle parked outside the test cell.

  3. Reactor benchmarks and integral data testing and feedback into ENDF/B-VI

    International Nuclear Information System (INIS)

    McKnight, R.D.; Williams, M.L.

    1992-01-01

    The role of integral data testing and its feedback into the ENDF/B evaluated nuclear data files is reviewed. The use of the CSEWG reactor benchmarks in the data testing process is discussed, and selected results based on ENDF/B Version VI data are presented. Finally, recommendations are given to improve the implementation of future integral data testing of ENDF/B.

  4. BUGLE-93 (ENDF/B-VI) cross-section library data testing using shielding benchmarks

    International Nuclear Information System (INIS)

    Hunter, H.T.; Slater, C.O.; White, J.E.

    1994-01-01

    Several integral shielding benchmarks were selected for data testing of new multigroup cross-section libraries compiled from ENDF/B-VI data for light water reactor (LWR) shielding and dosimetry. The new multigroup libraries, BUGLE-93 and VITAMIN-B6, were studied to establish their reliability and response to the benchmark measurements by use of the radiation transport codes ANISN and DORT. Direct comparisons of BUGLE-93 and VITAMIN-B6 to BUGLE-80 (ENDF/B-IV) and VITAMIN-E (ENDF/B-V) were also performed. Some benchmarks involved the nuclides used in LWR shielding and dosimetry applications, and some were sensitive to specific nuclear data, e.g. iron, due to its dominant use in nuclear reactor systems and its complex set of cross-section resonances. Five shielding benchmarks (four experimental and one calculational) are described and results are presented.

  5. Proton Exchange Membrane Fuel Cell Engineering Model Powerplant. Test Report: Benchmark Tests in Three Spatial Orientations

    Science.gov (United States)

    Loyselle, Patricia; Prokopius, Kevin

    2011-01-01

    Proton exchange membrane (PEM) fuel cell technology is the leading candidate to replace the aging alkaline fuel cell technology, currently used on the Shuttle, for future space missions. This test effort marks the final phase of a 5-yr development program that began under the Second Generation Reusable Launch Vehicle (RLV) Program, transitioned into the Next Generation Launch Technologies (NGLT) Program, and continued under Constellation Systems in the Exploration Technology Development Program. Initially, the engineering model (EM) powerplant was evaluated with respect to its performance as compared to acceptance tests carried out at the manufacturer. This was to determine the sensitivity of the powerplant performance to changes in test environment. In addition, a series of tests were performed with the powerplant in the original standard orientation. This report details the continuing EM benchmark test results in three spatial orientations as well as extended duration testing in the mission profile test. The results from these tests verify the applicability of PEM fuel cells for future NASA missions. The specifics of these different tests are described in the following sections.

  6. World-Wide Benchmarking of ITER Nb$_{3}$Sn Strand Test Facilities

    CERN Document Server

    Jewell, MC; Takahashi, Yoshikazu; Shikov, Alexander; Devred, Arnaud; Vostner, Alexander; Liu, Fang; Wu, Yu; Jewell, Matthew C; Boutboul, Thierry; Bessette, Denis; Park, Soo-Hyeon; Isono, Takaaki; Vorobieva, Alexandra; Martovetsky, Nicolai; Seo, Kazutaka

    2010-01-01

    The world-wide procurement of Nb$_{3}$Sn and NbTi for the ITER superconducting magnet systems will involve eight to ten strand suppliers from six Domestic Agencies (DAs) on three continents. To ensure accurate and consistent measurement of the physical and superconducting properties of the composite strand, a strand test facility benchmarking effort was initiated in August 2008. The objectives of this effort are to assess and improve the superconducting strand test and sample preparation technologies at each DA and supplier, in preparation for the more than ten thousand samples that will be tested during ITER procurement. The present benchmarking includes tests for critical current (I-c), n-index, hysteresis loss (Q(hys)), residual resistivity ratio (RRR), strand diameter, Cu fraction, twist pitch, twist direction, and metal plating thickness (Cr or Ni). Nineteen participants from six parties (China, EU, Japan, South Korea, Russia, and the United States) have participated in the benchmarking. This round, cond...

  7. Teacher and headmaster attitudes towards benchmarking and high-stakes testing in adult teaching in Denmark

    DEFF Research Database (Denmark)

    Petersen, Karen Bjerg

    Based on research, surveys and interviews, the paper traces teacher and headmaster attitudes towards the introduction of benchmarking and high-stakes language testing in the wake of a neo-liberal education policy in adult teaching for migrants in Denmark in the 2000s. The findings show that the majority of teachers and headmasters reject benchmarking. Meanwhile, according to both headmasters and language teachers, the introduction of high-stakes language testing has had an immense impact on the organization, content and quality of adult language teaching: ... students, reduced use of both project work and non-test-related activities, and stressful working conditions. On the one side teachers do not necessarily...

  8. Benchmark Tests for Stirling Convertor Heater Head Life Assessment Conducted

    Science.gov (United States)

    Krause, David L.; Halford, Gary R.; Bowman, Randy R.

    2004-01-01

    A new in-house test capability has been developed at the NASA Glenn Research Center, where a critical component of the Stirling Radioisotope Generator (SRG) is undergoing extensive testing to aid the development of analytical life prediction methodology and to experimentally aid in verification of the flight-design component's life. The new facility includes two test rigs that are performing creep testing of the SRG heater head pressure vessel test articles at design temperature and with wall stresses ranging from operating level to seven times that (see the following photograph).

  9. Molecular Line Emission from Multifluid Shock Waves. I. Numerical Methods and Benchmark Tests

    Science.gov (United States)

    Ciolek, Glenn E.; Roberge, Wayne G.

    2013-05-01

    We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are much less than the magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests confirming its accuracy are presented, including (1) MHD wave packets ranging over orders of magnitude in length- and timescales, (2) early evolution of multifluid shocks caused by two colliding clouds, and (3) a multifluid shock with mass transfer between the fluids by cosmic-ray ionization and ion-electron recombination, demonstrating the effect of ion mass loading on magnetic precursors of MHD shocks. An exact solution to an MHD Riemann problem forming the basis for an approximate numerical solver used in the homogeneous part of our scheme is presented, along with derivations of the analytic benchmark solutions and tests showing the convergence of the numerical algorithm.

  10. MOLECULAR LINE EMISSION FROM MULTIFLUID SHOCK WAVES. I. NUMERICAL METHODS AND BENCHMARK TESTS

    International Nuclear Information System (INIS)

    Ciolek, Glenn E.; Roberge, Wayne G.

    2013-01-01

    We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are << magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests confirming its accuracy are presented, including (1) MHD wave packets ranging over orders of magnitude in length- and timescales, (2) early evolution of multifluid shocks caused by two colliding clouds, and (3) a multifluid shock with mass transfer between the fluids by cosmic-ray ionization and ion-electron recombination, demonstrating the effect of ion mass loading on magnetic precursors of MHD shocks. An exact solution to an MHD Riemann problem forming the basis for an approximate numerical solver used in the homogeneous part of our scheme is presented, along with derivations of the analytic benchmark solutions and tests showing the convergence of the numerical algorithm.
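The operator-splitting idea described above — advancing the homogeneous (advection) part with a Godunov-type method and the source terms separately — can be sketched in one dimension for a scalar model equation. This is a minimal illustration of the splitting technique only, not the authors' multifluid MHD scheme; the wave speed, decay rate, and initial profile are invented:

```python
import numpy as np

def advect_step(u, c, dx, dt):
    """Homogeneous part: first-order upwind (Godunov) step for u_t + c u_x = 0, c > 0."""
    un = u.copy()
    un[1:] = u[1:] - c * dt / dx * (u[1:] - u[:-1])
    un[0] = u[0]  # simple inflow boundary
    return un

def source_step(u, k, dt):
    """Source part: exact integration of u_t = -k u over one step."""
    return u * np.exp(-k * dt)

def split_step(u, c, k, dx, dt):
    """Strang splitting: half source step, full advection step, half source step."""
    u = source_step(u, k, dt / 2)
    u = advect_step(u, c, dx, dt)
    u = source_step(u, k, dt / 2)
    return u

# Usage: advect and decay a Gaussian pulse for 50 CFL-limited steps.
nx, dx, c, k = 100, 0.01, 1.0, 0.5
dt = 0.5 * dx / c  # CFL number 0.5
u = np.exp(-((np.arange(nx) * dx - 0.3) ** 2) / 0.005)
for _ in range(50):
    u = split_step(u, c, k, dx, dt)
```

Strang (symmetric) splitting keeps the combined scheme second-order accurate in time even though each sub-step is applied separately.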

  11. Benchmarking of direct and indirect friction tests in micro forming

    DEFF Research Database (Denmark)

    Eriksen, Rasmus Solmer; Calaon, Matteo; Arentoft, M.

    2012-01-01

    The sizeable increase in metal forming friction at micro scale, due to the existence of size effects, constitutes a barrier to the realization of industrial micro forming processes. In the quest for improved frictional conditions in micro scale forming operations, friction tests are applied to qualify the tribological performance of the particular forming scenario. In this work the application of a simulative sliding friction test at micro scale is studied. The test setup makes it possible to measure the coefficient of friction as a function of the sliding motion. The results confirm a sizeable increase in the coefficient of friction when the work piece size is scaled down. © (2012) Trans Tech Publications.

  12. Titanium Aluminide Scramjet Inlet Flap Subelement Benchmark Tested

    Science.gov (United States)

    Krause, David L.; Draper, Susan L.

    2005-01-01

    A subelement-level ultimate strength test was completed successfully at the NASA Glenn Research Center (http://www.nasa.gov/glenn/) on a large gamma titanium aluminide (TiAl) inlet flap demonstration piece. The test subjected the part to prototypical stress conditions by using unique fixtures that allowed both loading and support points to be located remote to the part itself (see the photograph). The resulting configuration produced shear, moment, and the consequent stress topology proportional to the design point. The test was conducted at room temperature, a harsh condition for the material because of reduced available ductility. Still, the peak experimental load-carrying capability exceeded original predictions.

  13. The National Benchmark Test of quantitative literacy: Does it ...

    African Journals Online (AJOL)

    determine whether Grade 12 learners have mastered subject knowledge at the ... the NSC Mathematical Literacy examination and the Quantitative Literacy test of the ... Method. Sample. The sample for this study consisted of 6,363 Grade 12 ...

  14. Budget-aware random testing with T3: benchmarking at the SBST2016 testing tool contest

    NARCIS (Netherlands)

    Prasetya, S.W.B.

    2016-01-01

    Random testing has the advantage that it is usually fast. An interesting use case is to use it for bulk smoke testing, e.g. to smoke test a whole project. However, on a large project, even with random testing it may still take hours to complete. To optimize this, we have adapted an automated random

  15. The National Benchmark Test of quantitative literacy: Does it ...

    African Journals Online (AJOL)

    This article explores the relationship between these two standardised assessments in the domain of mathematical/quantitative literacy. This is accomplished through a Pearson correlation analysis of 6,363 test scores obtained by Grade 12 learners on the NSC Mathematical Literacy examination and the Quantitative ...
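The Pearson correlation analysis the abstract describes can be sketched as follows. The two score arrays here are synthetic stand-ins (the actual NSC Mathematical Literacy and NBT Quantitative Literacy scores are not public in this record), so only the method, not the resulting number, reflects the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6363  # sample size reported in the abstract

# Synthetic stand-ins for the two score sets, on a 0-100 scale:
nsc_ml = rng.normal(55, 15, n).clip(0, 100)                  # exam marks
nbt_ql = (0.6 * nsc_ml + rng.normal(0, 10, n)).clip(0, 100)  # correlated test scores

# Pearson correlation coefficient r = cov(x, y) / (sd(x) * sd(y))
r = np.corrcoef(nsc_ml, nbt_ql)[0, 1]
print(f"Pearson r = {r:.3f}")
```

A high r would indicate the two standardised assessments rank learners similarly; it would not by itself show that they measure the same construct.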

  16. The National Benchmark Test of quantitative literacy: Does it ...

    African Journals Online (AJOL)

    This is accomplished through a Pearson correlation analysis of 6,363 test scores ... ML and NBT QL, followed by a correlation analysis of a sample of 6,363 scores for ... probability and chance; and 6) data representation ... cognitive levels, through the use of easy to more ... unlikely to result in gender bias that could dilute.

  17. Mechanical simulations of sandia II tests OECD ISP 48 benchmark

    International Nuclear Information System (INIS)

    Ghavamian, Sh.; Courtois, A.; Valfort, J.-L.; Heinfling, G.

    2005-01-01

    This paper illustrates the work carried out by EDF within the framework of the ISP 48 post-test analysis of the NUPEC/NRC 1:4-scale model of a prestressed concrete containment vessel of a nuclear power plant. EDF, as a participant in International Standard Problem No. 48, has performed several simulations to determine the ultimate response of the scale model. To determine the most influential parameters in such an analysis, several studies were carried out. The mesh was built using a parametric tool to measure the influence of discretization on the results. Different material laws for concrete were also used. The purpose of this paper is to illustrate the ultimate behaviour of the SANDIA II model obtained by Code_Aster, with comparison to test records, and also to share the lessons learned from the parametric computations and the precautions that must be taken. (authors)

  18. Benchmark tests for a Formula SAE Student car prototyping

    Science.gov (United States)

    Mariasiu, Florin

    2011-12-01

    Aerodynamic characteristics of a vehicle are important elements in its design and construction. A low drag coefficient brings significant fuel savings and increased engine power efficiency. Designing and developing vehicles through computer simulation to determine their aerodynamic characteristics relies on dedicated CFD (Computational Fluid Dynamics) software packages. However, the results obtained by this faster and cheaper method are usually validated by experiments in wind tunnels, which are expensive and where complex testing equipment is used at relatively high cost. Therefore, the emergence and development of new low-cost testing methods to validate CFD simulation results would bring great economic benefits to the vehicle prototyping process. This paper presents the initial development process of a Formula SAE Student race-car prototype using CFD simulation, and also presents a measurement system based on low-cost sensors through which the CFD simulation results were experimentally validated. The CFD software package used for simulation was SolidWorks with the FloXpress add-on, and the experimental measurement system was built using four piezoresistive force sensors of the FlexiForce type.
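A low-cost force-sensor rig of the kind described yields a drag coefficient through the standard relation Cd = 2F / (ρ v² A). The sketch below shows that conversion; all numerical values (speed, frontal area, sensor readings) are hypothetical, not measurements from the paper:

```python
def drag_coefficient(force_n, air_density, velocity, frontal_area):
    """Drag coefficient from a measured drag force: Cd = 2 F / (rho * v^2 * A)."""
    return 2.0 * force_n / (air_density * velocity**2 * frontal_area)

# Hypothetical numbers for a Formula SAE-sized car:
rho = 1.225   # kg/m^3, air density at sea level
v = 20.0      # m/s, test speed
area = 1.1    # m^2, frontal area
total_force = sum([28.0, 31.5, 30.2, 29.8])  # N, four force-sensor readings

cd = drag_coefficient(total_force, rho, v, area)
print(f"Cd = {cd:.2f}")
```

Summing the four sensor readings assumes the sensors jointly carry the whole longitudinal load; a real rig would also need calibration against a known force.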

  19. IVO participation in IAEA benchmark for VVER-type nuclear power plants seismic analysis and testing

    International Nuclear Information System (INIS)

    Varpasuo, P.

    1997-12-01

    This study is a part of the IAEA coordinated research program 'Benchmark study for the Seismic Analysis and Testing of VVER Type NPPs'. The study reports the numerical simulation of the blast tests for the Paks and Kozloduy nuclear power plants, beginning from the recorded free-field response and computing the structural response at various points inside the reactor building. The full-scale blast tests of the Paks and Kozloduy NPPs took place in December 1994 and in July 1996. During the tests the plants operated normally. The instrumentation for the tests consisted of 52 recording channels with a 200 Hz sampling rate. The blast tests were carried out by detonating 100 kg charges in 50-meter-deep boreholes at a distance of 2.5 km from the plant. The 3D structural models of both reactor buildings were analyzed in the frequency domain. The number of modes extracted in both cases was about 500 and the cut-off frequency was 25 Hz. In the response history run the responses of the selected points were evaluated. The input values for the response history run were the three components of the excitation, which were transformed from the time domain to the frequency domain with the aid of the Fourier transform. The analysis was carried out in the frequency domain, and the responses were transferred back to the time domain with the inverse Fourier transform. The Paks and Kozloduy blast tests produced a wealth of information on the behavior of nuclear power plant structures excited by blast-type loads containing also a low-frequency wave train, albeit with small energy content. The comparison of measured and calculated results gave information about the suitability of the selected analysis approach for the investigated blast-type loading.
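The frequency-domain procedure described (Fourier-transform the recorded excitation, apply the structural transfer function, inverse-transform back to the time domain) can be sketched with a single-degree-of-freedom oscillator standing in for the 3D building model. The natural frequency, damping ratio, and synthetic excitation below are assumed values, not data from the benchmark:

```python
import numpy as np

fs = 200.0                    # Hz, sampling rate mentioned in the abstract
t = np.arange(0, 4.0, 1 / fs)
# Synthetic blast-like excitation record (decaying sinusoid), a stand-in
# for one recorded free-field component:
excitation = np.exp(-5 * t) * np.sin(2 * np.pi * 8 * t)

# Single-DOF oscillator transfer function H(f), a stand-in for the modal model:
fn, zeta = 6.0, 0.05          # assumed natural frequency (Hz) and damping ratio
f = np.fft.rfftfreq(t.size, 1 / fs)
beta = f / fn
H = 1.0 / ((1 - beta**2) + 2j * zeta * beta)

# Frequency-domain solve: FFT -> multiply by H -> inverse FFT
response = np.fft.irfft(np.fft.rfft(excitation) * H, n=t.size)
```

In the real analysis this multiplication is done per mode (about 500 modes up to the 25 Hz cut-off) for each of the three excitation components, and the modal responses are superposed.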

  20. Benchmark Tests on the New IBM RISC System/6000 590 Workstation

    Directory of Open Access Journals (Sweden)

    Harvey J. Wasserman

    1995-01-01

    The results of benchmark tests on the superscalar IBM RISC System/6000 Model 590 are presented. A set of well-characterized Fortran benchmarks spanning a range of computational characteristics was used for the study. The data from the 590 system are compared with those from a single-processor CRAY C90 system as well as with other microprocessor-based systems, such as the Digital Equipment Corporation AXP 3000/500X and the Hewlett-Packard HP/735.

  1. Benchmark testing of CENDL-2 for U-fuel thermal reactors

    International Nuclear Information System (INIS)

    Zhang Baocheng; Liu Guisheng; Liu Ping

    1995-01-01

    Based on CENDL-2, the NJOY-WIMS code system was used to generate 69-group constants and to perform benchmark testing for TRX-1 and -2; BAPL-UO2-1, -2 and -3; and ZEEP-1, -2 and -3. All the results proved that CENDL-2 is reliable for thermal reactor calculations. (3 tabs.)
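Thermal benchmark results of this kind are commonly reported as C/E (calculated-to-experimental) ratios; for a critical assembly the experimental keff is 1 by definition, so C/E is just the calculated keff. The keff values below are invented placeholders, not the CENDL-2 results:

```python
# Hypothetical calculated keff values for the benchmark lattices (NOT the
# actual CENDL-2 results reported in the paper):
calculated_keff = {
    "TRX-1": 0.9981, "TRX-2": 1.0004,
    "BAPL-UO2-1": 0.9992, "BAPL-UO2-2": 1.0010, "BAPL-UO2-3": 0.9975,
}

# C/E against the critical condition (E = 1.0), with the deviation in pcm
# (1 pcm = 1e-5 in reactivity units):
ce = {name: keff / 1.0 for name, keff in calculated_keff.items()}
for name, value in ce.items():
    print(f"{name:12s} C/E = {value:.4f}  ({(value - 1.0) * 1e5:+.0f} pcm)")
```

A library is usually judged acceptable for thermal lattice work when such C/E values cluster within a few hundred pcm of unity without a systematic trend across lattice types.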

  2. Benchmark testing and independent verification of the VS2DT computer code

    International Nuclear Information System (INIS)

    McCord, J.T.

    1994-11-01

    The finite difference flow and transport simulator VS2DT was benchmark tested against several other codes which solve the same equations (Richards equation for flow and the advection-dispersion equation for transport). The benchmark problems investigated transient two-dimensional flow in a heterogeneous soil profile with a localized water source at the ground surface. The VS2DT code performed as well as or better than all other codes when considering mass balance characteristics and computational speed. It was also rated highly relative to the other codes with regard to ease of use. Following the benchmark study, the code was verified against two analytical solutions, one for two-dimensional flow and one for two-dimensional transport. These independent verifications show reasonable agreement with the analytical solutions, and complement the one-dimensional verification problems published in the code's original documentation.

  3. Benchmark test of JEF-1 evaluation by calculating fast criticalities

    International Nuclear Information System (INIS)

    Pelloni, S.

    1986-06-01

    The JEF-1 basic evaluation was tested by calculating fast critical experiments using the discrete-ordinates transport code ONEDANT with a P3S16 approximation. In each computation a one-dimensional spherical model was used, together with a 174-neutron-group, VITAMIN-E-structured, JEF-1-based nuclear data library generated at EIR with NJOY and TRANSX-CTR. It is found that the JEF-1 evaluation gives accurate results comparable with ENDF/B-V, and that eigenvalues agree well within 10 mk, whereas reaction rates deviate by up to 10% from the experiment. The U-233 total and fission cross sections seem to be underestimated in the JEF-1 evaluation in the fast energy range between 0.1 and 1 MeV. This confirms a previous analysis based on diffusion theory with 71 neutron groups, performed by H. Takano and E. Sartori at the NEA Data Bank. (author)

  4. A subchannel and CFD analysis of void distribution for the BWR fuel bundle test benchmark

    International Nuclear Information System (INIS)

    In, Wang-Kee; Hwang, Dae-Hyun; Jeong, Jae Jun

    2013-01-01

    Highlights: ► We analyzed subchannel void distributions using subchannel, system and CFD codes. ► The mean error and standard deviation at steady states were compared. ► The deviation of the CFD simulation was greater than those of the others. ► The large deviation of the CFD prediction is due to interface model uncertainties. -- Abstract: The subchannel-grade and microscopic void distributions in the NUPEC (Nuclear Power Engineering Corporation) BFBT (BWR Full-Size Fine-Mesh Bundle Tests) facility have been evaluated with the subchannel analysis code MATRA, the system code MARS and the CFD code CFX-10. Sixteen test series from five different test bundles were selected for the analysis of the steady-state subchannel void distributions. Four test cases for a high-burn-up 8 × 8 fuel bundle with a single water rod were simulated using CFX-10 for the microscopic void distribution benchmark. Two transient cases, a turbine trip without a bypass as a typical power transient and a re-circulation pump trip as a flow transient, were also chosen for this analysis. It was found that the steady-state void distributions calculated by both the MATRA and MARS codes coincided well with the measured data in the range of thermodynamic qualities from 5 to 25%. The results of the transient calculations were also similar to each other and very reasonable. The CFD simulation reproduced the overall radial void distribution trend, which produces less vapor in the central part of the bundle and more vapor in the periphery. However, the predicted variation of the void distribution inside the subchannels is small, while the measured one is large, showing a very high concentration in the center of the subchannels. The variations of the void distribution between the center of the subchannels and the subchannel gap are estimated to be about 5–10% for the CFD prediction and more than 20% for the experiment.

  5. Determining standard of academic potential based on the Indonesian Scholastic Aptitude Test (TBS benchmark

    Directory of Open Access Journals (Sweden)

    Idwin Irma Krisna

    2016-12-01

    The aim of this article is to classify the Indonesian Scholastic Aptitude Test (Tes Bakat Skolastik, TBS) results for each subtest and to describe scholastic aptitude in each subtest. The subjects of this study were 36,125 prospective students who took the selection test at several universities. Data analysis began by estimating examinees' ability using Item Response Theory, followed by a benchmarking process using the scale anchoring method, implemented with ASP.net web server technology. The results of this research are four benchmarks (based on cutoff scores) for each subtest, the characteristics that differentiate potential at each benchmark, and the measurement error at each benchmark. The items netted give a clear description of scholastic aptitude potential and indicate uniqueness, so that they can distinguish the difference in potential between a lower bench and a higher bench. At a higher bench, a higher level of reasoning power is required in analyzing and processing the needed information, so that the individual concerned can solve problems with the right solution. The items netted at a lower bench in the three subtests tend to be few, so that the measurement error at such a bench tends to be higher than that at a higher bench.
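The benchmarks produced by scale anchoring amount to cutoff scores on the ability (theta) scale. A minimal sketch of classifying ability estimates into four bands follows; the cutoff values are invented for illustration and are not the actual TBS cutoffs:

```python
import bisect

# Hypothetical cutoffs on the IRT ability (theta) scale separating the four
# benchmark bands (NOT the actual TBS cutoff scores):
cutoffs = [-1.0, 0.0, 1.0]   # boundaries between bands 1|2, 2|3, 3|4
labels = ["benchmark 1", "benchmark 2", "benchmark 3", "benchmark 4"]

def classify(theta):
    """Place an ability estimate into its benchmark band via binary search."""
    return labels[bisect.bisect_right(cutoffs, theta)]

print(classify(-1.5))  # → benchmark 1
print(classify(0.3))   # → benchmark 3
```

In scale anchoring, each band is then characterized by the items that examinees at that bench answer correctly with high probability, which is what yields the verbal description of each benchmark.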

  6. Benchmark physics tests in the metallic-fuelled assembly ZPPR-15

    International Nuclear Information System (INIS)

    McFarlane, H.F.; Brumbach, S.B.; Carpenter, S.G.; Collins, P.J.

    1987-01-01

    In the last two years a shift in emphasis to inherent safety and economic competitiveness has led to a resurgence in US interest in metallic-alloy fuels for LMRs. Argonne National Laboratory initiated an extensive testing program for metallic-fuelled LMR technology that has included benchmark physics as one component. The tests done in the ZPPR-15 Program produced the first physics results in over 20 years for a metal-composition LMR core

  7. Comparative Test Case Specification

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    This document includes the specification for the IEA task of evaluating building energy simulation computer programs for Double Skin Facade (DSF) constructions. Two approaches are involved in this procedure: one is the comparative approach and the other is the empirical one. In the comparative approach the outcomes of different software tools are compared, while in the empirical approach the modelling results are compared with the results of experimental test cases. The comparative test cases include: ventilation, shading and geometry.

  8. Detailed benchmark test of JENDL-4.0 iron data for fusion applications

    Energy Technology Data Exchange (ETDEWEB)

    Konno, Chikara, E-mail: konno.chikara@jaea.go.jp [Japan Atomic Energy Agency, Tokai-Mura, Ibaraki-ken, 319-1195 (Japan); Wada, Masayuki [Japan Computer System, Mito, 310-0805 (Japan); Kondo, Keitaro; Ohnishi, Seiki; Takakura, Kosuke; Ochiai, Kentaro; Sato, Satoshi [Japan Atomic Energy Agency, Tokai-Mura, Ibaraki-ken, 319-1195 (Japan)

    2011-10-15

    The major revised version of the Japanese Evaluated Nuclear Data Library (JENDL), JENDL-4.0, was released in May 2010. As one of the benchmark tests, we have carried out a benchmark test of the JENDL-4.0 iron data, which are very important for radiation shielding in fusion reactors, by analyzing the iron fusion neutronics integral experiments (in situ and time-of-flight (TOF) experiments) at JAEA/FNS. It is demonstrated that the problems of the iron data in the previous version of JENDL, JENDL-3.3, are solved in JENDL-4.0: the first inelastic scattering cross section data of 57Fe and the angular distribution of the elastic scattering of 56Fe. The iron data in JENDL-4.0 are comparable to, or are partly better than, those in ENDF/B-VII.0 and JEFF-3.1.

  9. Testing of the PELSHIE shielding code using Benchmark problems and other special shielding models

    International Nuclear Information System (INIS)

    Language, A.E.; Sartori, D.E.; De Beer, G.P.

    1981-08-01

    The PELSHIE shielding code for gamma rays from point and extended sources was written in 1971 and a revised version was published in October 1979. At Pelindaba the program is used extensively due to its flexibility and ease of use for a wide range of problems. Testing PELSHIE results against the results of a range of models and so-called benchmark problems is desirable to determine possible weaknesses in PELSHIE. Benchmark problems, experimental data, and shielding models, some of which were solved by the discrete-ordinates method with the ANISN and DOT 3.5 codes, were used for the efficiency test. The description of the models followed the pattern of a classical shielding problem. After the intercomparison with six different models, the usefulness of the PELSHIE code was quantitatively determined.
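PELSHIE is a point-kernel gamma shielding code; the abstract does not give its algorithms, but the generic point-kernel calculation it belongs to — exponential attenuation times a buildup factor over the 4πr² geometric spread — can be sketched as follows. The linear buildup model and all numerical values (source strength, attenuation coefficient, slab thickness, distance) are hypothetical illustrations, not PELSHIE data:

```python
import math

def point_kernel_flux(source_gamma_s, mu_per_cm, shield_cm, distance_cm):
    """Shielded flux from an isotropic point source:
    phi = S * B * exp(-mu*t) / (4*pi*r^2),
    using a crude linear buildup factor B = 1 + mu*t as a stand-in for
    the tabulated buildup data a real code would interpolate."""
    mfp = mu_per_cm * shield_cm          # shield thickness in mean free paths
    buildup = 1.0 + mfp
    return source_gamma_s * buildup * math.exp(-mfp) / (4.0 * math.pi * distance_cm**2)

# Hypothetical case: 1e10 gammas/s of ~1.25 MeV behind 5 cm of lead
# (mu ~ 0.66 /cm), detector 100 cm from the source:
flux = point_kernel_flux(1.0e10, 0.66, 5.0, 100.0)
print(f"flux ~ {flux:.3e} gammas/cm^2/s")
```

Benchmarking such a code against discrete-ordinates results (as done here with ANISN and DOT 3.5) mainly probes the adequacy of the buildup-factor treatment, since the uncollided term is exact.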

  10. Benchmark testing of CENDL-2.1 for heavy water reactor

    International Nuclear Information System (INIS)

    Liu Ping

    1999-01-01

    The new version of the evaluated nuclear data library, ENDF/B-VI.5, has been released recently. In order to compare the quality of the evaluated nuclear data of CENDL-2.1 with ENDF/B-VI.5, it is necessary to do benchmark testing for them. In this work, CENDL-2.1 and ENDF/B-VI.5 were each used to generate the WIMS 69-group library, and benchmark testing was done for the heavy water reactor using the WIMS5A code. The data files of CENDL-2.1 are clearly better than those of the old WIMS library for heavy water reactor calculations, and are in good agreement with those of ENDF/B-VI.5.

  11. Benchmarking healthcare logistics processes: a comparative case study of Danish and US hospitals

    DEFF Research Database (Denmark)

    Feibert, Diana Cordes; Andersen, Bjørn; Jacobsen, Peter

    2017-01-01

    Logistics processes in hospitals are vital in the provision of patient care. Improving healthcare logistics processes provides an opportunity for reduced healthcare costs and better support of clinical processes. Hospitals are faced with increasing healthcare costs around the world, and improvement initiatives prevalent in manufacturing industries, such as lean, business process reengineering and benchmarking, have seen an increase in use in healthcare. This study investigates how logistics processes in a hospital can be benchmarked to improve process performance. A comparative case study of the bed logistics process and the pharmaceutical distribution process was conducted at a Danish and a US hospital. The case study results identified decision criteria for designing efficient and effective healthcare logistics processes. The most important decision criteria were related to quality, security...

  12. Copper benchmark experiment for the testing of JEFF-3.2 nuclear data for fusion applications

    OpenAIRE

    Angelone, M.; Flammini, D.; Loreti, S.; Moro, F.; Pillon, M.; Villar, R.; Klix, A.; Fischer, U.; Kodeli, I.; Perel, R.L.; Pohorecky, W.

    2017-01-01

    A neutronics benchmark experiment on a pure Copper block (dimensions 60 × 70 × 70 cm3) aimed at testing and validating the recent nuclear data libraries for fusion applications was performed in the frame of the European Fusion Program at the 14 MeV ENEA Frascati Neutron Generator (FNG). Reaction rates, neutron flux spectra and doses were measured using different experimental techniques (e.g. activation foils techniques, NE213 scintillator and thermoluminescent detectors). This paper first sum...

  13. Benchmark analyses for EBR-II shutdown heat removal tests SHRT-17 and SHRT-45R

    Energy Technology Data Exchange (ETDEWEB)

    Mochizuki, Hiroyasu, E-mail: mochizki@u-fukui.ac.jp [Research Institute of Nuclear Engineering, University of Fukui (Japan); Muranaka, Kohmei; Asai, Takayuki [Graduate School of Engineering, University of Fukui (Japan); Rooijen, W.F.G. van, E-mail: rooijen@u-fukui.ac.jp [Research Institute of Nuclear Engineering, University of Fukui (Japan)

    2014-08-15

    Highlights: • The IAEA EBR-II benchmarks SHRT-17 and SHRT-45R are analyzed with a 1D system code. • The calculated results for SHRT-17 correspond well to the measured results. • For SHRT-45R, ERANOS is used for various core parameters and reactivity coefficients. • The SHRT-45R peak temperature is underestimated with the ERANOS feedback coefficients. • The peak temperature is well predicted when the feedback coefficient is reduced. - Abstract: Benchmark problems for several experiments in EBR-II, proposed by ANL and coordinated by the IAEA, are analyzed using the plant system code NETFLOW++ and the neutronics code ERANOS. The SHRT-17 test, conducted as a loss-of-flow test, is calculated using only the NETFLOW++ code because it is a purely thermal-hydraulic problem. The measured data were made available to the benchmark participants after the results of the blind benchmark calculations were submitted. Our work shows that the major parameters of the plant are predicted with good accuracy. The SHRT-45R test, an unprotected loss-of-flow test, is calculated using the NETFLOW++ code with the aid of delayed neutron data and reactivity coefficients calculated by the ERANOS code. These parameters are used in the NETFLOW++ code to perform a semi-coupled analysis of the neutronics and thermal-hydraulics problem. The measured data are compared with our calculated results. In our work, the peak temperature is underestimated, indicating that the reactivity feedback coefficients are too strong. When the reactivity feedback coefficient for thermal expansion is adjusted, good agreement is obtained in general for the calculated plant parameters, with a few exceptions.
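The semi-coupled approach described — system thermal-hydraulics driven by precomputed reactivity feedback coefficients — can be illustrated with a toy point-kinetics model. Every parameter below is invented for illustration (this is not EBR-II, NETFLOW++, or ERANOS data); the sketch only shows the qualitative effect discussed in the abstract: a stronger (more negative) temperature feedback coefficient caps the transient at a lower peak temperature.

```python
def run(alpha_t):
    """One-delayed-group point kinetics with a linear temperature feedback,
    integrated by explicit Euler. All parameters are illustrative only."""
    beta, lam, gen_time = 0.0068, 0.08, 1.0e-4  # delayed fraction, decay const (1/s), gen. time (s)
    rho_ext = 0.005                             # constant external reactivity (< beta: delayed supercritical)
    p = 1.0                                     # relative power
    c = beta / (gen_time * lam)                 # precursors at initial equilibrium
    t_cool, temp = 400.0, 400.0                 # coolant sink and lumped fuel temperature (deg C)
    dt, peak = 1.0e-3, 400.0
    for _ in range(20000):                      # 20 s of transient
        rho = rho_ext + alpha_t * (temp - t_cool)       # net reactivity with feedback
        dp = ((rho - beta) / gen_time * p + lam * c) * dt
        dc = (beta / gen_time * p - lam * c) * dt
        dtemp = (0.05 * p - 0.02 * (temp - t_cool)) * dt  # lumped heat balance
        p, c, temp = p + dp, c + dc, temp + dtemp
        peak = max(peak, temp)
    return peak

# A more negative feedback coefficient limits the temperature excursion:
peak_weak = run(-1.0e-5)    # weak feedback -> higher peak temperature
peak_strong = run(-5.0e-5)  # strong feedback -> lower peak temperature
```

This is the direction of the tuning described in the abstract: if the calculated peak temperature sits below the measurement, the feedback coefficients in use are too strong and must be reduced in magnitude.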

  14. Supporting the development of an internationalization plan through benchmarking (case: medaffcon)

    OpenAIRE

    Tuompo, Tuuli

    2014-01-01

In many cases, the small domestic market size in Finland forces organizations to start international operations in order to maintain growth. This thesis concentrates on finding success factors for the internationalization of Finnish-based, service-providing SMEs operating in B2B markets. The data was collected from 3 participating organizations using benchmarking techniques. During the corporate visit data was collected using non-structured interviews, allowing the industry respected p...

  15. Benchmark experiments to test plutonium and stainless steel cross sections. Topical report

    International Nuclear Information System (INIS)

    Jenquin, U.P.; Bierman, S.R.

    1978-06-01

The Nuclear Regulatory Commission (NRC) commissioned Battelle, Pacific Northwest Laboratory (PNL) to ascertain the accuracy of the neutron cross sections for the isotopes of plutonium and the constituents of stainless steel, and to determine whether improvements can be made in their application to criticality safety analysis. NRC's particular area of interest is the transportation of light-water reactor spent fuel assemblies. The project was divided into two tasks. The first task was to define a set of integral experimental measurements (benchmarks). The second task is to use these benchmarks in neutronics calculations such that the accuracy of the ENDF/B-IV plutonium and stainless steel cross sections can be assessed. The results of the first task are given in this report. A set of integral experiments most pertinent to testing the cross sections has been identified, and the code input data for calculating each experiment have been developed.

  16. Structural Benchmark Creep Testing for Microcast MarM-247 Advanced Stirling Convertor E2 Heater Head Test Article SN18

    Science.gov (United States)

    Krause, David L.; Brewer, Ethan J.; Pawlik, Ralph

    2013-01-01

    This report provides test methodology details and qualitative results for the first structural benchmark creep test of an Advanced Stirling Convertor (ASC) heater head of ASC-E2 design heritage. The test article was recovered from a flight-like Microcast MarM-247 heater head specimen previously used in helium permeability testing. The test article was utilized for benchmark creep test rig preparation, wall thickness and diametral laser scan hardware metrological developments, and induction heater custom coil experiments. In addition, a benchmark creep test was performed, terminated after one week when through-thickness cracks propagated at thermocouple weld locations. Following this, it was used to develop a unique temperature measurement methodology using contact thermocouples, thereby enabling future benchmark testing to be performed without the use of conventional welded thermocouples, proven problematic for the alloy. This report includes an overview of heater head structural benchmark creep testing, the origin of this particular test article, test configuration developments accomplished using the test article, creep predictions for its benchmark creep test, qualitative structural benchmark creep test results, and a short summary.

  17. Springback study in aluminum alloys based on the Demeri Benchmark Test : influence of material model

    International Nuclear Information System (INIS)

    Greze, R.; Laurent, H.; Manach, P. Y.

    2007-01-01

Springback is a serious problem in sheet metal forming. Its origin lies in the elastic recovery of the material after a deep drawing operation. Springback modifies the final shape of the part when it is removed from the die after forming. This study deals with springback in an Al5754-O aluminum alloy. An experimental test similar to the Demeri Benchmark Test has been developed. The experimentally measured springback is compared to the springback predicted by simulation using Abaqus software. Several material models are analyzed, all using isotropic hardening of the Voce type and plasticity criteria such as the von Mises and Hill48 yield criteria.

  18. Automated Test Case Generation

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I would like to present the concept of automated test case generation. I work on it as part of my PhD and I think it would be interesting also for other people. It is also the topic of a workshop paper that I am introducing in Paris. (abstract below) Please note that the talk itself would be more general and not about the specifics of my PhD, but about the broad field of Automated Test Case Generation. I would introduce the main approaches (combinatorial testing, symbolic execution, adaptive random testing) and their advantages and problems. (oracle problem, combinatorial explosion, ...) Abstract of the paper: Over the last decade code-based test case generation techniques such as combinatorial testing or dynamic symbolic execution have seen growing research popularity. Most algorithms and tool implementations are based on finding assignments for input parameter values in order to maximise the execution branch coverage. Only few of them consider dependencies from outside the Code Under Test’s scope such...

  19. Thermal lattice benchmarks for testing basic evaluated data files, developed with MCNP4B

    International Nuclear Information System (INIS)

    Maucec, M.; Glumac, B.

    1996-01-01

The development of unit cell and full reactor core models of the DIMPLE S01A, TRX-1, and TRX-2 benchmark experiments, using the Monte Carlo computer code MCNP4B, is presented. Nuclear data from the ENDF/B-V and -VI versions of the cross-section library were used in the calculations. In addition, a comparison is presented to results obtained with similar models and cross-section data from the EJ2-MCNPlib library (which is based upon the JEF-2.2 evaluation) developed at IRC Petten, Netherlands. The results of the criticality calculation with the ENDF/B-VI data library, and a comparison to results obtained using the JEF-2.2 evaluation, confirm the MCNP4B full core model of the DIMPLE reactor as a good benchmark for testing basic evaluated data files. On the other hand, the criticality calculation results obtained using the TRX full core models show less agreement with experiment. It is obvious that without additional data about the TRX geometry, our TRX models are not suitable as Monte Carlo benchmarks. (author)

  20. Definition and Analysis of Heavy Water Reactor Benchmarks for Testing New Wims-D Libraries

    International Nuclear Information System (INIS)

    Leszczynski, Francisco

    2000-01-01

This work is part of the IAEA-WIMS Library Update Project (WLUP). A group of heavy water reactor benchmarks has been selected for testing new WIMS-D libraries, including calculations with the WIMSD5B program and the analysis of results. These benchmarks cover a wide variety of reactors and conditions, from fresh fuels to high burnup, and from natural to enriched uranium. In addition, each benchmark includes variations in lattice pitch and in coolants (normally heavy water and void). Multiplication factors with critical experimental bucklings and other parameters are calculated and compared with experimental reference values. The WIMS libraries used for the calculations were generated with basic data from JEF-2.2 Rev.3 (JEF) and ENDF/B-VI Release 5 (E6). Results obtained with the WIMS-86 (W86) library, included with the WIMSD5B package, from Winfrith, UK, with adjusted data, are also included to show the improvements obtained with the new, non-adjusted libraries. The calculations with WIMSD5B were made with two methods (input program options): PIJ (two-dimensional collision probability method) and DSN (one-dimensional Sn method, with homogenization of materials by ring). The general conclusion is that the library based on JEF data and the DSN method gives the best results, which on average are acceptable.

  1. Analysis of CSNI benchmark test on containment using the code CONTRAN

    International Nuclear Information System (INIS)

    Haware, S.K.; Ghosh, A.K.; Raj, V.V.; Kakodkar, A.

    1994-01-01

A programme of experimental as well as analytical studies on the behaviour of nuclear reactor containment is being actively pursued. A large number of experiments on pressure and temperature transients have been carried out on a one-tenth scale model vapour suppression pool containment experimental facility, simulating the 220 MWe Indian Pressurised Heavy Water Reactors. A programme of development of computer codes is underway to enable prediction of containment behaviour under accident conditions. This includes codes for pressure and temperature transients, hydrogen behaviour, aerosol behaviour, etc. As a part of this ongoing work, the code CONTRAN (CONtainment TRansient ANalysis) has been developed for predicting the thermal hydraulic transients in a multicompartment containment. For the assessment of hydrogen behaviour, models for hydrogen transportation in a multicompartment configuration and hydrogen combustion have been incorporated in the code CONTRAN. The code also has models for the heat and mass transfer due to condensation and convective heat transfer. The structural heat transfer is modelled using the one-dimensional transient heat conduction equation. Extensive validation exercises have been carried out with the code CONTRAN. The code CONTRAN has been successfully used for the analysis of the benchmark test devised by the Committee on the Safety of Nuclear Installations (CSNI) of the Organisation for Economic Cooperation and Development (OECD) to test the numerical accuracy and convergence errors in the computation of mass and energy conservation for the fluid and in the computation of heat conduction in structural walls. The salient features of the code CONTRAN, a description of the CSNI benchmark test, and a comparison of the CONTRAN predictions with the benchmark test results are presented and discussed in the paper. (author)

  2. Establishing benchmarks and metrics for disruptive technologies, inappropriate and obsolete tests in the clinical laboratory.

    Science.gov (United States)

    Kiechle, Frederick L; Arcenas, Rodney C; Rogers, Linda C

    2014-01-01

    Benchmarks and metrics related to laboratory test utilization are based on evidence-based medical literature that may suffer from a positive publication bias. Guidelines are only as good as the data reviewed to create them. Disruptive technologies require time for appropriate use to be established before utilization review will be meaningful. Metrics include monitoring the use of obsolete tests and the inappropriate use of lab tests. Test utilization by clients in a hospital outreach program can be used to monitor the impact of new clients on lab workload. A multi-disciplinary laboratory utilization committee is the most effective tool for modifying bad habits, and reviewing and approving new tests for the lab formulary or by sending them out to a reference lab. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Present status on numerical algorithms and benchmark tests for point kinetics and quasi-static approximate kinetics

    International Nuclear Information System (INIS)

    Ise, Takeharu

    1976-12-01

Review studies have been made of numerical algorithms and benchmark tests for point kinetics and quasistatic approximate kinetics computer codes, in order to perform benchmark tests on space-dependent neutron kinetics codes efficiently. Point kinetics methods have been improved since they can be directly applied to factorization procedures. Methods based on Pade rational functions give numerically stable solutions, and matrix-splitting methods are of interest because they are applicable to direct integration methods. The improved quasistatic (IQ) approximation is the best and most practical method; it is shown numerically that the IQ method has high stability and precision, with a computation time about one tenth of that of the direct method. The IQ method is applicable to thermal reactors as well as fast reactors, and is especially suited to fast reactors, for which many time steps are necessary. Two-dimensional diffusion kinetics codes are the most practicable, though three-dimensional diffusion kinetics codes and two-dimensional transport kinetics codes also exist. In developing a space-dependent kinetics code, in any case, it is desirable to improve the method so as to achieve a high computing speed for solving the static diffusion and transport equations. (auth.)
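As background for readers, the point-kinetics equations reviewed in this record can be integrated directly in a few lines. The following is only an illustrative sketch with one delayed-neutron group and invented parameter values; it is not the IQ method or any code discussed in the report.

```python
# One-delayed-group point kinetics (illustrative parameters, not from the report):
#   dn/dt = ((rho - beta) / Lambda) * n + lam * C
#   dC/dt = (beta / Lambda) * n - lam * C
beta = 0.0065    # delayed-neutron fraction (typical order of magnitude)
Lambda = 1.0e-4  # neutron generation time in seconds (chosen large for Euler stability)
lam = 0.08       # effective precursor decay constant, 1/s

def integrate(rho, t_end, dt=1.0e-5):
    """Explicit-Euler integration from equilibrium initial conditions."""
    n = 1.0                        # relative power
    C = beta * n / (Lambda * lam)  # precursor level making dn/dt = dC/dt = 0
    for _ in range(int(round(t_end / dt))):
        dn = ((rho - beta) / Lambda) * n + lam * C
        dC = (beta / Lambda) * n - lam * C
        n += dt * dn
        C += dt * dC
    return n

# A small step reactivity insertion gives the familiar prompt jump
# followed by a slow rise on the stable period.
power_after = integrate(rho=0.001, t_end=0.5)
```

With these numbers the power settles near the prompt-jump estimate beta/(beta - rho) ≈ 1.18 within a fraction of a second; a direct method needs comparably small time steps throughout the transient, which is the cost the IQ factorization is designed to avoid.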

  4. A benchmark test of computer codes for calculating average resonance parameters

    International Nuclear Information System (INIS)

    Ribon, P.; Thompson, A.

    1983-01-01

    A set of resonance parameters has been generated from known, but secret, average values; the parameters have then been adjusted to mimic experimental data by including the effects of Doppler broadening, resolution broadening and statistical fluctuations. Average parameters calculated from the dataset by various computer codes are compared with each other, and also with the true values. The benchmark test is fully described in the report NEANDC160-U (NEA Data Bank Newsletter No. 27 July 1982); the present paper is a summary of this document. (Auth.)

  5. Benchmark physics tests in the metallic-fuelled assembly ZPPR-15

    International Nuclear Information System (INIS)

    McFarlane, H.F.; Brumbach, S.B.; Carpenter, S.G.; Collins, P.J.

    1987-01-01

Results of the first benchmark physics tests of a metallic-fueled, demonstration-size liquid metal reactor are reported. A simple, two-zone, cylindrical conventional assembly was built with three distinctly different compositions to represent the stages of the Integral Fast Reactor fuel cycle. Experiments included criticality, control, power distribution, reaction rate ratios, reactivity coefficients, shielding, kinetics and spectrum. Analysis was done with 3-D nodal diffusion calculations and ENDF/B-V.2 cross sections. Predictions of the ZPPR-15 reactor physics parameters agreed sufficiently well with the measured values to justify confidence in design analyses for metallic-fueled LMRs.

  6. Benchmark physics tests in the metallic-fueled assembly ZPPR-15

    International Nuclear Information System (INIS)

    McFarlane, H.F.; Brumbach, S.B.; Carpenter, S.G.; Collins, P.J.

    1989-01-01

    Results of the first benchmark physics tests of a metallic-fueled, demonstration-size liquid-metal reactor (LMR) are reported. A simple, two-zone, cylindrical conventional assembly was built with three distinctly different compositions to represent the stages of the Integral Fast Reactor fuel cycle. Experiments included criticality, control, power distribution, reaction rate ratios, reactivity coefficients, shielding, kinetics, and spectrum. Analysis was done with three-dimensional nodal diffusion calculations and ENDF/B-V.2 cross sections. Predictions of the ZPPR-15 reactor physics parameters agreed sufficiently well with the measured values to justify confidence in design analyses for metallic-fueled LMRs

  7. Empirical Test Case Specification

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

This document includes the empirical specification for the IEA task of evaluating building energy simulation computer programs for Double Skin Facade (DSF) constructions. There are two approaches involved in this procedure: one is the comparative approach and the other is the empirical one. In the comparative approach the outcomes of different software tools are compared, while in the empirical approach the modelling results are compared with the results of experimental test cases.

  8. Benchmark Calibration Tests Completed for Stirling Convertor Heater Head Life Assessment

    Science.gov (United States)

    Krause, David L.; Halford, Gary R.; Bowman, Randy R.

    2005-01-01

A major phase of benchmark testing has been completed at the NASA Glenn Research Center (http://www.nasa.gov/glenn/), where a critical component of the Stirling Radioisotope Generator (SRG) is undergoing extensive experimentation to aid the development of an analytical life-prediction methodology. Two special-purpose test rigs subjected SRG heater-head pressure-vessel test articles to accelerated creep conditions, using the standard design temperatures to stay within the wall material's operating creep-response regime, but increasing wall stresses up to 7 times over the design point. This resulted in well-controlled "ballooning" of the heater-head hot end. The test plan was developed to provide critical input to analytical parameters in a reasonable period of time.

  9. Design and testing of corrosion damaged prestressed concrete joists: the Pescara Benchmark

    International Nuclear Information System (INIS)

    Di Evangelista, A; De Leonardis, A; Valente, C; Zuccarino, L

    2011-01-01

An experimental campaign named the Pescara benchmark, devoted to studying the dynamic behaviour of corroded p.c. joists, has been conducted. Steel corrosion reduces the area of the reinforcement and causes cracking of the concrete, so that r/c members are subjected to loss of strength and stiffness. It is of interest to evaluate the corrosion level at which the damage can be detected through signal processing procedures, and how close such a level is to the r/c member safety limits. Joists of current industrial production, having different steel-to-concrete ratios, are tested in different laboratory conditions. Dynamic tests involve both free vibrations and forced vibrations due to a moving mass simulating actual traffic loads on railway bridges. The paper discusses the rationale of the tests, including the set-up of the artificial corrosion, the static characterization of the joists, and the dynamic tests in the different stages of corrosion experienced.

  10. Case Report: HIV test misdiagnosis

    African Journals Online (AJOL)

Case Report: HIV test misdiagnosis ... A positive rapid HIV test does not require ... College of Medicine - Johns Hopkins Research Project, Blantyre, Malawi ... test results: a pilot study of three community testing sites.

  11. Calculations to an IAHR-benchmark test using the CFD-code CFX-4

    Energy Technology Data Exchange (ETDEWEB)

    Krepper, E

    1998-10-01

The calculation concerns a test which was defined as a benchmark for 3-D codes by the working group on advanced nuclear reactor types of the IAHR (International Association of Hydraulic Research). The test is well documented and detailed measurement results are available. The test aims at the investigation of phenomena which are important for heat removal under natural circulation conditions in a nuclear reactor. The task for the calculation was the modelling of the forced flow field of a single-phase incompressible fluid, with consideration of heat transfer and the influence of gravity. These phenomena are also typical of other industrial processes, and the importance of modelling them correctly in other applications is a further motivation for performing these calculations. (orig.)

  12. Case Study: Testing with Case Studies

    Science.gov (United States)

    Herreid, Clyde Freeman

    2015-01-01

    This column provides original articles on innovations in case study teaching, assessment of the method, as well as case studies with teaching notes. This month's issue discusses using case studies to test for knowledge or lessons learned.

  13. Detection of sodium/water reaction in a steam generator: Results of a 1995 benchmark test

    International Nuclear Information System (INIS)

    Oriol, L.

    1997-01-01

The CEA analysis of the 1995 benchmark test focused on locating the injections. Two techniques were tested: the pulse-timing technique and the time-domain delay-and-sum beamforming technique. The two methods gave consistent locations of the injector, even though there was a difference of 25% of the SGU height between the vertical locations. Prior to that analysis, the RMS values of the signals were calculated in different frequency bands. The results obtained in the 200-1000 Hz band were used to make a rough estimate of the onset of the injections, in order to determine the parts of the records on which the location signal processing could be carried out. (author). 2 refs, 8 figs, 2 tabs
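For orientation, the delay estimation that underlies both pulse timing and delay-and-sum beamforming can be sketched with a plain cross-correlation on synthetic records. Everything below (delay, signal lengths, noise levels) is invented for illustration and is not the CEA processing chain.

```python
import numpy as np

rng = np.random.default_rng(0)

true_delay = 37                # delay (in samples) at the second sensor
burst = rng.normal(size=2000)  # synthetic broadband "injection noise" burst

# Two sensor records: the same burst, shifted at sensor 2, plus independent noise.
x1 = np.concatenate([burst, np.zeros(100)]) + 0.1 * rng.normal(size=2100)
x2 = (np.concatenate([np.zeros(true_delay), burst, np.zeros(100 - true_delay)])
      + 0.1 * rng.normal(size=2100))

# The lag of the cross-correlation peak estimates the inter-sensor delay.
corr = np.correlate(x2, x1, mode="full")
lags = np.arange(-(len(x1) - 1), len(x2))
estimated_delay = int(lags[np.argmax(corr)])
```

Turning such per-pair delays into a source position along the steam generator is then a geometry problem; summing records shifted by candidate delays is what gives the delay-and-sum beamformer its name.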

  14. Benchmark test of JENDL-3T and -3T/Rev.1

    International Nuclear Information System (INIS)

    Takano, Hideki; Kaneko, Kunio.

    1989-10-01

The fast reactor 70-group constant set JFS-3-J3T has been generated using the JENDL-3T nuclear data. Twenty-one one-dimensional benchmark cores and the ZPPR-9 core were analysed with the JFS-3-J3T set. The results obtained are summarized as follows: (1) The values of keff are underestimated by 0.6% for Pu-fueled cores and overestimated by 2% for U-fueled cores. (2) The central reaction rate ratio of 239Pu fission to 235U fission is in good agreement with the experimental value, though the ratios of 238U capture to 239Pu fission and of 238U fission to 235U fission are overestimated. (3) Doppler and Na-void reactivities are in good agreement with the measured data. (4) The prediction accuracy of radial reaction rate distributions is improved in comparison with the results obtained with the JENDL-2 data. Furthermore, a benchmark test of JENDL-3T/Rev.1, which was revised from JENDL-3T for several important nuclides, has also been performed. It was shown that JENDL-3T/Rev.1 predicts nuclear characteristics more satisfactorily than JENDL-3T. (author)

  15. Issues in Benchmark Metric Selection

    Science.gov (United States)

    Crolotte, Alain

It is true that a metric can influence a benchmark, but will esoteric metrics create more problems than they solve? We answer this question affirmatively by examining the case of the TPC-D metric, which used the much-debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives, our conclusion is that the "real" measure for a decision-support benchmark is the arithmetic mean.
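The effect the author describes is easy to reproduce with a toy calculation; the per-query timings below are invented for illustration and are not TPC-D data.

```python
import math

# Per-query timings in seconds for two hypothetical systems.
system_a = [10.0, 10.0, 10.0, 10.0]   # uniformly average
system_b = [40.0, 8.0, 8.0, 1.0]      # one slow query, one very fast query

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # exp(mean(log x)) -- rewards relative, not absolute, improvements
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# The two metrics rank the systems differently:
#   arithmetic: A = 10.00 s, B = 14.25 s  -> A wins (tracks total elapsed time)
#   geometric:  A = 10.00 s, B ~  7.11 s  -> B wins (the fast query dominates)
```

This is the crux of the debate: the geometric mean lets one pathologically fast query outweigh a query that is 4x slower than everything else, which is why the author argues for the arithmetic mean in decision-support benchmarks.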

  16. Posture Control—Human-Inspired Approaches for Humanoid Robot Benchmarking: Conceptualizing Tests, Protocols and Analyses

    Directory of Open Access Journals (Sweden)

    Thomas Mergner

    2018-05-01

Posture control is indispensable for both humans and humanoid robots, which becomes especially evident when performing sensorimotor tasks such as moving on compliant terrain or interacting with the environment. Posture control is therefore targeted in recent proposals of robot benchmarking in order to advance their development. This Methods article suggests corresponding robot tests of standing balance, drawing inspiration from the human sensorimotor system and presenting examples from robot experiments. To account for a considerable technical and algorithmic diversity among robots, our tests focus on basic posture control mechanisms, which provide humans with an impressive postural versatility and robustness. Specifically, we focus on the mechanically challenging balancing of the whole body above the feet in the sagittal plane around the ankle joints, in concert with the upper body balancing around the hip joints. The suggested tests target three key issues of human balancing, which appear equally relevant for humanoid bipeds: (1) four basic physical disturbances (support surface (SS) tilt and translation, field forces, and contact forces) may affect balancing in any given degree of freedom (DoF); targeting these disturbances allows us to abstract from the manifold of possible behavioral tasks. (2) Posture control interacts in a conflict-free way with the control of voluntary movements for undisturbed movement execution, both with "reactive" balancing of external disturbances and "proactive" balancing of self-produced disturbances from the voluntary movements; our proposals therefore target both types of disturbances and their superposition. (3) Relevant for both versatility and robustness of the control, linkages between the posture control mechanisms across DoFs provide their functional cooperation and coordination at will and on functional demands; the suggested tests therefore include ankle-hip coordination. Suggested benchmarking ...

  17. Benchmarking the implementation of E-Commerce: A Case Study Approach

    OpenAIRE

    von Ettingshausen, C. R. D. Freiherr

    2009-01-01

    The purpose of this thesis was to develop a guideline to support the implementation of E-Commerce with E-Commerce benchmarking. Because of its importance as an interface with the customer, web-site benchmarking has been a widely researched topic. However, limited research has been conducted on benchmarking E-Commerce across other areas of the value chain. Consequently this thesis aims to extend benchmarking into E-Commerce related subjects. The literature review examined ...

  18. Benchmark test of MORSE-DD code using double-differential form cross sections

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki; Mori, Takamasa; Ishiguro, Yukio

    1985-02-01

The multi-group double-differential form cross sections (DDX) and the three-dimensional Monte Carlo code MORSE-DD, devised to utilize the DDX, which were developed for fusion neutronics analysis, have been validated through many benchmark tests. All the problems tested have a 14 MeV neutron source. To compare the calculated results with the measured values, the following experiments were adopted as benchmark problems: leakage neutron spectra from spheres composed of nine kinds of materials measured at LLNL, neutron angular spectra from the Li2O slab measured at FNS in JAERI, the tritium production rate (TPR) in the graphite-reflected Li2O sphere measured at FNS, and the TPR in the metallic Li sphere measured at KfK. In addition, in order to test the accuracy of the calculation method in detail, the spectra of neutrons scattered from a small sample and various reaction rates in a Li2O cylinder were compared between the present method and the continuous energy Monte Carlo method. The nuclear data files used are mainly ENDF/B-IV and partly JENDL-3PR1. The tests were carried out through a comparison with the measured values and also with the results obtained from the conventional Legendre expansion method and the continuous energy Monte Carlo method. It is found that the results by the present method are more accurate than those by the conventional one and agree well with those by the continuous energy Monte Carlo calculations. Discrepancies due to the nuclear data are also discussed. (author)

  19. Towards the Estimation of an Efficient Benchmark Portfolio: The Case of Croatian Emerging Market

    Directory of Open Access Journals (Sweden)

    Dolinar Denis

    2017-04-01

The fact that cap-weighted indices provide an inefficient risk-return trade-off is well known today. Various research approaches have evolved, suggesting alternatives to cap-weighting in an effort to come up with a more efficient market index benchmark. In this paper we aim to use such an approach, focusing on the Croatian capital market. We apply the statistical shrinkage method suggested by Ledoit and Wolf (2004) to estimate the covariance matrix, and follow the work of Amenc et al. (2011) to obtain estimates of expected returns that rely on the risk-return trade-off. Empirical findings for the proposed portfolio optimization include out-of-sample and robustness testing. In this way we compare the performance of the cap-weighted benchmark to the alternative and ensure that consistency is achieved in different volatility environments. Research findings do not seem to support relevant research results for the developed markets, but rather complement earlier research (Zoričić et al., 2014).
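For readers unfamiliar with shrinkage estimation: it blends the noisy sample covariance with a structured target. The sketch below uses a fixed shrinkage intensity and synthetic returns; the actual Ledoit-Wolf (2004) estimator derives the optimal intensity from the data, so this is only an illustration of the idea, not the paper's method.

```python
import numpy as np

def shrink_covariance(returns, delta):
    """Blend the sample covariance with a scaled-identity target.

    returns : (T, N) array, T periods of returns for N assets
    delta   : shrinkage intensity in [0, 1] (fixed here; Ledoit and Wolf
              estimate it optimally from the data)
    """
    S = np.cov(returns, rowvar=False)  # noisy sample covariance, N x N
    mu = np.trace(S) / S.shape[0]      # average variance across assets
    target = mu * np.eye(S.shape[0])   # well-conditioned structured target
    return (1.0 - delta) * S + delta * target

# Synthetic data: 60 periods, 10 assets -- few observations relative to the
# number of assets is exactly the regime where shrinkage helps.
rng = np.random.default_rng(42)
returns = rng.normal(scale=0.02, size=(60, 10))
sigma = shrink_covariance(returns, delta=0.3)
```

For delta > 0 the shrunk matrix is symmetric positive definite, so it can be inverted safely inside a minimum-variance or maximum-Sharpe optimization, which is the practical reason shrinkage is attractive for benchmark-portfolio construction.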

  20. Benchmarks for the Dichotic Sentence Identification test in Brazilian Portuguese for ear and age.

    Science.gov (United States)

    Andrade, Adriana Neves de; Gil, Daniela; Iorio, Maria Cecilia Martinelli

    2015-01-01

Dichotic listening tests should be applied in the local language and adapted for the population. The objective was to standardize the Brazilian Portuguese version of the Dichotic Sentence Identification test in normal listeners, comparing performance by age and ear. This prospective study included 200 normal listeners divided into four groups according to age: 13-19 years (GI), 20-29 years (GII), 30-39 years (GIII), and 40-49 years (GIV). The Dichotic Sentence Identification test was applied in four stages: training, binaural integration, and directed sound from the right and left. Better results for the right ear were observed in the binaural integration stages in all assessed groups. There was a negative correlation between age and the percentage of correct responses in both ears for free report and training. The worst performance in all stages of the test was observed for the 40-49 years age group. Reference values for the Brazilian Portuguese version of the Dichotic Sentence Identification test in normal listeners aged 13-49 years were established according to age, ear, and test stage; they should be used as benchmarks when evaluating individuals with these characteristics. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  1. Benchmarking for Higher Education.

    Science.gov (United States)

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  2. An Eigenstructure Assignment Approach to FDI for the Industrial Actuator Benchmark Test

    DEFF Research Database (Denmark)

    Jørgensen, R.B.; Patton, R.J.; Chen, J.

    1995-01-01

This paper examines the robustness to modelling uncertainties of an observer-based fault detection and isolation scheme applied to the industrial actuator benchmark problem.

  3. INTEGRAL BENCHMARK DATA FOR NUCLEAR DATA TESTING THROUGH THE ICSBEP AND THE NEWLY ORGANIZED IRPHEP

    Energy Technology Data Exchange (ETDEWEB)

J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Sartori

    2007-04-01

The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) was last reported at the International Conference on Nuclear Data for Science and Technology (ND-2004) in Santa Fe, New Mexico. Since that time the number and type of integral benchmarks have increased significantly. In addition to the traditional critical/subcritical benchmark data, the ICSBEP Handbook now includes criticality-alarm/shielding and fundamental-physics benchmarks. Since ND-2004, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. The IRPhEP is patterned after the ICSBEP, but focuses on other integral measurements, such as buckling, spectral characteristics, reactivity effects, reactivity coefficients, kinetics measurements, reaction-rate and power distributions, nuclide compositions, and other miscellaneous measurements, in addition to the critical configuration. The status of these two projects is discussed and selected benchmarks are highlighted in this paper.

  4. Analytical benchmarks for nuclear engineering applications. Case studies in neutron transport theory

    International Nuclear Information System (INIS)

    2008-01-01

    The developers of computer codes involving neutron transport theory for nuclear engineering applications seldom apply analytical benchmarking strategies to ensure the quality of their programs. A major reason for this is the lack of analytical benchmarks and their documentation in the literature. The few such benchmarks that do exist are difficult to locate, as they are scattered throughout the neutron transport and radiative transfer literature. The motivation for this benchmark compendium, therefore, is to gather several analytical benchmarks appropriate for nuclear engineering applications under one cover. We consider the following three subject areas: neutron slowing down and thermalization without spatial dependence, one-dimensional neutron transport in infinite and finite media, and multidimensional neutron transport in a half-space and an infinite medium. Each benchmark is briefly described, followed by a detailed derivation of the analytical solution representation. Finally, a demonstration of the evaluation of the solution representation includes qualified numerical benchmark results. All accompanying computer codes are suitable for the PC computational environment and can serve as educational tools for courses in nuclear engineering. While this benchmark compilation does not contain all possible benchmarks, by any means, it does include some of the most prominent ones and should serve as a valuable reference. (author)
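    The benchmarking workflow this compendium advocates can be sketched in a few lines: a numerical result is checked against a closed-form analytical solution and the relative error is reported. The example below is a generic illustration, not taken from the compendium itself; the quadrature problem and tolerance are chosen purely for demonstration.

```python
# A minimal illustration (not from the compendium) of checking a numerical
# method against an analytical benchmark: integrate exp(x) on [0, 1], whose
# exact value e - 1 serves as the benchmark, and report the relative error.
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n uniform panels."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

numeric = trapezoid(math.exp, 0.0, 1.0, 1000)
analytic = math.e - 1.0                         # closed-form benchmark value
rel_err = abs(numeric - analytic) / analytic    # quality metric for the code
```

    A code that reproduces the benchmark value within its expected discretization error (here of order h², so roughly 1e-7) passes the check; a larger error flags an implementation defect.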

  5. Benchmark Tests to Develop Analytical Time-Temperature Limit for HANA-6 Cladding for Compliance with New LOCA Criteria

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sung Yong; Jang, Hun; Lim, Jea Young; Kim, Dae Il; Kim, Yoon Ho; Mok, Yong Kyoon [KEPCO Nuclear Fuel Co. Ltd., Daejeon (Korea, Republic of)

    2016-10-15

According to 10CFR50.46c, two analytical time and temperature limits, for breakaway oxidation and post-quench ductility (PQD), should be determined by an approved experimental procedure as described in NRC Regulatory Guides (RG) 1.222 and 1.223. RG 1.222 and 1.223 impose rigorous qualification requirements on the test system, such as thermal and weight-gain benchmarks. In order to meet these requirements, KEPCO NF has developed a new dedicated facility to evaluate the LOCA performance of zirconium alloy cladding. In this paper, qualification results for the test facility and the high-temperature (HT) oxidation model for HANA-6 are summarized. The results of the thermal benchmark tests of the LOCA HT oxidation tester are summarized as follows. 1. A best-estimate HT oxidation model of HANA-6 was developed for the vendor proprietary HT oxidation model. 2. In accordance with RG 1.222 and 1.223, benchmark tests were performed using the LOCA HT oxidation tester. 3. The maximum axial and circumferential temperature differences are ±9 °C and ±2 °C at 1200 °C, respectively; at the other temperature conditions, the temperature differences are smaller than the 1200 °C result. The thermal benchmark test results meet the requirements of NRC RG 1.222 and 1.223.

  6. Benchmarking antibiotic use in Finnish acute care hospitals using patient case-mix adjustment.

    Science.gov (United States)

    Kanerva, Mari; Ollgren, Jukka; Lyytikäinen, Outi

    2011-11-01

It is difficult to draw conclusions about the prudence of antibiotic use in different hospitals by directly comparing usage figures. We present a patient case-mix adjustment model of antibiotic use to rank hospitals while taking patient characteristics into account. Data on antibiotic use were collected during the national healthcare-associated infection (HAI) prevalence survey in 2005 in Finland in all 5 tertiary care, all 15 secondary care and 10 (25% of 40) other acute care hospitals. The use of antibiotics was measured using use-days/100 patient-days during a 7-day period and the prevalence of patients receiving at least two antimicrobials during the study day. Case-mix-adjusted antibiotic use was calculated by using multivariate models and an indirect standardization method. Parameters in the model included age, sex, severity of underlying diseases, intensive care, haematology, preceding surgery, respirator, central venous and urinary catheters, community-associated infection, HAI and contact isolation due to methicillin-resistant Staphylococcus aureus. The ranking order changed by one position in 12 (40%) hospitals and by more than two positions in 13 (43%) hospitals when the case-mix-adjusted figures were compared with those observed. In 24 hospitals (80%), the observed antibiotic use density was lower than expected from the case-mix-adjusted use density. The patient case-mix adjustment of antibiotic use ranked the hospitals differently from the ranking according to observed use, and may be a useful tool for benchmarking hospital antibiotic use. However, the best set of easily and widely available parameters that would describe both patient material and hospital activities remains to be determined.
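    The indirect standardization idea described above can be sketched as follows. Everything in this snippet (the toy linear model, its coefficients, and the two hospitals' data) is a hypothetical illustration, not the Finnish survey's actual multivariate model or parameters.

```python
# Hedged sketch of indirect standardization for case-mix adjustment.
# Model form, coefficients, and patient data are hypothetical.

def expected_use_days(patients, coef):
    """Expected antibiotic use-days for a hospital's patient mix,
    summed from a toy patient-level linear model."""
    return sum(coef["base"] + coef["age"] * p["age"] + coef["icu"] * p["icu"]
               for p in patients)

hospitals = {
    "A": {"observed": 120, "patients": [{"age": 70, "icu": 1}] * 40},
    "B": {"observed": 95,  "patients": [{"age": 50, "icu": 0}] * 40},
}
coef = {"base": 1.0, "age": 0.02, "icu": 1.5}

# Observed-to-expected ratio (indirect standardization): a ratio below 1
# means the hospital uses fewer antibiotics than its case mix predicts.
ratios = {name: h["observed"] / expected_use_days(h["patients"], coef)
          for name, h in hospitals.items()}

raw_rank = sorted(hospitals, key=lambda n: hospitals[n]["observed"])
adjusted_rank = sorted(ratios, key=ratios.get)  # adjustment can reorder hospitals
```

    In this toy example hospital A has the higher raw usage, but its sicker case mix gives it the lower observed-to-expected ratio, so the adjusted ranking reverses the raw one, which is exactly the reordering effect the study reports.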

  7. Quality Assurance Testing of Version 1.3 of U.S. EPA Benchmark Dose Software (Presentation)

    Science.gov (United States)

EPA benchmark dose software (BMDS) is used to evaluate chemical dose-response data in support of Agency risk assessments, and must therefore be dependable. Quality assurance testing methods developed for BMDS were designed to assess model dependability with respect to curve-fitt...

  8. Benchmarking in Czech Higher Education: The Case of Schools of Economics

    Science.gov (United States)

    Placek, Michal; Ochrana, František; Pucek, Milan

    2015-01-01

    This article describes the use of benchmarking in universities in the Czech Republic and academics' experiences with it. It is based on research conducted among academics from economics schools in Czech public and private universities. The results identified several issues regarding the utilisation and understanding of benchmarking in the Czech…

  9. Reevaluation of the Case, de Hoffmann, and Placzek one-group neutron transport benchmark solution in plane geometry

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1986-01-01

In a course on neutron transport theory, and also in the analytical neutron transport theory literature, the pioneering work of Case et al. (CdHP) is often referenced. This work was truly a monumental effort in that it treated in detail the fundamental mathematical properties of the one-group neutron Boltzmann equation as well as the numerical evaluation of most of the resulting solutions. Many mathematically and numerically oriented dissertations were based on this classic monograph. In light of the considerable advances made both in numerical methods and in computer technology since 1953, when the historic CdHP monograph first appeared, it seems appropriate to reevaluate the numerical benchmark solutions found therein with present-day computational technology. In most transport theory courses, the subject of proper benchmarking of numerical algorithms and transport codes is seldom addressed at any great length. This may be the reason that the benchmarking procedure is so rarely practiced in the nuclear community and, when practiced, is often improperly applied. In this presentation, the development of a new benchmark for the one-group neutron flux in an infinite medium is detailed, with emphasis placed on the educational aspects of the benchmarking activity.

  10. TRACE coupled with PARCS benchmark against Leibstadt plant data during the turbine trip test

    Energy Technology Data Exchange (ETDEWEB)

    Sekhri, Abdelkrim; Baumann, Peter, E-mail: abdelkrim.sekhri@kkl.ch, E-mail: peter.Baumann@kkl.ch [KernkraftwerkLeibstadt AG, Leibstadt (Switzerland); Hidalga, Patricio; Morera, Daniel; Miro, Rafael; Barrachina, Teresa; Verdu, Gumersindo, E-mail: pathigar@etsii.upv.es, E-mail: dmorera@isirym.upv.es, E-mail: rmiro@isirym.upv.es, E-mail: tbarrachina@isirym.upv.es, E-mail: gverdu@isirym.upv.es [Universitat Politecnica de Valencia (ISIRYM/UPV), Valencia, (Spain). Institute for Industrial, Radiophysical and Environmental Safety

    2013-07-01

In order to enhance the modeling of the Leibstadt Nuclear Power Plant (KKL), a coupling of the 3D neutron kinetics code PARCS with TRACE has been developed. To test its performance, a complex turbine trip transient was simulated and the results compared with the existing plant data from the turbine trip test. Cross sections were also generated for this transient and used by PARCS. The thermal-hydraulic TRACE model was retrieved from the already existing model. For the benchmarking, the turbine trip transient was simulated according to the test, resulting in the closure of the turbine control valve (TCV) and the subsequent opening of the bypass valve (TBV). This transient sent a pressure wave towards the reactor pressure vessel (RPV), which decreased the void level and caused a consequent slight power excursion. The power control capacity of the system showed a good response, with the selected rod insertion (SRI) procedure and the recirculation loop performance yielding a thermal power reduction comparable to the APRM data recorded at the plant. The comparison with plant data shows good agreement in general and confirms the performance of the coupled model. It can therefore be concluded that the coupling of the PARCS and TRACE codes, together with the cross sections used, successfully simulates the behavior of the reactor core during complex plant transients. Nevertheless, the TRACE model shall be improved, and the core neutronics corresponding to the test shall be used in the future to allow a quantitative comparison between TRACE and plant recorded data. (author)

  11. International Benchmark on Numerical Simulations for 1D, Nonlinear Site Response (PRENOLIN) : Verification Phase Based on Canonical Cases

    NARCIS (Netherlands)

    Régnier, Julie; Bonilla, Luis-Fabian; Bard, Pierre-Yves; Bertrand, Etienne; Hollender, Fabrice; Kawase, Hiroshi; Sicilia, Deborah; Arduino, Pedro; Amorosi, Angelo; Asimaki, Dominiki; Pisano, F.

    2016-01-01

    PREdiction of NOn‐LINear soil behavior (PRENOLIN) is an international benchmark aiming to test multiple numerical simulation codes that are capable of predicting nonlinear seismic site response with various constitutive models. One of the objectives of this project is the assessment of the

  12. Benchmarking Alumni Relations in Community Colleges: Findings from a 2015 CASE Survey. CASE White Paper

    Science.gov (United States)

    Paradise, Andrew

    2016-01-01

    Building on the inaugural survey conducted three years prior, the 2015 CASE Community College Alumni Relations survey collected additional insightful data on staffing, structure, communications, engagement, and fundraising. This white paper features key data on alumni relations programs at community colleges across the United States. The paper…

  13. Benchmarking Alumni Relations in Community Colleges: Findings from a 2012 CASE Survey. CASE White Paper

    Science.gov (United States)

    Paradise, Andrew; Heaton, Paul

    2013-01-01

    In 2011, CASE founded the Center for Community College Advancement to provide training and resources to help community colleges build and sustain effective fundraising, alumni relations and communications and marketing programs. This white paper summarizes the results of a groundbreaking survey on alumni relations programs at community colleges…

  14. PHOTOCHEMISTRY IN TERRESTRIAL EXOPLANET ATMOSPHERES. I. PHOTOCHEMISTRY MODEL AND BENCHMARK CASES

    Energy Technology Data Exchange (ETDEWEB)

    Hu Renyu; Seager, Sara; Bains, William, E-mail: hury@mit.edu [Department of Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)

    2012-12-20

We present a comprehensive photochemistry model for exploration of the chemical composition of terrestrial exoplanet atmospheres. The photochemistry model is designed from the ground up to have the capacity to treat all types of terrestrial planet atmospheres, ranging from oxidizing through reducing, which makes the code suitable for applications for the wide range of anticipated terrestrial exoplanet compositions. The one-dimensional chemical transport model treats up to 800 chemical reactions, photochemical processes, dry and wet deposition, surface emission, and thermal escape of O-, H-, C-, N-, and S-bearing species, as well as formation and deposition of elemental sulfur and sulfuric acid aerosols. We validate the model by computing the atmospheric composition of current Earth and Mars and find agreement with observations of major trace gases in Earth's and Mars' atmospheres. We simulate several plausible atmospheric scenarios of terrestrial exoplanets and choose three benchmark cases for atmospheres from reducing to oxidizing. The most interesting finding is that atomic hydrogen is always a more abundant reactive radical than the hydroxyl radical in anoxic atmospheres. Whether atomic hydrogen is the most important removal path for a molecule of interest also depends on the relevant reaction rates. We also find that volcanic carbon compounds (i.e., CH4 and CO2) are chemically long-lived and tend to be well mixed in both reducing and oxidizing atmospheres, and their dry deposition velocities to the surface control the atmospheric oxidation states. Furthermore, we revisit whether photochemically produced oxygen can cause false positives for detecting oxygenic photosynthesis, and find that in 1 bar CO2-rich atmospheres oxygen and ozone may build up to levels that have conventionally been accepted as signatures of life, if there is no surface emission of reducing gases. The atmospheric scenarios presented in this paper can serve as the…

  15. Benchmark tests and spin adaptation for the particle-particle random phase approximation

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yang; Steinmann, Stephan N.; Peng, Degao [Department of Chemistry, Duke University, Durham, North Carolina 27708 (United States); Aggelen, Helen van, E-mail: Helen.VanAggelen@UGent.be [Department of Chemistry, Duke University, Durham, North Carolina 27708 (United States); Department of Inorganic and Physical Chemistry, Ghent University, 9000 Ghent (Belgium); Yang, Weitao, E-mail: Weitao.Yang@duke.edu [Department of Chemistry and Department of Physics, Duke University, Durham, North Carolina 27708 (United States)

    2013-11-07

The particle-particle random phase approximation (pp-RPA) provides an approximation to the correlation energy in density functional theory via the adiabatic connection [H. van Aggelen, Y. Yang, and W. Yang, Phys. Rev. A 88, 030501 (2013)]. It has virtually no delocalization error and no static correlation error for single-bond systems. However, with its formal O(N^6) scaling, the pp-RPA is computationally expensive. In this paper, we implement a spin-separated and spin-adapted pp-RPA algorithm, which reduces the computational cost by a substantial factor. We then perform benchmark tests on the G2/97 enthalpies of formation database, the DBH24 reaction barrier database, and four test sets for non-bonded interactions (HB6/04, CT7/04, DI6/04, and WI9/04). For the G2/97 database, the pp-RPA gives a significantly smaller mean absolute error (8.3 kcal/mol) than the direct particle-hole RPA (ph-RPA) (22.7 kcal/mol). Furthermore, the error in the pp-RPA is nearly constant with the number of atoms in a molecule, while the error in the ph-RPA increases. For chemical reactions involving typical organic closed-shell molecules, pp- and ph-RPA both give accurate reaction energies. Similarly, both RPAs perform well for reaction barriers and non-bonded interactions. These results suggest that the pp-RPA gives reliable energies in chemical applications. The adiabatic connection formalism based on pairing matrix fluctuation is therefore expected to lead to widely applicable and accurate density functionals.

  16. DECOVALEX I - Bench-Mark Test 3: Thermo-hydro-mechanical modelling

    International Nuclear Information System (INIS)

    Israelsson, J.

    1995-12-01

The bench-mark test concerns the excavation of a tunnel, located 500 m below the ground surface, and the establishment of mechanical equilibrium and steady-state fluid flow. Following this, thermal heating due to the nuclear waste, stored in a borehole below the tunnel, was simulated. The results are reported at (1) 30 days after tunnel excavation, (2) steady state, (3) one year after thermal loading, and (4) the time of maximum temperature. The problem specification included the excavation and waste geometry, material properties for intact rock and joints, the location of more than 6500 joints observed in the 50 by 50 m area, and calculated hydraulic conductivities. However, due to the large number of joints and the lack of dominating orientations, it was decided to treat the problem as a continuum using the computer code FLAC. The problem was modeled using a vertical symmetry plane through the tunnel and the borehole. Flow equilibrium was obtained approx. 40 days after the opening of the tunnel. Since the hydraulic conductivity was set to be stress dependent, a noticeable difference in the horizontal and vertical conductivity and flow was observed. After 40 days, an oedometer-type consolidation of the model was observed. Approx. 4 years after the initiation of the heat source, a maximum temperature of 171 °C was obtained. The stress-dependent hydraulic conductivity and the temperature-dependent dynamic viscosity caused minor changes to the flow pattern. The specified mechanical boundary conditions imply that the tunnel is part of a system of parallel tunnels. However, the fixed temperature at the top boundary maintains the temperature below that anticipated for an equivalent repository. The combination of mechanical and hydraulic boundary conditions causes the model to behave like an oedometer test in which the consolidation rate goes asymptotically to zero. 17 refs, 55 figs, 22 tabs

  17. Post-test analyses of REVISA benchmark based on a creep test at 1100 °C performed on a notched tube

    International Nuclear Information System (INIS)

    Fischer, M.; Bernard, A.; Bhandari, S.

    2001-01-01

In the Euratom 4th Framework Programme of the European Commission, the REVISA Project deals with Reactor Vessel Integrity under Severe Accidents. One of the tasks consists of the experimental validation of the models developed in the project. To do this, a benchmark was designed in which the participants use their models to test the results against an experiment. The experiment, called RUPTHER 15, was conducted by the coordinating organisation, CEA (Commissariat a l'Energie Atomique) in France. It is a 'delayed fracture' test on a notched tube. The thermal loading is an axial gradient with a temperature of about 1130 °C in the mid-part. Internal pressure is maintained at 0.8 MPa. This paper presents the results of finite element calculations performed by Framatome-ANP using the SYSTUS code. Two types of analyses were made: one based on the 'time hardening' Norton-Bailey creep law, the other based on the coupled creep/damage Lemaitre-Chaboche model. The purpose of this paper is in particular to show the influence of temperature on the simulation results. At the high temperatures dealt with here, slight errors in the temperature measurements can lead to very large differences in the deformation behaviour. (authors)

  18. Theory Testing Using Case Studies

    DEFF Research Database (Denmark)

    Møller, Ann-Kristina Løkke; Dissing Sørensen, Pernille

    2014-01-01

The appropriateness of case studies as a tool for theory testing is still a controversial issue, and discussions about the weaknesses of such research designs have previously taken precedence over those about their strengths. The purpose of the paper is to examine and revive the approach of theory testing using case studies, including the associated research goal, analysis, and generalisability. We argue that research designs for theory testing using case studies differ from theory-building case study research designs because different research projects serve different purposes and follow different research paths.

  19. Theory testing using case studies

    DEFF Research Database (Denmark)

    Dissing Sørensen, Pernille; Løkke Nielsen, Ann-Kristina

    2006-01-01

Case studies may have different research goals. One such goal is the testing of small-scale and middle-range theories. Theory testing refers to the critical examination, observation, and evaluation of the 'why' and 'how' of a specified phenomenon in a particular setting. In this paper, we focus on the strengths of theory-testing case studies. We specify research paths associated with theory testing in case studies and present a coherent argument for the logic of theoretical development and refinement using case studies. We emphasize different uses of rival explanations and their implications for research design. Finally, we discuss the epistemological logic, i.e., the value to larger research programmes, of such studies and, following Lakatos, conclude that the value of theory-testing case studies lies beyond naïve falsification and in their contribution to developing research programmes in a progressive…

  20. Turbine-missile casing exit tests

    International Nuclear Information System (INIS)

    Yoshimura, H.R.; Sliter, G.E.

    1978-01-01

Nuclear power plant designers are required to provide safety-related components with adequate protection against hypothetical turbine-missile impacts. In plants with a "peninsula" arrangement, protection is provided by installing the turbine axis radially from the reactor building, so that potential missile trajectories are not in line with the plant. In plants with a "non-peninsula" arrangement (turbine axis perpendicular to a radius), designers rely on the low probability of a missile strike and on the protection provided by reinforced concrete walls in order to demonstrate an adequate level of protection (USNRC Regulatory Guide 1.115). One of the critical first steps in demonstrating adequacy is the determination of the energy and spin of the turbine segments as they exit the turbine casing. The spin increases the probability that a subsequent impact with a protective barrier will be off-normal and therefore less severe than the normal impact assumed in plant designs. Two full-scale turbine-missile casing exit tests, conducted by Sandia Laboratories at their rocket-sled facility in Albuquerque, New Mexico, are described. Because of wide variations in turbine design details, postulated failure conditions, and missile exit scenarios, the conditions for the two tests were carefully selected to be as prototypical as possible, while still maintaining the well-controlled and well-characterized test conditions needed for generating benchmark data.

  1. Copper benchmark experiment for the testing of JEFF-3.2 nuclear data for fusion applications

    Directory of Open Access Journals (Sweden)

    Angelone M.

    2017-01-01

A neutronics benchmark experiment on a pure copper block (dimensions 60 × 70 × 70 cm3), aimed at testing and validating the recent nuclear data libraries for fusion applications, was performed in the frame of the European Fusion Program at the 14 MeV ENEA Frascati Neutron Generator (FNG). Reaction rates, neutron flux spectra and doses were measured using different experimental techniques (e.g. activation foil techniques, an NE213 scintillator and thermoluminescent detectors). This paper first summarizes the analyses of the experiment carried out using the MCNP5 Monte Carlo code and the European JEFF-3.2 library. Large discrepancies between calculation (C) and experiment (E) were found for the reaction rates in both the high and low neutron energy ranges. The analysis was complemented by sensitivity/uncertainty (S/U) analyses using the deterministic SUSD3D and Monte Carlo MCSEN codes, respectively. The S/U analyses made it possible to identify the cross sections and energy ranges that most affect the calculated responses. The largest discrepancy among the C/E values was observed for the thermal (capture) reactions, indicating severe deficiencies in the 63,65Cu capture and elastic cross sections at lower rather than at high energies. Deterministic and MC codes produced similar results. The 14 MeV copper experiment and its analysis thus call for a revision of the JEFF-3.2 copper cross section and covariance data evaluation. A new analysis of the experiment was performed with the MCNP5 code using the revised JEFF-3.3-T2 library released by NEA and a new, not yet distributed, revised JEFF-3.2 Cu evaluation produced by KIT. A noticeable improvement of the C/E results was obtained with both new libraries.

  2. Copper benchmark experiment for the testing of JEFF-3.2 nuclear data for fusion applications

    Science.gov (United States)

    Angelone, M.; Flammini, D.; Loreti, S.; Moro, F.; Pillon, M.; Villar, R.; Klix, A.; Fischer, U.; Kodeli, I.; Perel, R. L.; Pohorecky, W.

    2017-09-01

A neutronics benchmark experiment on a pure copper block (dimensions 60 × 70 × 70 cm3), aimed at testing and validating the recent nuclear data libraries for fusion applications, was performed in the frame of the European Fusion Program at the 14 MeV ENEA Frascati Neutron Generator (FNG). Reaction rates, neutron flux spectra and doses were measured using different experimental techniques (e.g. activation foil techniques, an NE213 scintillator and thermoluminescent detectors). This paper first summarizes the analyses of the experiment carried out using the MCNP5 Monte Carlo code and the European JEFF-3.2 library. Large discrepancies between calculation (C) and experiment (E) were found for the reaction rates in both the high and low neutron energy ranges. The analysis was complemented by sensitivity/uncertainty (S/U) analyses using the deterministic SUSD3D and Monte Carlo MCSEN codes, respectively. The S/U analyses made it possible to identify the cross sections and energy ranges that most affect the calculated responses. The largest discrepancy among the C/E values was observed for the thermal (capture) reactions, indicating severe deficiencies in the 63,65Cu capture and elastic cross sections at lower rather than at high energies. Deterministic and MC codes produced similar results. The 14 MeV copper experiment and its analysis thus call for a revision of the JEFF-3.2 copper cross section and covariance data evaluation. A new analysis of the experiment was performed with the MCNP5 code using the revised JEFF-3.3-T2 library released by NEA and a new, not yet distributed, revised JEFF-3.2 Cu evaluation produced by KIT. A noticeable improvement of the C/E results was obtained with both new libraries.
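    The C/E comparison underlying this kind of analysis can be sketched as follows. The reaction-rate values and uncertainties below are hypothetical placeholders, and the combined uncertainty assumes independent relative errors on calculation and experiment.

```python
# Hedged sketch of a calculation-to-experiment (C/E) comparison.
# Values are illustrative, not the FNG experiment's actual data.
import math

def c_over_e(calc, meas, rel_unc_c, rel_unc_e):
    """C/E ratio with combined relative uncertainty, assuming the
    calculated and measured relative errors are uncorrelated."""
    ratio = calc / meas
    rel_unc = math.sqrt(rel_unc_c**2 + rel_unc_e**2)
    return ratio, ratio * rel_unc

# Hypothetical capture reaction rate: calculated vs. measured
ratio, unc = c_over_e(calc=1.30e-24, meas=1.00e-24,
                      rel_unc_c=0.05, rel_unc_e=0.04)

# A C/E far from 1 relative to its uncertainty flags a library deficiency
discrepant = abs(ratio - 1.0) > 2.0 * unc
```

    With these illustrative numbers the ratio is 1.30 with roughly 6% uncertainty, so the 30% deviation from unity is significant, which is the kind of signal that motivates a cross-section re-evaluation.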

  3. RSG Deployment Case Testing Results

    Energy Technology Data Exchange (ETDEWEB)

    Owsley, Stanley L.; Dodson, Michael G.; Hatchell, Brian K.; Seim, Thomas A.; Alexander, David L.; Hawthorne, Woodrow T.

    2005-09-01

The RSG deployment case design centers on producing a transport case that houses the RSG system safely and in a controlled manner during transport. The design was driven by two conflicting constraints: first, that the case be as light as possible, and second, that it meet a stringent list of Military Specification requirements. The design team worked to extract every bit of weight from the design while striving to meet the rigorous Mil-Spec constraints. In the end, compromises were made, primarily on the specification side, to control the overall weight of the transport case. This report outlines the case testing results.

  4. Simulation with Different Turbulence Models in an Annex 20 Benchmark Test using Star-CCM+

    DEFF Research Database (Denmark)

    Le Dreau, Jerome; Heiselberg, Per; Nielsen, Peter V.

    The purpose of this investigation is to compare the different flow patterns obtained for the 2D isothermal test case defined in Annex 20 (1990) using different turbulence models. The different results are compared with the existing experimental data. Similar study has already been performed by Rong...

  5. Organisational aspects and benchmarking of e-learning initiatives: a case study with South African community health workers.

    Science.gov (United States)

    Reisach, Ulrike; Weilemann, Mitja

    2016-06-01

South Africa desperately needs a comprehensive approach to fight HIV/AIDS. Education is crucial to reach this goal, and the Internet and e-learning could offer huge opportunities to broaden and deepen the knowledge base. But due to the huge societal and digital divide between rich and poor areas, e-learning is difficult to realize in the townships. Community health workers often act as mediators and coaches for people seeking medical and personal help. They could give good advice regarding hygiene, nutrition, protection of family members in cases of HIV/AIDS, and finding legal ways to earn one's living if they were trained to do so. They therefore need broader general knowledge. Since learning opportunities in the townships are scarce, a system for e-learning has to be created in order to overcome the lack of experience with computers or the Internet and to enable them to implement a network of expertise. The article describes how the best international resources on basic medical knowledge, HIV/AIDS, and basic economic and entrepreneurial skills were benchmarked for integration into an e-learning system. After tests with community health workers, researchers developed recommendations on building a self-sustaining system for learning, including a network of expertise and best-practice sharing. The article explains the opportunities and challenges for community health workers, which could provide information for other parts of the world with similar preconditions of rural poverty. © The Author(s) 2015.

  6. Experiments on crushed salt consolidation with true triaxial testing device as a contribution to an EC Benchmark exercise

    International Nuclear Information System (INIS)

    Korthaus, E.

    1998-10-01

A benchmark laboratory test on crushed salt consolidation is described that was performed twice with the true triaxial testing device developed by INE. The test was defined as an anisothermal hydrostatic multi-step test, with six creep periods and 45 days total duration. In the repetition test, an additional technique was applied for the first time in order to further reduce wall-friction effects in the triaxial device. In both tests the sample strains were measured with high precision, allowing a reliable determination of the consolidation rates during the creep periods. Changes in consolidation rates during load reductions were used to calculate the stress exponent of the constitutive model. Elastic compression moduli were determined at three compaction stages in the first test using fast stress changes. The test results are compared with the model calculations performed by INE before the test within the benchmark project. A preliminary comparison of the test results with those of the other participants is given. The comparison of the results of both tests shows that wall friction has only a moderate effect on the measurements with the true triaxial device. (orig.)

  7. The Monte Carlo performance benchmark test - AIMS, specifications and first results

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Faculty of Applied Sciences, Delft University of Technology (Netherlands); Martin, William R., E-mail: wrm@umich.edu [Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI (United States); Petrovic, Bojan, E-mail: Bojan.Petrovic@gatech.edu [Nuclear and Radiological Engineering, Georgia Institute of Technology, Atlanta, GA (United States)

    2011-07-01

    The Monte Carlo performance benchmark for detailed power density calculation in a full-size reactor core is organized under the auspices of the OECD NEA Data Bank. It aims at monitoring, over a range of years, the increase in performance, measured in terms of standard deviation and computer time, of Monte Carlo calculations of the power density in small volumes. A short description of the reactor geometry and composition is given. One of the unique features of the benchmark exercise is the possibility for participants to upload results to a web site of the NEA Data Bank, which enables online analysis of results and a graphical display of how near we are to the goal of a detailed power distribution calculation with acceptable statistical uncertainty in an acceptable computing time. First results are discussed, which show that 10 to 100 billion histories must be simulated to reach a standard deviation of a few percent in the estimated power of most of the requested fuel zones. Even when using a large supercomputer, a considerable speedup is still needed to reach the target of 1 hour of computer time. An outlook is given of what to expect from this benchmark exercise over the years. Possible extensions of the benchmark for specific issues relevant in current Monte Carlo calculations for nuclear reactors are also discussed. (author)
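The 10-to-100-billion-histories figure follows from the 1/√N scaling of Monte Carlo statistical uncertainty. As a rough illustration (the function and numbers below are mine, not part of the benchmark specification), one can extrapolate from a pilot run:

```python
def histories_for_target(sigma_ref: float, n_ref: float, sigma_target: float) -> float:
    """Histories needed to reach sigma_target, extrapolating from a reference run.

    Monte Carlo standard deviation scales as 1/sqrt(N), so
    sigma_target = sigma_ref * sqrt(n_ref / n_target).
    """
    return n_ref * (sigma_ref / sigma_target) ** 2

# Illustrative pilot run (numbers invented, not from the benchmark):
# 1e9 histories give a 10% standard deviation in a small fuel zone;
# reaching 2% then takes (10/2)^2 = 25 times as many histories.
n_needed = histories_for_target(sigma_ref=0.10, n_ref=1e9, sigma_target=0.02)
print(f"{n_needed:.1e} histories")  # 2.5e+10, i.e. tens of billions
```

The quadratic cost of halving the standard deviation is what makes the 1-hour target so demanding even on large supercomputers.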

  8. The Monte Carlo performance benchmark test - AIMS, specifications and first results

    International Nuclear Information System (INIS)

    Hoogenboom, J. Eduard; Martin, William R.; Petrovic, Bojan

    2011-01-01

    The Monte Carlo performance benchmark for detailed power density calculation in a full-size reactor core is organized under the auspices of the OECD NEA Data Bank. It aims at monitoring, over a range of years, the increase in performance, measured in terms of standard deviation and computer time, of Monte Carlo calculations of the power density in small volumes. A short description of the reactor geometry and composition is given. One of the unique features of the benchmark exercise is the possibility for participants to upload results to a web site of the NEA Data Bank, which enables online analysis of results and a graphical display of how near we are to the goal of a detailed power distribution calculation with acceptable statistical uncertainty in an acceptable computing time. First results are discussed, which show that 10 to 100 billion histories must be simulated to reach a standard deviation of a few percent in the estimated power of most of the requested fuel zones. Even when using a large supercomputer, a considerable speedup is still needed to reach the target of 1 hour of computer time. An outlook is given of what to expect from this benchmark exercise over the years. Possible extensions of the benchmark for specific issues relevant in current Monte Carlo calculations for nuclear reactors are also discussed. (author)

  9. Availability of Neutronics Benchmarks in the ICSBEP and IRPhEP Handbooks for Computational Tools Testing

    Energy Technology Data Exchange (ETDEWEB)

    Bess, John D.; Briggs, J. Blair; Ivanova, Tatiana; Hill, Ian; Gulliford, Jim

    2017-02-01

    In the past several decades, numerous experiments have been performed worldwide to support reactor operations, measurements, design, and nuclear safety. Those experiments represent an extensive international investment in infrastructure, expertise, and cost, and constitute significantly valuable resources of data supporting past, current, and future research activities. Those valuable assets form the basis for recording, development, and validation of our nuclear methods and integral nuclear data [1]. The loss of these experimental data, which has occurred all too often in recent years, is tragic. The high cost to repeat many of these measurements can be prohibitive, if not impossible, to surmount. Two international projects were developed, and are under the direction of the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD NEA), to address the challenges of not just data preservation but evaluation of the data to determine its merit for modern and future use. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was established to identify and verify comprehensive critical benchmark data sets; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort into a single source of verified benchmark data [2]. Similarly, the International Reactor Physics Experiment Evaluation Project (IRPhEP) was established to preserve integral reactor physics experimental data, including separate or special effects data for nuclear energy and technology applications [3]. Annually, contributors from around the world continue to collaborate in the evaluation and review of select benchmark experiments for preservation and dissemination. The extensively peer-reviewed integral benchmark data can then be utilized to support nuclear design and safety analysts to validate the analytical tools, methods, and data needed for next

  10. Analysis of NEA-NSC PWR Uncontrolled Control Rod Withdrawal at Zero Power Benchmark Cases with NODAL3 Code

    Directory of Open Access Journals (Sweden)

    Tagor Malem Sembiring

    2017-01-01

    Full Text Available The in-house coupled neutronic and thermal-hydraulic (N/T-H) code of BATAN (National Nuclear Energy Agency of Indonesia), NODAL3, based on the few-group neutron diffusion equation in 3-dimensional geometry using the polynomial nodal method, has been verified with static and transient PWR benchmark cases. This paper reports the verification of the NODAL3 code against the NEA-NSC PWR uncontrolled control rod withdrawal at zero power benchmark. The objective of this paper is to determine the accuracy of the NODAL3 code in solving continuous slow and fast reactivity insertions due to the withdrawal of a single control rod bank or a group of banks, while the power and temperature increase is limited by the Doppler coefficient. The benchmark was chosen because many organizations participated using various methods and approximations, so the calculation results of NODAL3 can be compared to other codes’ results. The calculated parameters comprise the steady-state, transient core-averaged, and transient hot-pellet results. The influence of the number of radial and axial nodes was investigated for all cases. The results of the NODAL3 code are in very good agreement with the reference solutions when the numbers of radial and axial nodes are 2 × 2 and 2 × 18 (total axial layers), respectively.

  11. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres

    NARCIS (Netherlands)

    van Lent, W.A.M.; de Beer, Relinde; van Harten, Willem H.

    2010-01-01

    Background Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the

  12. Authentic e-Learning in a Multicultural Context: Virtual Benchmarking Cases from Five Countries

    Science.gov (United States)

    Leppisaari, Irja; Herrington, Jan; Vainio, Leena; Im, Yeonwook

    2013-01-01

    The implementation of authentic learning elements at education institutions in five countries, eight online courses in total, is examined in this paper. The International Virtual Benchmarking Project (2009-2010) applied the elements of authentic learning developed by Herrington and Oliver (2000) as criteria to evaluate authenticity. Twelve…

  13. Benchmarking Alumni Relations in Community Colleges: Findings from a 2015 CASE Survey

    Science.gov (United States)

    Paradise, Andrew

    2016-01-01

    The Benchmarking Alumni Relations in Community Colleges white paper features key data on alumni relations programs at community colleges across the United States. The paper compares results from 2015 and 2012 across such areas as the structure, operations and budget for alumni relations, alumni data collection and management, alumni communications…

  14. Automating Test Activities: Test Cases Creation, Test Execution, and Test Reporting with Multiple Test Automation Tools

    OpenAIRE

    Loke Mun Sei

    2015-01-01

    Software testing has become a mandatory process in assuring the software product quality. Hence, test management is needed in order to manage the test activities conducted in the software test life cycle. This paper discusses on the challenges faced in the software test life cycle, and how the test processes and test activities, mainly on test cases creation, test execution, and test reporting is being managed and automated using several test automation tools, i.e. Jira, ...

  15. The Drill Down Benchmark

    NARCIS (Netherlands)

    P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel

    1998-01-01

    textabstractData Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark - defined here - provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications. It

  16. Benchmarking of thermalhydraulic loop models for lead-alloy-cooled advanced nuclear energy systems. Phase I: Isothermal forced convection case

    International Nuclear Information System (INIS)

    2012-06-01

    Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Scientific Issues of the Fuel Cycle (WPFC) has been established to co-ordinate scientific activities regarding various existing and advanced nuclear fuel cycles, including advanced reactor systems, associated chemistry and flowsheets, development and performance of fuel and materials, and accelerators and spallation targets. The WPFC has different expert groups to cover a wide range of scientific issues in the field of the nuclear fuel cycle. The Task Force on Lead-Alloy-Cooled Advanced Nuclear Energy Systems (LACANES) was created in 2006 to study the thermal-hydraulic characteristics of heavy liquid metal coolant loops. The objectives of the task force are to (1) validate thermal-hydraulic loop models for application to LACANES design analysis in participating organisations, by benchmarking with a set of well-characterised lead-alloy coolant loop test data, (2) establish guidelines for quantifying thermal-hydraulic modelling parameters related to friction and heat transfer by lead-alloy coolant and (3) identify specific issues, either in modelling and/or in loop testing, which need to be addressed via possible future work. Nine participants from seven different institutes participated in the first phase of the benchmark. This report provides details of the benchmark specifications, method and code characteristics, and results of the preliminary study (pressure loss coefficients) and Phase-I. A comparison and analysis of the results will be performed together with Phase-II
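Phase I of such a benchmark amounts to predicting isothermal forced-convection pressure losses around the loop. As a hedged sketch of the kind of hand calculation involved (the correlation choice and every number below are illustrative assumptions, not the benchmark's loop data):

```python
def blasius_friction_factor(re: float) -> float:
    """Blasius correlation for smooth pipes, roughly valid for 4e3 < Re < 1e5."""
    return 0.316 * re ** -0.25

def darcy_pressure_loss(f: float, length: float, diameter: float,
                        density: float, velocity: float) -> float:
    """Darcy-Weisbach pressure loss (Pa) over a straight pipe segment."""
    return f * (length / diameter) * 0.5 * density * velocity ** 2

# Illustrative heavy-liquid-metal-like properties (invented, not loop data):
rho, mu = 1.0e4, 1.8e-3          # density (kg/m3), dynamic viscosity (Pa*s)
v, d, length = 0.3, 0.05, 2.0    # velocity (m/s), pipe diameter and length (m)
re = rho * v * d / mu            # Reynolds number, ~8.3e4 here
dp = darcy_pressure_loss(blasius_friction_factor(re), length, d, rho, v)
```

A real loop model would add form-loss coefficients for bends, valves and area changes on top of this straight-pipe term, which is exactly where participant predictions tend to diverge.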

  17. Technical report on the Fatigue Crack Growth Benchmark based on CEA pipe bending tests

    International Nuclear Information System (INIS)

    2001-07-01

    In order to improve the estimation methods for surface crack propagation through the thickness of components, CEA has proposed a benchmark to members of the IAGE WG, sub-group on Integrity of metal components and structures. The subject is a simple configuration of a pipe containing an axisymmetric notch and submitted to a cyclic bending load. An experimental data set from CEA was used to validate three issues in the topic of Leak Before Break: - Crack initiation, - Crack propagation through the thickness, - Crack penetration. All material and geometrical data necessary for the simulation were given in the proposal, including experimental results. Due to the peculiar complexity of the problem, it was decided to focus the work on a comparison of methodologies, so as to allow participants to tune parameters and adjust their models and tools. This report presents all estimations performed by the participants and collected by CEA. They are compared to the experimental results. An analysis of the procedures used is also proposed. This, associated with the study of the accuracy of the different methodologies, leads to comments and recommendations on the analysis of fatigue crack growth. Participation in the first step was substantial: nine participants proposed analyses, sometimes parametric analyses, to estimate crack growth. The results sorted out three groups of estimation methods that give results in accordance with the experimental ones (these three groups are based on a strain range evaluation and the fatigue curve of the material): - The use of an elastic stress at the notch tip and a fatigue notch concentration factor to determine the strain range. - The use of KI (or an elastic F.E. calculation) and a Neuber rule for the estimation of the strain range at a characteristic distance from the crack tip. - The direct calculation of the strain range at the characteristic distance by an elastic-plastic F.E. calculation.
Only 4 participants have proposed an estimate of the

  18. Benchmarking passenger air transport marketing activities in Vietnam : case company: Etihad Airways

    OpenAIRE

    Kim Nga Nguyen, Thi

    2015-01-01

    Marketing strategy is crucial for businesses operating in highly competitive environments. Especially with the intense competition over international flights in the Vietnamese air travel market, it is important for airlines to adopt superior strategy, in order to incorporate brand presence in the market. Hence, performing benchmarking on marketing strategy for Etihad Airways is timely and necessary. The thesis adopts the combination of inductive and deductive approaches, with the assistan...

  19. TACLeBench: A benchmark collection to support worst-case execution time research

    DEFF Research Database (Denmark)

    Falk, Heiko; Altmeyer, Sebastian; Hellinckx, Peter

    2016-01-01

    -source programs, adapted them to a common coding style, and provide the collection in open-source. The benchmark collection is called TACLeBench and is available from GitHub in version 1.9 at the publication date of this paper. One of the main features of TACLeBench is that all programs are self-contained without...... any dependencies on standard libraries or an operating system....

  20. Benchmark tests of JENDL-3.2 for thermal and fast reactors

    International Nuclear Information System (INIS)

    Takano, Hideki

    1995-01-01

    Benchmark calculations for a variety of thermal and fast reactors have been performed by using the newly evaluated JENDL-3 Version-2 (JENDL-3.2) file. In the thermal reactor calculations for the uranium and plutonium fueled cores of TRX and TCA, the keff and lattice parameters were well predicted. The fast reactor calculations for ZPPR-9 and FCA assemblies showed that the keff, the reactivity worths of Doppler, sodium void and control rod, and the reaction rate distributions were in very good agreement with the experiments. (author)

  1. Benchmark test of nuclear data with pulsed sphere experiment using OKTAVIAN

    International Nuclear Information System (INIS)

    Ichihara, Ch.; Hayashi, S.; Yamamoto, J.; Kimura, I.; Takahashi, A.

    1996-01-01

    Nuclear data files such as JENDL Fusion File, JENDL-3.2, ENDF/B-VI and BROND-2 have been compared to pulsed sphere experiments on 14 samples modeled with the MCNP4A code, for the purpose of benchmarking the data libraries. The results are in good agreement for Li, Cr, Mn, Cu and Mo. Satisfactory results have been obtained for Zr and Nb with JENDL Fusion File and JENDL-3.2. For W, quite good results were obtained with ENDF/B-VI. There is a disagreement for the other samples, such as LiF, TEFLON, Si, Ti and Co

  2. FENDL-3 benchmark test with neutronics experiments related to fusion in Japan

    International Nuclear Information System (INIS)

    Konno, Chikara; Ohta, Masayuki; Takakura, Kosuke; Ochiai, Kentaro; Sato, Satoshi

    2014-01-01

    Highlights: •We have benchmarked FENDL-3.0 with integral experiments with DT neutron sources in Japan. •FENDL-3.0 is at least as accurate as FENDL-2.1 and JENDL-4.0. •Some data in FENDL-3.0 may have problems. -- Abstract: The IAEA supports and promotes the gathering of the best data from evaluated nuclear data libraries for each nucleus involved in fusion reactor applications and compiles these data as FENDL. In 2012, the IAEA released a major update to FENDL, FENDL-3.0, which extends the neutron energy range from 20 MeV to greater than 60 MeV for 180 nuclei. We have benchmarked FENDL-3.0 against in situ and TOF experiments using the DT neutron source at FNS at the JAEA, and TOF experiments using the DT neutron source at OKTAVIAN at Osaka University in Japan. The Monte Carlo code MCNP-5 and the ACE file of FENDL-3.0 supplied by the IAEA were used for the calculations. The results were compared with measured ones and with those obtained using the previous version, FENDL-2.1, and the latest version, JENDL-4.0. It is concluded that FENDL-3.0 is as accurate as or more so than FENDL-2.1 and JENDL-4.0, although some data in FENDL-3.0 may be problematic

  3. Benchmarking Tool Kit.

    Science.gov (United States)

    Canadian Health Libraries Association.

    Nine Canadian health libraries participated in a pilot test of the Benchmarking Tool Kit between January and April, 1998. Although the Tool Kit was designed specifically for health libraries, the content and approach are useful to other types of libraries as well. Used to its full potential, benchmarking can provide a common measuring stick to…

  4. THE APPLICATION OF DATA ENVELOPMENT ANALYSIS METHODOLOGY TO IMPROVE THE BENCHMARKING PROCESS IN THE EFQM BUSINESS MODEL (CASE STUDY: AUTOMOTIVE INDUSTRY OF IRAN

    Directory of Open Access Journals (Sweden)

    K. Shahroudi

    2009-10-01

    Full Text Available This paper reports survey and case study research outcomes on the application of Data Envelopment Analysis (DEA) to the ranking method of the European Foundation for Quality Management (EFQM) Business Excellence Model in Iran’s automotive industry, and on improving the benchmarking process after assessment. Following the global trend, the Iranian industry leaders have introduced the EFQM practice to their supply chain in order to improve the supply base competitiveness during the last four years. A question which arises is whether the EFQM model can be combined with a mathematical model such as DEA in order to generate a new ranking method and develop or facilitate the benchmarking process. The model developed in this paper is simple; however, it provides some new and interesting insights. The paper assesses the usefulness and capability of the DEA technique to derive a new scoring system that can be compared with the classical ranking method of the EFQM business model. We used this method to identify meaningful exemplar companies for each criterion of the EFQM model, and then designed a road map based on realistic targets for each criterion that have currently been achieved by exemplar companies. The research indicates that the DEA approach is a reliable tool for analyzing the latent knowledge of scores generated by conducting self-assessments. The Wilcoxon Rank Sum Test is used to compare the two sets of scores, and the test of hypothesis reveals a meaningful relation between the EFQM and DEA ranking methods. Finally, we drew a road map based on the benchmarking concept using the research results.
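The Wilcoxon rank-sum comparison of two scoring systems can be sketched as follows. This is a minimal normal-approximation implementation with invented score lists, not the study's actual data or code:

```python
import math
from statistics import NormalDist

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.

    Returns (W, z, p), where W is the rank sum of sample x. Ties receive
    average ranks; no tie correction is applied to the variance.
    """
    data = sorted([(v, 0) for v in x] + [(v, 1) for v in y])
    ranks = [0.0] * len(data)
    i = 0
    while i < len(data):                   # assign average ranks to tied values
        j = i
        while j < len(data) and data[j][0] == data[i][0]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + j + 1) / 2.0   # mean of ranks i+1 .. j
        i = j
    n1, n2 = len(x), len(y)
    w = sum(r for r, (_, g) in zip(ranks, data) if g == 0)
    mean = n1 * (n1 + n2 + 1) / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mean) / sd
    p = 2.0 * (1.0 - NormalDist().cdf(abs(z)))
    return w, z, p

# Invented EFQM-style and DEA-style scores for the same hypothetical firms:
w, z, p = rank_sum_test([412, 455, 470, 501, 533], [430, 462, 480, 515, 540])
```

For small samples an exact-distribution test is preferable; the normal approximation is shown here only because it keeps the sketch self-contained.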

  5. Theory Testing Using Case Studies

    DEFF Research Database (Denmark)

    Sørensen, Pernille Dissing; Løkke, Ann-Kristina

    2006-01-01

    design. Finally, we discuss the epistemological logic, i.e., the value to larger research programmes, of such studies and, following Lakatos, conclude that the value of theory-testing case studies lies beyond naïve falsification and in their contribution to developing research programmes in a progressive...

  6. Mean Abnormal Result Rate: Proof of Concept of a New Metric for Benchmarking Selectivity in Laboratory Test Ordering.

    Science.gov (United States)

    Naugler, Christopher T; Guo, Maggie

    2016-04-01

    There is a need to develop and validate new metrics to assess the appropriateness of laboratory test requests. The mean abnormal result rate (MARR) is a proposed measure of ordering selectivity, the premise being that higher mean abnormal rates represent more selective test ordering. As a validation of this metric, we compared the abnormal rate of lab tests with the number of tests ordered on the same requisition. We hypothesized that requisitions with larger numbers of requested tests represent less selective test ordering and would therefore have a lower overall abnormal rate. We examined 3,864,083 tests ordered on 451,895 requisitions and found that the MARR decreased from about 25% if one test was ordered to about 7% if nine or more tests were ordered, consistent with less selectivity when more tests were ordered. We then examined the MARR for community-based testing for 1,340 family physicians and found both a wide variation in MARR and an inverse relationship between the total tests ordered per year per physician and the physician-specific MARR. The proposed metric represents a new utilization metric for benchmarking the relative selectivity of test orders among physicians. © American Society for Clinical Pathology, 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
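The MARR computation itself is straightforward to sketch. The data layout below (a list of per-requisition abnormal-result flags) is a hypothetical representation for illustration, not the study's data model:

```python
from collections import defaultdict

def marr_by_panel_size(requisitions):
    """Abnormal result rate grouped by the number of tests per requisition.

    `requisitions` is a list of requisitions, each a list of booleans
    (True = abnormal result). Returns {n_tests: abnormal_fraction}.
    """
    abnormal, total = defaultdict(int), defaultdict(int)
    for req in requisitions:
        n = len(req)
        abnormal[n] += sum(req)   # True counts as 1
        total[n] += n
    return {n: abnormal[n] / total[n] for n in total}

# Toy data echoing the reported trend: single-test requisitions come back
# abnormal far more often than items on large panels.
reqs = [[True], [False], [True], [False, False, False, False, True]]
rates = marr_by_panel_size(reqs)   # {1: ~0.67, 5: 0.2}
```

Grouping by ordering physician instead of by panel size would give the physician-specific MARR used for benchmarking in the abstract.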

  7. Case studies in ultrasonic testing

    International Nuclear Information System (INIS)

    Prasad, V.; Satheesh, C.; Varde, P.V.

    2015-01-01

    Ultrasonic testing is a widely used Non-Destructive Testing (NDT) method and forms an essential part of the in-service inspection programme of nuclear reactors. The main application of ultrasonic testing is volumetric scanning of weld joints, followed by thickness gauging of pipelines and pressure vessels. Research reactor Dhruva has completed its first in-service inspection programme, in which about 325 weld joints were volumetrically scanned, in addition to thickness gauging of 300 meters of pipelines of various sizes and about 24 pressure vessels. Ultrasonic testing is also used for level measurements, distance measurements, and cleaning and decontamination of tools. Two case studies are brought out in this paper in which ultrasonic testing was used successfully for identification of butterfly valve opening status and the extent of choking in pipelines in Dhruva reactor systems

  8. Benchmarking Investments in Advancement: Results of the Inaugural CASE Advancement Investment Metrics Study (AIMS). CASE White Paper

    Science.gov (United States)

    Kroll, Judith A.

    2012-01-01

    The inaugural Advancement Investment Metrics Study, or AIMS, benchmarked investments and staffing in each of the advancement disciplines (advancement services, alumni relations, communications and marketing, fundraising and advancement management) as well as the return on the investment in fundraising specifically. This white paper reports on the…

  9. Numerical simulation of air distribution in a room with a sidewall jet under benchmark test conditions

    Science.gov (United States)

    Zasimova, Marina; Ivanov, Nikolay

    2018-05-01

    The goal of the study is to validate Large Eddy Simulation (LES) data on mixing ventilation in an isothermal room at conditions of benchmark experiments by Hurnik et al. (2015). The focus is on the accuracy of the mean and rms velocity fields prediction in the quasi-free jet zone of the room with 3D jet supplied from a sidewall rectangular diffuser. Calculations were carried out using the ANSYS Fluent 16.2 software with an algebraic wall-modeled LES subgrid-scale model. CFD results on the mean velocity vector are compared with the Laser Doppler Anemometry data. The difference between the mean velocity vector and the mean air speed in the jet zone, both LES-computed, is presented and discussed.

  10. POLCA-T simulation of OECD/NRC BWR turbine trip benchmark exercise 3 best estimate scenario TT2 test and four extreme scenarios

    International Nuclear Information System (INIS)

    Panayotov, D.

    2004-01-01

    The Westinghouse transient code POLCA-T brings together system thermal-hydraulics plant models and a 3D neutron kinetics core model. The code validation plan includes calculations of the Peach Bottom end-of-cycle 2 turbine trip transients and low-flow stability tests. The paper describes the objectives, method, and results of analyses performed in the final phase of the OECD/NRC Peach Bottom 2 Boiling Water Reactor Turbine Trip Benchmark. A brief overview of the code features, the method of simulation, and the developed 3D core model and system input deck for Peach Bottom 2 is given. The paper presents the results of benchmark exercise 3, the best estimate scenario: coupled 3D core neutron kinetics with system thermal-hydraulics analyses. The sensitivity studies performed cover the SCRAM initiation, carry-under, and decay power. The obtained results, including total power and steam dome, core exit, lower and upper plenum, main steam line and turbine inlet pressures, showed good agreement with measured plant data. Thus the POLCA-T code capabilities for correct simulation of turbine trip transients were proved. The calculations performed and the results obtained for the extreme cases demonstrate the POLCA-T code's wide-range capabilities to simulate transients when scram, steam bypass, and safety and relief valves are not activated. The code is able to handle such transients even when the reactor power and pressure reach values higher than 600% of rated power and 10.8 MPa. (authors)

  11. Performance of exchange-correlation functionals in density functional theory calculations for liquid metal: A benchmark test for sodium

    Science.gov (United States)

    Han, Jeong-Hwan; Oda, Takuji

    2018-04-01

    The performance of exchange-correlation functionals in density-functional theory (DFT) calculations for liquid metals has not been sufficiently examined. In the present study, benchmark tests of the Perdew-Burke-Ernzerhof (PBE), Armiento-Mattsson 2005 (AM05), PBE re-parameterized for solids (PBEsol), and local density approximation (LDA) functionals are conducted for liquid sodium. The pair correlation function, equilibrium atomic volume, bulk modulus, and relative enthalpy are evaluated at 600 K and 1000 K. Compared with the available experimental data, the errors range from -11.2% to 0.0% for the atomic volume, from -5.2% to 22.0% for the bulk modulus, and from -3.5% to 2.5% for the relative enthalpy, depending on the DFT functional. The generalized gradient approximation functionals are superior to the LDA functional, and the PBE and AM05 functionals exhibit the best performance. In addition, we assess whether the error tendencies in liquid simulations are comparable to those in solid simulations, and find that the atomic volume and relative enthalpy performances are comparable between the solid and liquid states but that the bulk modulus performance is not. These benchmark test results indicate that the results of liquid simulations depend significantly on the exchange-correlation functional and that the DFT functional performance in solid simulations can be used to roughly estimate the performance in liquid simulations.
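Ranking functionals by their deviation from experiment, as done here, reduces to a signed-percent-error comparison. A minimal sketch with invented numbers (not the paper's data):

```python
def percent_error(calc: float, ref: float) -> float:
    """Signed percent deviation of a computed value from a reference value."""
    return 100.0 * (calc - ref) / ref

def best_functional(predictions: dict, ref: float) -> str:
    """Name of the functional whose prediction deviates least from the reference."""
    return min(predictions, key=lambda name: abs(percent_error(predictions[name], ref)))

# Invented atomic-volume predictions (arbitrary units), not the paper's data:
volumes = {"PBE": 42.8, "AM05": 42.5, "PBEsol": 40.9, "LDA": 38.2}
print(best_functional(volumes, ref=43.0))  # prints PBE for these numbers
```

The same comparison would be repeated per property (volume, bulk modulus, enthalpy) and per temperature, since a functional that wins on one property can lose on another.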

  12. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    Full Text Available The term benchmarking is encountered in the implementation of total quality management (TQM), or in Indonesian terms holistic quality management, because benchmarking is a tool for seeking ideas and learning from other libraries. Benchmarking is a process of systematic and continuous measurement: measuring and comparing an organization's business processes on an ongoing basis in order to obtain information that can help the organization improve its performance.

  13. Benchmark test of drift-kinetic and gyrokinetic codes through neoclassical transport simulations

    International Nuclear Information System (INIS)

    Satake, S.; Sugama, H.; Watanabe, T.-H.; Idomura, Yasuhiro

    2009-09-01

    Two simulation codes that solve the drift-kinetic or gyrokinetic equation in toroidal plasmas are benchmarked by comparing their simulation results for neoclassical transport. The two codes are the drift-kinetic δf Monte Carlo code (FORTEC-3D) and the gyrokinetic full-f Vlasov code (GT5D), both of which solve the radially global, five-dimensional kinetic equation including the linear Fokker-Planck collision operator. In a tokamak configuration, the neoclassical radial heat flux and the force balance relation, which relates the parallel mean flow to the radial electric field and temperature gradient, are compared between the two codes, and their results are also compared with the local neoclassical transport theory. It is found that the simulation results of the two codes coincide very well over a wide range of the plasma collisionality parameter ν* = 0.01 - 10 and also agree with the theoretical estimations. The time evolution of the radial electric field and particle flux, and the radial profile of the geodesic acoustic mode frequency, also coincide very well. These facts guarantee the capability of GT5D to simulate plasma turbulence transport while including proper neoclassical effects of collisional diffusion and the equilibrium radial electric field. (author)

  14. Testing of ENDF/B cross section data in the Californium-252 neutron benchmark field

    International Nuclear Information System (INIS)

    Mannhart, W.

    1979-01-01

    The fission neutron field of 252Cf presently represents one of the most well-known neutron benchmark fields. For 13 neutron reactions which are of importance in reactor metrology, measurements of spectrum-averaged cross sections, ⟨σ⟩, performed in this neutron field were compared with calculated average cross sections. This comparison allows one to draw conclusions as to the quality of different σ(E) data taken from ENDF/B-IV, from ENDF/B-V, and from recent experiments and used in the calculation of average cross sections. The comparison includes an uncertainty analysis regarding the different uncertainty contributions of ⟨σ⟩, of σ(E), and of the spectral distribution of 252Cf fission neutrons. Additionally, in a few examples, sensitivity studies were carried out. The sensitivity of the spectrum-averaged cross sections to individual characteristics of the σ(E) data, such as normalization factors or shifts in the energy scale, was investigated. Similarly, the sensitivity of ⟨σ⟩ to the spectral distribution of 252Cf was determined. 4 figures, 2 tables
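A spectrum-averaged cross section is ⟨σ⟩ = ∫σ(E)χ(E)dE / ∫χ(E)dE, with χ(E) the fission neutron spectrum. A minimal numerical sketch, using a Maxwellian approximation of the Cf-252 spectrum (T ≈ 1.42 MeV) and a hypothetical threshold reaction rather than any evaluated data:

```python
import math

def chi_cf252(e: float) -> float:
    """Maxwellian approximation of the Cf-252 fission spectrum, T ~ 1.42 MeV."""
    return math.sqrt(e) * math.exp(-e / 1.42)

def spectrum_averaged_xs(sigma, chi, e_min=1e-3, e_max=20.0, n=20000):
    """<sigma> = integral(sigma*chi) / integral(chi), trapezoid rule, E in MeV."""
    h = (e_max - e_min) / n
    num = den = 0.0
    for i in range(n + 1):
        e = e_min + i * h
        w = 0.5 if i in (0, n) else 1.0
        num += w * sigma(e) * chi(e)
        den += w * chi(e)
    return num / den

# Sanity check: a constant cross section averages to itself.
assert abs(spectrum_averaged_xs(lambda e: 2.0, chi_cf252) - 2.0) < 1e-9

# Hypothetical threshold reaction: 0.5 b above 3 MeV, zero below. Only the
# high-energy tail contributes, so the average sits well below 0.5 b.
avg = spectrum_averaged_xs(lambda e: 0.5 if e >= 3.0 else 0.0, chi_cf252)
```

The sensitivity studies in the abstract correspond to perturbing σ(E) (normalization, energy-scale shift) or χ(E) and recomputing this average.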

  15. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views...... are put to the test. The first is a reformist benchmarking cycle where organisations defer to experts to create a benchmark that conforms with the broader system of politico-economic norms. The second is a revolutionary benchmarking cycle driven by expert-activists that seek to contest strong vested...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  16. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... in the suggested benchmarking tool. The study investigates how different dairy farm characteristics influence technical efficiency....

  17. RUNE benchmarks

    DEFF Research Database (Denmark)

    Peña, Alfredo

    This report contains the description of a number of benchmarks with the purpose of evaluating flow models for near-shore wind resource estimation. The benchmarks are designed based on the comprehensive database of observations that the RUNE coastal experiment established from onshore lidar...

  18. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms; efficiency and comprehensive monotonicity characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...... in order to obtain a unique selection...

  19. The impact of a scheduling change on ninth grade high school performance on biology benchmark exams and the California Standards Test

    Science.gov (United States)

    Leonardi, Marcelo

    The primary purpose of this study was to examine the impact of a scheduling change from a trimester 4x4 block schedule to a modified hybrid schedule on student achievement in ninth grade biology courses. This study examined the impact of the scheduling change on student achievement through teacher-created benchmark assessments in Genetics, DNA, and Evolution and on the California Standardized Test in Biology. The secondary purpose of this study was to examine ninth grade biology teacher perceptions of ninth grade biology student achievement. Using a mixed methods research approach, data were collected both quantitatively and qualitatively as aligned to the research questions. Quantitative methods included gathering data from departmental benchmark exams and the California Standardized Test in Biology and conducting multiple analysis of covariance and analysis of covariance to determine significant differences. Qualitative methods included journal-entry questions and focus group interviews. The results revealed a statistically significant increase in scores on both the DNA and Evolution benchmark exams following the change in scheduling format. The scheduling change was responsible for 1.5% of the increase in DNA benchmark scores and 2% of the increase in Evolution benchmark scores. The results revealed a statistically significant decrease in scores on the Genetics benchmark exam as a result of the scheduling change, with the scheduling change responsible for 1% of the decrease in Genetics benchmark scores. The results also revealed a statistically significant increase in scores on the CST Biology exam, with the scheduling change responsible for 0.7% of the increase in CST Biology scores. Results of the focus group discussions indicated that all teachers preferred the modified hybrid schedule over the trimester schedule and that it improved student achievement.
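The "responsible for X% of the increase" figures read like proportions of variance explained by the scheduling factor in the ANCOVA models; one plausible reading is partial eta squared. A hedged sketch with hypothetical sums of squares (not the study's data):

```python
def partial_eta_squared(ss_effect, ss_error):
    """Proportion of variance attributed to an effect in an ANCOVA:
    eta_p^2 = SS_effect / (SS_effect + SS_error). The study's '2% of the
    increase' style figures are consistent with this statistic, though the
    abstract does not name it."""
    return ss_effect / (ss_effect + ss_error)

# hypothetical sums of squares for the Evolution benchmark ANCOVA
eta_p2 = partial_eta_squared(ss_effect=4.0, ss_error=196.0)
print(round(eta_p2, 2))  # 0.02, i.e. 2% of variance
```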

  20. A benchmarking study

    Directory of Open Access Journals (Sweden)

    H. Groessing

    2015-02-01

    A benchmark study for permeability measurement is presented. Past studies by other research groups, which focused on the reproducibility of 1D permeability measurements, showed high standard deviations of the obtained permeability values (25%), even though a defined test rig with required specifications was used. Within this study, the reproducibility of capacitive in-plane permeability testing system measurements was benchmarked by comparing results from two research sites using this technology. The reproducibility was compared using a glass-fibre woven textile and a carbon-fibre non-crimp fabric (NCF). These two material types were taken into consideration due to the different electrical properties of glass and carbon with respect to the dielectric capacitive sensors of the permeability measurement systems. In order to determine the unsaturated permeability characteristics as a function of fibre volume content, the measurements were executed at three different fibre volume contents, with five repetitions each. It was found that the stability and reproducibility of the presented in-plane permeability measurement system are very good in the case of the glass-fibre woven textiles. This is true for the comparison of the repetition measurements as well as for the comparison between the two different permeameters. These positive results were confirmed by comparison to permeability values of the same textile obtained with an older-generation permeameter applying the same measurement technology. It was also shown that a correct determination of the grammage and the material density is crucial for correct correlation of measured permeability values and fibre volume contents.

  1. On the feasibility of using emergy analysis as a source of benchmarking criteria through data envelopment analysis: A case study for wind energy

    International Nuclear Information System (INIS)

    Iribarren, Diego; Vázquez-Rowe, Ian; Rugani, Benedetto; Benetto, Enrico

    2014-01-01

    The definition of criteria for the benchmarking of similar entities is often a critical issue in analytical studies because of the multiplicity of criteria susceptible to be taken into account. This issue can be aggravated by the need to handle multiple data for multiple facilities. This article presents a methodological framework, named the Em + DEA method, which combines emergy analysis with Data Envelopment Analysis (DEA) for the ecocentric benchmarking of multiple resembling entities (i.e., multiple decision making units or DMUs). Provided that the life-cycle inventories of these DMUs are available, an emergy analysis is performed through the computation of seven different indicators, which refer to the use of fossil, metal, mineral, nuclear, renewable energy, water and land resources. These independent emergy values are then implemented as inputs for DEA computation, thus providing operational emergy-based efficiency scores and, for the inefficient DMUs, target emergy flows (i.e., feasible emergy benchmarks that would turn inefficient DMUs into efficient ones). The use of the Em + DEA method is exemplified through a case study of wind energy farms. The potential use of CED (cumulative energy demand) and CExD (cumulative exergy demand) indicators as alternative benchmarking criteria to emergy is discussed. The combined use of emergy analysis with DEA is proven to be a valid methodological approach to provide benchmarks oriented towards the optimisation of the life-cycle performance of a set of multiple similar facilities, not being limited to the operational traits of the assessed units. - Highlights: • Combined emergy and DEA method to benchmark multiple resembling entities. • Life-cycle inventory, emergy analysis and DEA as key steps of the Em + DEA method. • Valid ecocentric benchmarking approach proven through a case study of wind farms. • Comparison with life-cycle energy-based benchmarking criteria (CED/CExD + DEA). • Analysts and decision and policy
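Full DEA solves a linear program per DMU over all seven emergy inputs; as an illustration only, in the single-input, single-output special case the input-oriented CCR efficiency reduces to a ratio against the best-performing peer. A toy sketch with hypothetical wind-farm values (the farm names and figures are invented, not the case study's data):

```python
def ccr_efficiency_single(inputs, outputs):
    """Input-oriented CCR efficiency, single-input/single-output special case:
    eff_j = (y_j / x_j) / max_k (y_k / x_k).
    The full Em + DEA method instead solves a linear program per DMU over
    seven emergy inputs; this one-dimensional ratio is only a sketch of the idea."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# hypothetical wind farms: emergy input (sej) vs electricity output (kWh)
emergy = [100.0, 80.0, 120.0]
power = [50.0, 48.0, 40.0]
eff = ccr_efficiency_single(emergy, power)
print([round(e, 3) for e in eff])  # farm 2 (0.6 kWh/sej) is the efficient peer
```

Inefficient DMUs can then be assigned target input flows by scaling their emergy input by their efficiency score, mirroring the "target emergy flows" of the abstract.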

  2. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    International Nuclear Information System (INIS)

    Bess, John D.; Fujimoto, Nozomu

    2014-01-01

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR, as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  3. Benchmarking the Habits and Behaviours of Successful Students: A Case Study of Academic-Business Collaboration

    Directory of Open Access Journals (Sweden)

    Elizabeth Archer

    2014-02-01

    Student success and retention are primary goals of higher education institutions across the world. The cost of student failure and dropout in higher education is multifaceted, including, amongst other things, the loss of revenue, prestige, and stakeholder trust for both institutions and students. Interventions to address this are complex and varied. While the dominant thrust has been to investigate academic and non-academic risk factors, thus applying a “risk” lens, equal attention should be given to exploring the characteristics of successful students, which expands the focus to include “requirements for success”. Based on a socio-critical model for understanding student success and retention, the University of South Africa (Unisa) initiated a pilot project to benchmark successful students’ habits and behaviours using a tool employed in business settings, namely Shadowmatch®. The original focus was on finding a theoretically valid measure of habits and behaviours with which to examine the critical aspect of student agency in the socio-critical model. Although this was not the focus of the pilot, concerns regarding using a commercial tool in an academic setting overshadowed the process. This paper provides insights into how academic-business collaboration could allow an institution to be more dynamic and flexible in supporting its student population.

  4. Strategic behavior and international benchmarking for monopoly price regulation. The case of Mexico

    Energy Technology Data Exchange (ETDEWEB)

    Brito, Dagobert L. [Rice Univ., Houston, TX (US). Dept. of Economics and Baker Inst. (MS-22); Rosellon, Juan [Centro de Investigacion y Docencia Economicas (CIDE), Mexico, D.F. (Mexico); Deutsches Inst. fuer Wirtschaftsforschung, Berlin (Germany)

    2010-07-01

    This paper looks into various models that address strategic behavior in the supply of gas by the Mexican monopoly Pemex. The paper has three very strong technical results. First, the netback pricing rule for the price of domestic natural gas (based on a Houston benchmark price) leads to discontinuities in Pemex's revenue function. Second, having Pemex pay for the gas it uses and the gas it flares increases the value of the Lagrange multiplier associated with the gas processing constraint. Third, if the gas processing constraint is binding, having Pemex pay for the gas it uses and flares does not change the short run optimal solution for the optimization problem, so it will have no impact on short-run behavior. These results imply three clear policy recommendations. The first is that the arbitrage point be fixed by the amount of gas Pemex has the potential to supply in the absence of processing and gathering constraints. The second is that Pemex be charged for the gas it uses in production and the gas it flares. The third is that investment in gas processing and pipeline should be in a separate account from other Pemex investment. (orig.)

  5. Modernization at the Y-12 National Security Complex: A Case for Additional Experimental Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Thornbury, M. L. [Y-12 National Security Complex, Oak Ridge, TN (United States); Juarez, C. [Y-12 National Security Complex, Oak Ridge, TN (United States); Krass, A. W. [Y-12 National Security Complex, Oak Ridge, TN (United States)

    2017-08-14

    Efforts are underway at the Y-12 National Security Complex (Y-12) to modernize the recovery, purification, and consolidation of un-irradiated, highly enriched uranium metal. Successful integration of advanced technology such as Electrorefining (ER) eliminates many of the intermediate chemistry systems and processes that are the current and historical basis of the nuclear fuel cycle at Y-12. The cost of operations, the inventory of hazardous chemicals, and the volume of waste are significantly reduced by ER. It also introduces unique material forms and compositions related to the chemistry of chloride salts for further consideration in safety analysis and engineering. The work herein briefly describes recent investigations of nuclear criticality for 235UO2Cl2 (uranyl chloride) and 6LiCl (lithium chloride) in aqueous solution. Of particular interest is the minimum critical mass of highly enriched uranium as a function of the molar ratio of 6Li to 235U. The work herein also briefly describes recent investigations of nuclear criticality for 235U metal reflected by salt mixtures of 6LiCl or 7LiCl (lithium chloride), KCl (potassium chloride), and 235UCl3 or 238UCl3 (uranium tri-chloride). Computational methods for analysis of nuclear criticality safety and published nuclear data are employed in the absence of directly relevant experimental criticality benchmarks.

  6. Benchmarking and the laboratory

    Science.gov (United States)

    Galloway, M; Nadin, L

    2001-01-01

    This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. Key Words: benchmarking • pathology PMID:11477112

  7. Benchmarking infrastructure for mutation text mining.

    Science.gov (United States)

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
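The performance metrics that the infrastructure's SPARQL queries compute over RDF annotations are the usual precision, recall, and F1 over sets of gold versus predicted annotations. A minimal stand-alone sketch (the mutation-grounding pairs below are hypothetical, not taken from the corpus):

```python
def prf(gold, predicted):
    """Precision/recall/F1 over annotation sets; the same metrics the
    benchmarking infrastructure derives with SPARQL over RDF-encoded
    annotations, computed here directly for illustration."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)  # true positives: annotations found in both sets
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# hypothetical mutation mentions grounded to (protein accession, substitution)
gold = {("P38398", "C61G"), ("P38398", "M1775R"), ("P04637", "R175H")}
pred = {("P38398", "C61G"), ("P04637", "R175H"), ("P04637", "R273H")}
p, r, f = prf(gold, pred)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.67 0.67 0.67
```

Representing the same sets as RDF triples and grouping with SPARQL aggregates yields identical numbers, which is why no programming is needed on the consumer side.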

  8. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600

  9. OECD/NRC Benchmark Based on NUPEC PWR Sub-channel and Bundle Test (PSBT). Volume I: Experimental Database and Final Problem Specifications

    International Nuclear Information System (INIS)

    Rubin, A.; Schoedel, A.; Avramova, M.; Utsuno, H.; Bajorek, S.; Velazquez-Lozada, A.

    2012-01-01

    The need to refine models for best-estimate calculations, based on good-quality experimental data, has been expressed in many recent meetings in the field of nuclear applications. The needs arising in this respect should not be limited to the currently available macroscopic methods but should be extended to next-generation analysis techniques that focus on more microscopic processes. One of the most valuable databases identified for thermal-hydraulics modelling was developed by the Nuclear Power Engineering Corporation (NUPEC), Japan, which includes sub-channel void fraction and departure from nucleate boiling (DNB) measurements in a representative Pressurised Water Reactor (PWR) fuel assembly. Part of this database has been made available for this international benchmark activity entitled 'NUPEC PWR Sub-channel and Bundle Tests (PSBT) benchmark'. This international project has been officially approved by the Japanese Ministry of Economy, Trade, and Industry (METI) and the US Nuclear Regulatory Commission (NRC), and endorsed by the OECD/NEA. The benchmark team has been organised based on the collaboration between Japan and the USA. A large number of international experts have agreed to participate in this programme. The fine-mesh, high-quality sub-channel void fraction and departure from nucleate boiling data encourage advancement in understanding and modelling complex flow behaviour in real bundles. Considering that the present theoretical approach is relatively immature, the benchmark specification is designed so that it will systematically assess and compare the participants' analytical models on the prediction of detailed void distributions and DNB. The development of truly mechanistic models for DNB prediction is currently underway. The benchmark problem includes both macroscopic and microscopic measurement data. In this context, the sub-channel grade void fraction data are regarded as the macroscopic data and the digitised computer graphic images are the

  10. Elevated-temperature benchmark tests of simply supported beams and circular plates subjected to time-varying loadings

    International Nuclear Information System (INIS)

    Corum, J.M.; Richardson, M.; Clinard, J.A.

    1977-01-01

    This report presents the measured elastic-plastic-creep responses of eight simply supported type 304 stainless steel beams and circular plates that were subjected to time-varying loadings at elevated temperature. The tests were performed to provide experimental benchmark problem data suitable for assessing inelastic analysis methods and for validating computer programs. Beams and plates exhibit the essential features of inelastic structural behavior; yet they are relatively simple and the experimental results are generally easy to interpret. The stress fields are largely uniaxial in beams, while multiaxial effects are introduced in plates. The specimens tested were laterally loaded at the center and subjected to either a prescribed load or a center deflection history. The specimens were machined from a common well-characterized heat of material, and all the tests were performed at a temperature of 593°C (1100°F). Test results are presented in terms of the load and center deflection behaviors, which typify the overall structural behavior. Additional deflection data, as well as strain gage results and mechanical properties data for the beam and plate material, are provided in the appendices

  11. The OECD/NRC BWR full-size fine-mesh bundle tests benchmark (BFBT)-general description

    International Nuclear Information System (INIS)

    Sartori, Enrico; Hochreiter, L.E.; Ivanov, Kostadin; Utsuno, Hideaki

    2004-01-01

    The need to refine models for best-estimate calculations based on good-quality experimental data has been expressed in many recent meetings in the field of nuclear applications. The needs arising in this respect should not be limited to currently available macroscopic approaches but should be extended to next-generation approaches that focus on more microscopic processes. One of the most valuable databases identified for thermal-hydraulics modelling was developed by the Nuclear Power Engineering Corporation (NUPEC). Part of this database will be made available for an international benchmark exercise. These fine-mesh, high-quality data encourage advancement in the insufficiently developed field of two-phase flow theory. Considering that the present theoretical approach is relatively immature, the benchmark specification is designed so that it will systematically assess and compare the participants' numerical models on the prediction of detailed void distributions and critical powers. The development of truly mechanistic models for critical power prediction is currently underway. These innovative models should include elementary processes such as void distributions, droplet deposition, liquid film entrainment, etc. The benchmark problem includes both macroscopic and microscopic measurement data. In this context, the sub-channel grade void fraction data are regarded as the macroscopic data, and the digitized computer graphic images are the microscopic data. The proposed benchmark consists of two parts (phases), each consisting of different exercises. Phase 1, void distribution benchmark: Exercise 1, steady-state sub-channel grade benchmark; Exercise 2, steady-state microscopic grade benchmark; Exercise 3, transient macroscopic grade benchmark. Phase 2, critical power benchmark: Exercise 1, steady-state benchmark; Exercise 2, transient benchmark. (author)

  12. MFTF TOTAL benchmark

    International Nuclear Information System (INIS)

    Choy, J.H.

    1979-06-01

    A benchmark of the TOTAL data base management system as applied to the Mirror Fusion Test Facility (MFTF) data base was implemented and run in February and March of 1979. The benchmark was run on an Interdata 8/32 and involved the following tasks: (1) data base design, (2) data base generation, (3) data base load, and (4) develop and implement programs to simulate MFTF usage of the data base

  13. Accelerated Life Structural Benchmark Testing for a Stirling Convertor Heater Head

    Science.gov (United States)

    Krause, David L.; Kantzos, Pete T.

    2006-01-01

    For proposed long-duration NASA Space Science missions, the Department of Energy, Lockheed Martin, Infinia Corporation, and NASA Glenn Research Center are developing a high-efficiency, 110 W Stirling Radioisotope Generator (SRG110). A structurally significant limit state for the SRG110 heater head component is creep deformation induced at high material temperature and low stress level. Conventional investigations of creep behavior adequately rely on experimental results from uniaxial creep specimens, and a wealth of creep data is available for the Inconel 718 material of construction. However, the specified atypical thin heater head material is fine-grained with a heat treatment that limits precipitate growth, and little creep property data for this microstructure is available in the literature. In addition, the geometry and loading conditions apply a multiaxial stress state on the component, far from the conditions of uniaxial testing. For these reasons, an extensive experimental investigation is ongoing to aid in accurately assessing the durability of the SRG110 heater head. This investigation supplements uniaxial creep testing with pneumatic testing of heater head-like pressure vessels at design temperature with stress levels ranging from approximately the design stress to several times that. This paper presents experimental results, post-test microstructural analyses, and conclusions for four higher-stress, accelerated life tests. Analysts are using these results to calibrate deterministic and probabilistic analytical creep models of the SRG110 heater head.

  14. Case-mix adjustment approach to benchmarking prevalence rates of nosocomial infection in hospitals in Cyprus and Greece.

    Science.gov (United States)

    Kritsotakis, Evangelos I; Dimitriadis, Ioannis; Roumbelaki, Maria; Vounou, Emelia; Kontou, Maria; Papakyriakou, Panikos; Koliou-Mazeri, Maria; Varthalitis, Ioannis; Vrouchos, George; Troulakis, George; Gikas, Achilleas

    2008-08-01

    To examine the effect of heterogeneous case mix for a benchmarking analysis and interhospital comparison of the prevalence rates of nosocomial infection. Cross-sectional survey. Eleven hospitals located in Cyprus and in the region of Crete in Greece. The survey included all inpatients in the medical, surgical, pediatric, and gynecology-obstetrics wards, as well as those in intensive care units. Centers for Disease Control and Prevention criteria were used to define nosocomial infection. The information collected for all patients included demographic characteristics, primary admission diagnosis, Karnofsky functional status index, Charlson comorbidity index, McCabe-Jackson severity of illness classification, use of antibiotics, and prior exposures to medical and surgical risk factors. Outcome data were also recorded for all patients. Case mix-adjusted rates were calculated by using a multivariate logistic regression model for nosocomial infection risk and an indirect standardization method. The overall prevalence rate of nosocomial infection was 7.0% (95% confidence interval, 5.9%-8.3%) among 1,832 screened patients. Significant variation in nosocomial infection rates was observed across hospitals (range, 2.2%-9.6%). Logistic regression analysis indicated that the mean predicted risk of nosocomial infection across hospitals ranged from 3.7% to 10.3%, suggesting considerable variation in patient risk. Case mix-adjusted rates ranged from 2.6% to 12.4%, and the relative ranking of hospitals was affected by case-mix adjustment in 8 cases (72.8%). Nosocomial infection was significantly and independently associated with mortality (adjusted odds ratio, 3.6 [95% confidence interval, 2.1-6.1]). The first attempt to rank the risk of nosocomial infection in these regions demonstrated the importance of accounting for heterogeneous case mix before attempting interhospital comparisons.
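The indirect standardization step described here can be sketched as follows: a standardized infection ratio (observed cases over the sum of each patient's model-predicted risk) is multiplied by the pooled prevalence to give the case-mix-adjusted rate. The numbers below are hypothetical, not the survey's data:

```python
def indirectly_standardized_rate(observed, predicted_risks, overall_rate):
    """Case-mix adjustment by indirect standardization:
    SIR = observed / expected, where 'expected' sums the per-patient risks
    predicted by the logistic regression model; the adjusted rate is
    SIR * overall (pooled) rate."""
    expected = sum(predicted_risks)
    sir = observed / expected
    return sir * overall_rate

# hypothetical hospital: 8 infections among 100 patients whose model-predicted
# risks sum to 5.0 expected infections; pooled prevalence is 7.0%
adj = indirectly_standardized_rate(observed=8, predicted_risks=[0.05] * 100, overall_rate=7.0)
print(round(adj, 1))  # 11.2 (% adjusted prevalence)
```

Because "expected" depends on each hospital's own patient mix, two hospitals with the same crude rate can rank differently after adjustment, which is exactly the re-ranking effect the survey reports.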

  15. Definition and Analysis of Heavy Water Reactor Benchmarks for Testing New Wims-D Libraries; Definicion y Analisis de Benchmarks de Reactores de Agua Pesada para Pruebas de Nuevas Bibliotecas de Datos Wims-D

    Energy Technology Data Exchange (ETDEWEB)

    Leszczynski, Francisco [Comision Nacional de Energia Atomica, Centro Atomico Bariloche (Argentina)

    2000-07-01

    This work is part of the IAEA-WIMS Library Update Project (WLUP). A group of heavy water reactor benchmarks was selected for testing new WIMS-D libraries, including calculations with the WIMSD5B program and the analysis of results. These benchmarks cover a wide variety of reactors and conditions, from fresh fuels to high burnup, and from natural to enriched uranium. Besides, each benchmark includes variations in lattice pitch and in coolants (normally heavy water and void). Multiplication factors with critical experimental bucklings and other parameters are calculated and compared with experimental reference values. The WIMS libraries used for the calculations were generated with basic data from JEF-2.2 Rev. 3 (JEF) and ENDF/B-VI Release 5 (E6). Results obtained with the WIMS-86 (W86) library, included with the WIMSD5B package, from Winfrith, UK, with adjusted data, are also included to show the improvements obtained with the new, non-adjusted libraries. The calculations with WIMSD5B were made with two methods (input program options): PIJ (two-dimensional collision probability method) and DSN (one-dimensional Sn method, with homogenization of materials by ring). The general conclusions are: the library based on JEF data and the DSN method give the best results, which on average are acceptable.

  16. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
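The tier-1 screening rule described above amounts to a hazard-quotient comparison: a contaminant is retained as a COPC when its measured concentration exceeds the NOAEL-based benchmark. A minimal sketch with hypothetical concentration and benchmark values (illustrative units, not the report's tabulated benchmarks):

```python
def screen_contaminants(concentrations, benchmarks):
    """Tier-1 screening assessment: retain a contaminant as a contaminant of
    potential concern (COPC) when hazard quotient HQ = concentration / benchmark
    exceeds 1; contaminants below their benchmark are excluded from further
    consideration."""
    retained = {}
    for chem, conc in concentrations.items():
        hq = conc / benchmarks[chem]
        if hq > 1.0:
            retained[chem] = round(hq, 2)
    return retained

# hypothetical surface-water concentrations and benchmarks (mg/L)
conc = {"cadmium": 0.004, "zinc": 0.30, "selenium": 0.002}
bench = {"cadmium": 0.002, "zinc": 0.50, "selenium": 0.005}
copcs = screen_contaminants(conc, bench)
print(copcs)  # only cadmium exceeds its benchmark (HQ = 2.0)
```

In the second tier, the same benchmarks would be weighed alongside toxicity tests, biota surveys, body burdens, and biomarkers rather than used as a pass/fail criterion.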

  17. Thermal reactor benchmark testing of CENDL-2 and ENDF/B-6

    International Nuclear Information System (INIS)

    Liu Guisheng

    1996-01-01

    In order to test CENDL-2, ten homogeneous and eight heterogeneous thermal assemblies were used. Two 123-group cross-section libraries, based on CENDL-2 and ENDF/B-6 respectively, were generated with the nuclear data processing system NSLINK. The calculations of resonance self-shielding, cell spectra, cell reaction rate ratios and effective multiplication factors (K-eff) of these assemblies were performed with the modified PASC-1 code system. The results calculated with CENDL-2 show excellent agreement with the corresponding experimental values. However, for some assemblies the K-eff values calculated with ENDF/B-6 data are underestimated. (7 tabs.)

  18. Thermal reactor benchmark testing of CENDL-2 and ENDF/B-6

    Energy Technology Data Exchange (ETDEWEB)

    Guisheng, Liu [Chinese Nuclear Data Center, Beijing, BJ (China)

    1996-06-01

    In order to test CENDL-2, ten homogeneous and eight heterogeneous thermal assemblies were used. Two 123-group cross-section libraries, based on CENDL-2 and ENDF/B-6 respectively, were generated with the nuclear data processing system NSLINK. The calculations of resonance self-shielding, cell spectra, cell reaction rate ratios and effective multiplication factors (K-eff) of these assemblies were performed with the modified PASC-1 code system. The results calculated with CENDL-2 show excellent agreement with the corresponding experimental values. However, for some assemblies the K-eff values calculated with ENDF/B-6 data are underestimated. (7 tabs.).

  19. Testing of the IRDF-90 cross-section library in benchmark neutron spectra

    International Nuclear Information System (INIS)

    Nolthenius, H.J.; Zsolnay, E.M.; Szondi, E.J.

    1993-09-01

    The new version of the International Reactor Dosimetry File IRDF-90 (called ''Version April 1993'') has been tested by calculation of average cross-sections and their uncertainties in a coarse three energy group structure and by neutron spectrum adjustments in reference neutron spectra. This paper presents the results obtained and compares them with the corresponding ones of the old IRDF-85 and with the data of the Nuclear Data Guide for Reactor Neutron Metrology. The applicability of the new library in the field of neutron metrology is discussed. (orig.)

  20. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  1. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...
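
    In the special case of one input and one output, the DEA (CCR) efficiency of each operator reduces to its output/input ratio divided by the best ratio in the sample; the general multi-input case requires solving a linear program per unit. A sketch of the one-input/one-output case, with hypothetical operator data:

```python
# Minimal DEA sketch for the one-input/one-output case: CCR efficiency of
# each distribution system operator is its output/input ratio normalised by
# the best ratio observed. Operator names and figures are invented.

operators = {            # name: (operating cost, energy delivered)
    "DSO-A": (100.0, 50.0),
    "DSO-B": (120.0, 72.0),
    "DSO-C": (90.0, 45.0),
}

best = max(out / inp for inp, out in operators.values())
efficiency = {name: (out / inp) / best
              for name, (inp, out) in operators.items()}

for name, eff in sorted(efficiency.items()):
    print(f"{name}: {eff:.3f}")  # 1.000 marks the efficient frontier
```

    A regulator would then use such scores (from the full multi-dimensional model) to set the allowed revenue of inefficient operators relative to the frontier.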

  2. Spanish participation in the Benchmark of the OECD of CFDs applied to nuclear safety: the test of Tee Junction

    International Nuclear Information System (INIS)

    Abdelaziz Essa, M. A.; Chiva, S.; Munoz-Cobo, J. L.

    2011-01-01

    In this paper we show the main results obtained by the groups of the universities UPV and UJI (GITH, Inter-University Group on Thermal-Hydraulics) in the modelling of the OECD benchmark on the T-junction performed during the year 2010. This benchmark consisted in predicting, using CFD codes, the velocity and temperature distributions downstream of the junction. The prediction of the velocity and temperature fluctuations, especially the power spectral density of the temperature fluctuations, was also considered an important issue. The results obtained in the benchmark by the GITH group were very good.

  3. Benchmarking ENDF/B-VII.0

    International Nuclear Information System (INIS)

    Marck, Steven C. van der

    2006-01-01

    benchmarks deviates only 0.017% from the measured benchmark value. Moreover, no clear trends (with e.g. enrichment, lattice pitch, or spectrum) have been observed. Also for fast-spectrum benchmarks, both for intermediately and highly enriched uranium and for plutonium, clear improvements are apparent from the calculations. The results for bare assemblies have improved, as well as those with a depleted or natural uranium reflector. On the other hand, the results for plutonium solutions (PU-SOL-THERM) are still high, on average (over 120 benchmarks) by roughly 0.6%. Furthermore, there is still a bias for a range of benchmarks based on cores in the Zero Power Reactor (ANL) containing sizable amounts of tungsten. The results for the fusion shielding benchmarks have not changed significantly for most materials, compared to ENDF/B-VI.8. The delayed neutron testing shows that the values for both thermal and fast spectrum cases are now well predicted, which is an improvement when compared with ENDF/B-VI.8

  4. RETRAN-3D analysis of the base case and the four extreme cases of the OECD/NRC Peach Bottom 2 Turbine Trip benchmark

    International Nuclear Information System (INIS)

    Barten, Werner; Coddington, Paul; Ferroukhi, Hakim

    2006-01-01

    This paper presents the results of RETRAN-3D calculations of the base case and the four extreme cases of phase 3 of the OECD/NRC Peach Bottom 2 Turbine Trip benchmark for coupled thermal-hydraulic and neutronic codes. The PSI-RETRAN-3D model gives good agreement with the measured data for the base case. In addition to the base case, the analysis of the extreme cases provides further understanding of the reactor behaviour, which results from the dynamic coupling of the whole system, i.e., the interaction between the steam line and vessel flows, the pressure, the Doppler, void and control reactivities, and the power. For the extreme cases without scram, the bank of safety relief valves is able to mitigate the effects of the turbine trip for short times. The 3-D nature of the core power distribution has been investigated by analysing the power density in the different thermal-hydraulic channels. In all cases, prior to the reactor scram the course of the power is similar in all channels, with differences of the order of a few percent, showing that, by and large, the core acts in a coherent manner. At the time of maximum power, the axial power distribution in the different channels is increased at the core centre with respect to the distribution at time zero, by an amount that differs from channel to channel

  5. Calculations of the IAEA-CRP-6 Benchmark Cases by Using the ABAQUS FE Model for a Comparison with the COPA Results

    International Nuclear Information System (INIS)

    Cho, Moon-Sung; Kim, Y. M.; Lee, Y. W.; Jeong, K. C.; Kim, Y. K.; Oh, S. C.

    2006-01-01

    The fundamental design for a gas-cooled reactor relies on an understanding of the behavior of coated particle fuel. KAERI, which has been carrying out the Korean VHTR (Very High Temperature modular gas cooled Reactor) project since 2004, is developing a fuel performance analysis code for a VHTR named COPA (COated Particle fuel Analysis). COPA predicts temperatures, stresses, fission gas release and failure probabilities of coated particle fuel under normal operating conditions. Validation of COPA during its development is realized partly by participation in the benchmark section of the international CRP-6 program led by the IAEA, which provides comprehensive benchmark problems and analysis results obtained from the CRP-6 member countries. Apart from the validation effort through CRP-6, a validation of COPA was attempted by comparing its benchmark results with the visco-elastic solutions obtained from ABAQUS code calculations for the same CRP-6 TRISO coated particle benchmark problems involving creep, swelling, and pressure. The study presents the calculation results of IAEA-CRP-6 benchmark cases 5 through 7, obtained with the ABAQUS FE model, for comparison with the COPA results

  6. Benchmark simulation model no 2: general protocol and exploratory case studies

    DEFF Research Database (Denmark)

    Jeppsson, U.; Pons, M.N.; Nopens, I.

    2007-01-01

    and digester models, the included temperature dependencies and the reject water storage. BSM2-implementations are now available in a wide range of simulation platforms and a ring test has verified their proper implementation, consistent with the BSM2 definition. This guarantees that users can focus...

  7. Benchmark of the SixTrack-Fluka Active Coupling Against the SPS Scrapers Burst Test

    CERN Multimedia

    Mereghetti, A; Cerutti, F

    2014-01-01

    The SPS scrapers are a key ingredient for the clean injection into the LHC: they cut off halo particles quite close to the beam core (e.g. 3.5 sigma) just before extraction, to minimise the risk for quenches. The improved beam parameters as envisaged by the LHC Injectors Upgrade (LIU) Project required a revision of the present system, to assess its suitability and robustness. In particular, a burst (i.e. endurance) test of the scraper blades has been carried out, with the whole bunch train being scraped at the centre (worst working conditions). In order to take into account the effect of betatron and longitudinal beam dynamics on energy deposition patterns, and nuclear and Coulomb scattering in the absorbing medium onto loss patterns, the SixTrack and Fluka codes have been coupled, profiting from the best of the refined physical models they respectively embed. The coupling envisages an active exchange of tracked particles between the two codes at each turn, and an on-line aperture check in SixTrack, in order ...

  8. Coarse-graining polymers with the MARTINI force-field: polystyrene as a benchmark case

    DEFF Research Database (Denmark)

    Rossi, G.; Monticelli, L.; Puisto, S. R.

    2011-01-01

    We hereby introduce a new hybrid thermodynamic-structural approach to the coarse-graining of polymers. The new model is developed within the framework of the MARTINI force-field (Marrink et al., J. Phys. Chem. B, 2007, 111, 7812), which uses mainly thermodynamic properties as targets...... of microseconds. Finally, we tested our model in dilute conditions. The collapse of the polymer chains in a bad solvent and the swelling in a good solvent could be reproduced....
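
    The core geometric step of any such coarse-graining is the mapping of atom groups onto beads. A minimal sketch, assuming a MARTINI-like grouping of about four heavy atoms per bead placed at their centre of mass (the chain, masses, and grouping below are illustrative, not the published polystyrene mapping):

```python
# Sketch of a polymer coarse-graining map: consecutive groups of atoms are
# lumped into one bead placed at their centre of mass. The 4-atoms-per-bead
# choice and the toy coordinates are assumptions for illustration.

def centre_of_mass(atoms):
    """atoms: list of (mass, (x, y, z)) tuples."""
    m_tot = sum(m for m, _ in atoms)
    return tuple(sum(m * r[i] for m, r in atoms) / m_tot for i in range(3))

def coarse_grain(atoms, atoms_per_bead=4):
    """Map consecutive atom groups onto CG beads at their centres of mass."""
    return [centre_of_mass(atoms[i:i + atoms_per_bead])
            for i in range(0, len(atoms), atoms_per_bead)]

# A toy 8-atom chain of unit masses spaced 1.0 apart along x.
chain = [(1.0, (float(i), 0.0, 0.0)) for i in range(8)]
beads = coarse_grain(chain)
print(beads)  # two beads, at x = 1.5 and x = 5.5
```

    In a real MARTINI-style parametrisation the bead positions feed bonded-term fitting against atomistic trajectories, while nonbonded bead types are chosen to reproduce thermodynamic targets such as partitioning free energies.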

  9. Comparison benchmark between tokamak simulation code and TokSys for Chinese Fusion Engineering Test Reactor vertical displacement control design

    International Nuclear Information System (INIS)

    Qiu Qing-Lai; Xiao Bing-Jia; Guo Yong; Liu Lei; Wang Yue-Hang

    2017-01-01

    A vertical displacement event (VDE) is a major challenge both for existing tokamak equipment and for machines being designed. As a Chinese next-step tokamak, the Chinese Fusion Engineering Test Reactor (CFETR) must study VDEs with full-fledged numerical codes during its conceptual design. The tokamak simulation code (TSC) is a free-boundary, time-dependent, axisymmetric tokamak simulation code developed at PPPL, which advances the MHD equations describing the evolution of the plasma in a rectangular domain. The electromagnetic interactions between the surrounding conductor circuits and the plasma are solved self-consistently. The TokSys code is a generic modeling and simulation environment developed at GA. Its RZIP model treats the plasma as a fixed spatial distribution of currents which couple with the surrounding conductors through circuit equations. Both codes have been individually used for VDE studies on many tokamak devices, such as JT-60U, EAST, NSTX, DIII-D, and ITER. Given the model differences, benchmark work is needed to establish whether they reproduce each other's results correctly. In this paper, the TSC and TokSys codes are used simultaneously to analyze the design of the CFETR vertical instability passive and active controls. It is shown that, with the same inputs, the results from the two codes agree with each other. (paper)

  10. Analytic energy gradient of excited electronic state within TDDFT/MMpol framework: Benchmark tests and parallel implementation.

    Science.gov (United States)

    Zeng, Qiao; Liang, WanZhen

    2015-10-07

    The time-dependent density functional theory (TDDFT) has become the most popular method to calculate the electronic excitation energies, describe the excited-state properties, and perform the excited-state geometric optimization of medium and large-size molecules due to the implementation of analytic excited-state energy gradient and Hessian in many electronic structure software packages. To describe the molecules in condensed phase, one usually adopts the computationally efficient hybrid Quantum Mechanics/Molecular Mechanics (QM/MM) models. Here, we extend our previous work on the energy gradient of TDDFT/MM excited state to account for the mutual polarization effects between QM and MM regions, which is believed to hold a crucial position in the potential energy surface of molecular systems when the photoexcitation-induced charge rearrangement in the QM region is drastic. The implementation of a simple polarizable TDDFT/MM (TDDFT/MMpol) model in Q-Chem/CHARMM interface with both the linear response and the state-specific features has been realized. Several benchmark tests and preliminary applications are exhibited to confirm our implementation and assess the effects of different treatment of environmental polarization on the excited-state properties, and the efficiency of parallel implementation is demonstrated as well.

  11. Correlation of In Vivo Versus In Vitro Benchmark Doses (BMDs) Derived From Micronucleus Test Data: A Proof of Concept Study.

    Science.gov (United States)

    Soeteman-Hernández, Lya G; Fellows, Mick D; Johnson, George E; Slob, Wout

    2015-12-01

    In this study, we explored the applicability of using in vitro micronucleus (MN) data from human lymphoblastoid TK6 cells to derive in vivo genotoxicity potency information. Nineteen chemicals covering a broad spectrum of genotoxic modes of action were tested in an in vitro MN test using TK6 cells using the same study protocol. Several of these chemicals were considered to need metabolic activation, and these were administered in the presence of S9. The Benchmark dose (BMD) approach was applied using the dose-response modeling program PROAST to estimate the genotoxic potency from the in vitro data. The resulting in vitro BMDs were compared with previously derived BMDs from in vivo MN and carcinogenicity studies. A proportional correlation was observed between the BMDs from the in vitro MN and the BMDs from the in vivo MN assays. Further, a clear correlation was found between the BMDs from in vitro MN and the associated BMDs for malignant tumors. Although these results are based on only 19 compounds, they show that genotoxicity potencies estimated from in vitro tests may result in useful information regarding in vivo genotoxic potency, as well as expected cancer potency. Extension of the number of compounds and further investigation of metabolic activation (S9) and of other toxicokinetic factors would be needed to validate our initial conclusions. However, this initial work suggests that this approach could be used for in vitro to in vivo extrapolations which would support the reduction of animals used in research (3Rs: replacement, reduction, and refinement). © The Author 2015. Published by Oxford University Press on behalf of the Society of Toxicology.
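
    The benchmark-dose idea in the abstract can be illustrated with a toy dose-response fit. This is a hedged sketch, not the PROAST implementation: it assumes an exponential model y = a*exp(b*dose), estimates the slope by linear regression of log(response) on dose, and solves for the dose producing a chosen benchmark response (here a 5% relative increase). The dose and response numbers are synthetic.

```python
# Toy benchmark-dose (BMD) estimation under an assumed exponential
# dose-response model y = a * exp(b * dose). Data are synthetic.
import math

doses = [0.0, 1.0, 2.0, 4.0, 8.0]
responses = [10.0, 10.8, 11.6, 13.5, 18.2]   # e.g. micronucleus frequencies

# Least-squares slope of log(response) vs dose (model is linear in log space).
n = len(doses)
x_bar = sum(doses) / n
y = [math.log(r) for r in responses]
y_bar = sum(y) / n
b = sum((d - x_bar) * (v - y_bar) for d, v in zip(doses, y)) / \
    sum((d - x_bar) ** 2 for d in doses)

bmr = 0.05                      # benchmark response: 5% relative increase
bmd = math.log(1.0 + bmr) / b   # solve exp(b * d) = 1 + BMR for d
print(f"slope b = {b:.4f} per unit dose, BMD05 = {bmd:.3f}")
```

    PROAST additionally fits a family of models, selects among them, and reports a confidence interval (BMDL-BMDU) rather than the point estimate shown here.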

  12. A novel methodology for energy performance benchmarking of buildings by means of Linear Mixed Effect Model: The case of space and DHW heating of out-patient Healthcare Centres

    International Nuclear Information System (INIS)

    Capozzoli, Alfonso; Piscitelli, Marco Savino; Neri, Francesco; Grassi, Daniele; Serale, Gianluca

    2016-01-01

    Highlights: • 100 Healthcare Centres were analyzed to assess energy consumption reference values. • A novel robust methodology for energy benchmarking process was proposed. • A Linear Mixed Effect estimation Model was used to treat heterogeneous datasets. • A nondeterministic approach was adopted to consider the uncertainty in the process. • The methodology was developed to be upgradable and generalizable to other datasets. - Abstract: The current EU energy efficiency directive 2012/27/EU defines the existing building stocks as one of the most promising potential sector for achieving energy saving. Robust methodologies aimed to quantify the potential reduction of energy consumption for large building stocks need to be developed. To this purpose, a benchmarking analysis is necessary in order to support public planners in determining how well a building is performing, in setting credible targets for improving performance or in detecting abnormal energy consumption. In the present work, a novel methodology is proposed to perform a benchmarking analysis particularly suitable for heterogeneous samples of buildings. The methodology is based on the estimation of a statistical model for energy consumption – the Linear Mixed Effects Model –, so as to account for both the fixed effects shared by all individuals within a dataset and the random effects related to particular groups/classes of individuals in the population. The groups of individuals within the population have been classified by resorting to a supervised learning technique. Under this backdrop, a Monte Carlo simulation is worked out to compute the frequency distribution of annual energy consumption and identify a reference value for each group/class of buildings. The benchmarking analysis was tested for a case study of 100 out-patient Healthcare Centres in Northern Italy, finally resulting in 12 different frequency distributions for space and Domestic Hot Water heating energy consumption, one for
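
    The benchmarking step can be sketched in simplified form. This is a stand-in for the paper's pipeline (which uses a Linear Mixed Effects Model plus Monte Carlo simulation): buildings are grouped into classes and each class receives a reference value from the empirical distribution of its members. The class labels and consumption figures are invented.

```python
# Simplified group-wise energy benchmarking: per-class reference values from
# the empirical consumption distribution, then flag buildings well above the
# reference. Data below are illustrative, not the 100-centre dataset.
import statistics

centres = [  # (building class, annual heating consumption in kWh/m2)
    ("small", 95.0), ("small", 110.0), ("small", 101.0),
    ("large", 160.0), ("large", 180.0), ("large", 149.0),
]

by_class = {}
for label, kwh in centres:
    by_class.setdefault(label, []).append(kwh)

# Median as the class reference value; flag buildings more than 10% above it.
reference = {label: statistics.median(vals) for label, vals in by_class.items()}
flagged = [(label, kwh) for label, kwh in centres if kwh > 1.1 * reference[label]]
print(reference, flagged)
```

    A mixed-effects model refines this by separating fixed effects shared across the stock (climate, floor area) from random effects specific to each class, instead of relying on the raw within-class distribution alone.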

  13. The benchmarks of carbon emissions and policy implications for China's cities: Case of Nanjing

    International Nuclear Information System (INIS)

    Bi Jun; Zhang Rongrong; Wang Haikun; Liu Miaomiao; Wu Yi

    2011-01-01

    The development of urbanization is accelerating in China, and cities face both great pressures and great opportunities to reduce carbon emissions. An emissions inventory is a basic requirement for analyzing emissions of greenhouse gases (GHGs) and their potential reduction, and for realizing low-carbon development of cities. This study describes a method to establish a GHG emissions inventory for Chinese cities covering six emission sources: industrial energy consumption, transportation, household energy consumption, commercial energy consumption, industrial processes and waste. Nanjing was selected as a representative case to analyze the characteristics of carbon emissions in Chinese cities. The results show that carbon emissions in Nanjing increased nearly 50% during the last decade. The three largest GHG contributors were industrial energy consumption, industrial processes and transportation, which contributed 37-44%, 35-40% and 6-10%, respectively, of the total GHG emissions. Per-GDP carbon emissions decreased by 55% from 2002 to 2009, and the per-capita and per-GDP carbon emissions were comparable to or even lower than world average levels. These results have important policy implications for Chinese cities seeking to control their carbon emissions. - Highlights: • A carbon emissions inventory using a bottom-up methodology is reported for a Chinese city for the first time. • Emission characteristics of Nanjing are compared with other international cities. • Low-carbon policies for Chinese cities are recommended based on the results of this research.
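
    A bottom-up inventory of the kind described sums activity data times emission factors over the six source categories. A minimal sketch with placeholder numbers (not Nanjing data):

```python
# Bottom-up city GHG inventory sketch: emissions = activity * emission factor,
# summed over the six source categories named in the abstract. All activity
# levels and emission factors below are illustrative placeholders.

activity = {   # source: (activity level, emission factor in kt CO2e per unit)
    "industrial energy":  (400.0, 0.09),
    "transportation":     (120.0, 0.07),
    "household energy":   (80.0,  0.06),
    "commercial energy":  (60.0,  0.06),
    "industrial process": (350.0, 0.10),
    "waste":              (30.0,  0.05),
}

emissions = {src: a * ef for src, (a, ef) in activity.items()}
total = sum(emissions.values())
shares = {src: e / total for src, e in emissions.items()}
print(f"total = {total:.1f} kt CO2e; "
      f"industrial energy share = {shares['industrial energy']:.1%}")
```

    Dividing the total by city GDP or population then yields the per-GDP and per-capita intensities that the study tracks over time.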

  14. Two benchmark cases for the trio two-phase flow module

    Energy Technology Data Exchange (ETDEWEB)

    Helton, D.; Hassan, Y. [Texas A and M University, Nuclear Engineering Dept., College Station, Texas (United States); Kumbaro, A. [CEA Saclay, 91 - Gif-sur-Yvette (France). Dept. de Mecanique et de Technologie

    2001-07-01

    This report presents a series of problems that were studied in order to assess the new implementations recently made to the two-phase flow module. Each problem is designed to give insight into a particular area of the code refinement. As such, each problem, and its corresponding results, will be discussed individually, with comparisons made to experimental or analytical results whenever possible. TrioU is a thermal hydraulics program created by CEA. It is currently evolving into a multi-dimensional, multi-fluid, multi-phase program. The purpose of TrioU is to provide a platform for testing of new numerical methods and physical models that are developed by the Nuclear Reactor Division of CEA. TrioU is written in an object-oriented programming language, and maintained by a version-controllable environment, for ease in parallelization and multiple-site development. (author)

  15. Two benchmark cases for the trio two-phase flow module

    International Nuclear Information System (INIS)

    Helton, D.; Hassan, Y.; Kumbaro, A.

    2001-01-01

    This report presents a series of problems that were studied in order to assess the new implementations recently made to the two-phase flow module. Each problem is designed to give insight into a particular area of the code refinement. As such, each problem, and its corresponding results, will be discussed individually, with comparisons made to experimental or analytical results whenever possible. TrioU is a thermal hydraulics program created by CEA. It is currently evolving into a multi-dimensional, multi-fluid, multi-phase program. The purpose of TrioU is to provide a platform for testing of new numerical methods and physical models that are developed by the Nuclear Reactor Division of CEA. TrioU is written in an object-oriented programming language, and maintained by a version-controllable environment, for ease in parallelization and multiple-site development. (author)

  16. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l

  17. [Do you mean benchmarking?].

    Science.gov (United States)

    Bonnet, F; Solignac, S; Marty, J

    2008-03-01

    The purpose of benchmarking is to establish improvement processes by comparing activities against quality standards. The proposed methodology is illustrated by benchmark business cases performed inside healthcare facilities, on items such as nosocomial infections or the organization of surgery facilities. Moreover, the authors have built a specific graphic tool, enhanced with balanced-score numbers and mappings, so that the comparison between different anesthesia-reanimation services willing to start an improvement program is easy and relevant. This ready-made application becomes even more accurate when detailed tariffs of activities are implemented.

  18. NUPEC BWR Full-size Fine-mesh Bundle Test (BFBT) Benchmark. Volume II: uncertainty and sensitivity analyses of void distribution and critical power - Specification

    International Nuclear Information System (INIS)

    Aydogan, F.; Hochreiter, L.; Ivanov, K.; Martin, M.; Utsuno, H.; Sartori, E.

    2010-01-01

    experimental cases from the BFBT database for both steady-state void distribution and steady-state critical power uncertainty analyses. In order to study the basic thermal-hydraulics in a single channel, where the concern regarding the cross-flow effect modelling could be removed, an elemental task is proposed, consisting of two sub-tasks that are placed in each phase of the benchmark scope as follows: - Sub-task 1: Void fraction in elemental channel benchmark; - Sub-task 2: Critical power in elemental channel benchmark. The first task can also be utilised as an uncertainty analysis exercise for fine computational fluid dynamics (CFD) models for which the full bundle sensitivity or uncertainty analysis is more difficult. The task is added to the second volume of the specification as an optional exercise. Chapter 2 of this document provides the definition of UA/SA terms. Chapter 3 provides the selection and characterisation of the input uncertain parameters for the BFBT benchmark and the description of the elemental task. Chapter 4 describes the suggested approach for UA/SA of the BFBT benchmark. Chapter 5 provides the selection of data sets for the uncertainty analysis and the elemental task from the BFBT database. Chapter 6 specifies the requested output for void distribution and critical power uncertainty analyses (Exercises I-4 and II-3) as well as for the elemental task. Chapter 7 provides conclusions. Appendix 1 discusses the UA/SA methods. Appendix 2 presents the Phenomena Identification Ranking Tables (PIRT) developed at PSU for void distribution and critical power predictions in order to assist participants in selecting the most sensitive/uncertain code model parameters
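
    The uncertainty-analysis exercises described above amount to propagating uncertain inputs through a response model. A toy Monte Carlo sketch, assuming an illustrative homogeneous-equilibrium void-fraction correlation and an invented uncertainty on flow quality (neither is from the BFBT specification):

```python
# Toy Monte Carlo uncertainty propagation: sample an uncertain boundary
# condition (flow quality) and push it through a stand-in void-fraction
# model. The correlation, densities, and uncertainty are illustrative only.
import random
import statistics

random.seed(42)  # reproducible sampling

def void_fraction(quality, rho_l=740.0, rho_g=36.0):
    """Homogeneous-equilibrium void fraction from flow quality."""
    return 1.0 / (1.0 + (1.0 - quality) / quality * rho_g / rho_l)

samples = []
for _ in range(5000):
    x = random.gauss(0.15, 0.01)       # nominal quality 0.15, 1-sigma 0.01
    samples.append(void_fraction(x))

mean = statistics.fmean(samples)
stdev = statistics.stdev(samples)
print(f"void fraction = {mean:.3f} +/- {stdev:.3f} (1 sigma)")
```

    A real BFBT uncertainty analysis samples many correlated inputs (pressure, flow, power, geometry) at once and ranks them via sensitivity measures, but the propagation loop has this same shape.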

  19. Simulation of VVER MCCI reactor test case with ASTEC V2/MEDICIS computer code

    International Nuclear Information System (INIS)

    Stefanova, A.; Grudev, P.; Gencheva, R.

    2011-01-01

    This paper presents an application of ASTEC v2, module MEDICIS, to the simulation of a VVER molten core concrete interaction (MCCI) test case without water injection. The main purpose of the performed calculation is the verification and improvement of the MEDICIS module of ASTEC v2 for better simulation of core-concrete interaction processes. The VVER-1000 reference nuclear power plant was chosen as the SARNET2 benchmark MCCI test case. The initial conditions for the MCCI test are taken from an SBO scenario calculated with ASTEC version 1.3R2 by INRNE. (authors)

  20. Testing Cases under Title VII.

    Science.gov (United States)

    Rothschild, Michael; Werden, Gregory J.

    This paper discusses Congressional and judicial attempts to deal with the problem of employment practices which lead to discriminatory outcomes but which may not be discriminatory in intent. The use of paper and pencil tests as standards for hiring and promotion is focused on as an example of this type of employment practice. An historical account…

  1. Usability Testing: A Case Study.

    Science.gov (United States)

    Chisman, Janet; Walbridge, Sharon; Diller, Karen

    1999-01-01

    Discusses the development and results of usability testing of Washington State University's Web-based OPAC (online public access catalog); examines how easily users could navigate the catalog and whether they understood what they were seeing; and identifies problems and what action if any was taken. (LRW)

  2. Tools for Test Case Generation

    NARCIS (Netherlands)

    Belinfante, Axel; Frantzen, Lars; Schallhart, Christian; Broy, Manfred; Jonsson, Bengt; Katoen, Joost P.; Leucker, Martin; Pretschner, Alexander

    2005-01-01

    The preceding parts of this book have mainly dealt with test theory, aimed at improving the practical techniques which are applied by testers to enhance the quality of soft- and hardware systems. Only if these academic results can be efficiently and successfully transferred back to practice, they

  3. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2009-03-01

    The purpose of this research is to provide a fundamental computational investigation into the possible integration of experimental activities with the Advanced Test Reactor Critical (ATR-C) facility with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of integral data for improving neutron cross sections. Further assessment of oscillation

  4. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2009-07-01

    The purpose of this document is to identify some suggested types of experiments that can be performed in the Advanced Test Reactor Critical (ATR-C) facility. A fundamental computational investigation is provided to demonstrate possible integration of experimental activities in the ATR-C with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of

  5. Shielding Benchmark Computational Analysis

    International Nuclear Information System (INIS)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-01-01

Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for developing more accurate cross-section libraries, improving radiation transport codes, and building accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling the more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper, benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC)

  6. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 3A. Kozloduy NPP units 5/6: Analysis/testing. Working material

    International Nuclear Information System (INIS)

    1995-01-01

The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated subsequent to the request from representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consultants' Meeting which followed resulted in a working document for the CRP. It was decided that a benchmark study would be the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested on a full scale using explosions and/or vibration generators. This volume of Working material contains reports on the analyses and testing of Kozloduy nuclear power plant, units 5 and 6

  7. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 3B. Kozloduy NPP units 5/6: Analysis/testing. Working material

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-07-01

The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated subsequent to the request from representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consultants' Meeting which followed resulted in a working document for the CRP. It was decided that a benchmark study would be the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested on a full scale using explosions and/or vibration generators. This volume of Working material contains reports on the analyses and testing of Kozloduy nuclear power plant, units 5 and 6.

  8. A benchmark study of 2D and 3D finite element calculations simulating dynamic pulse buckling tests of cylindrical shells under axial impact

    International Nuclear Information System (INIS)

    Hoffman, E.L.; Ammerman, D.J.

    1993-01-01

    A series of tests investigating dynamic pulse buckling of a cylindrical shell under axial impact is compared to several finite element simulations of the event. The purpose of the study is to compare the performance of the various analysis codes and element types with respect to a problem which is applicable to radioactive material transport packages, and ultimately to develop a benchmark problem to qualify finite element analysis codes for the transport package design industry

  9. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 5B. Experience data. Working material

    International Nuclear Information System (INIS)

    1996-01-01

The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated subsequent to the request from representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consultants' Meeting which followed resulted in a working document for the CRP. It was decided that a benchmark study would be the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested on a full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for WWER-1000 and Paks for WWER-440/213 nuclear power plants. This volume of Working material contains reports on the effects of the Armenia earthquakes on selected power, industry and commercial facilities, and on the seismic functional qualification of active mechanical and electrical components tested on a shaking table

  10. The Couplex test cases: models and lessons

    International Nuclear Information System (INIS)

    Bourgeat, A.; Kern, M.; Schumacher, S.; Talandier, J.

    2003-01-01

    The Couplex test cases are a set of numerical test models for nuclear waste deep geological disposal simulation. They are centered around the numerical issues arising in the near and far field transport simulation. They were used in an international contest, and are now becoming a reference in the field. We present the models used in these test cases, and show sample results from the award winning teams. (authors)

  11. KENO-IV code benchmark calculation, (6)

    International Nuclear Information System (INIS)

    Nomura, Yasushi; Naito, Yoshitaka; Yamakawa, Yasuhiro.

    1980-11-01

A series of benchmark tests has been undertaken at JAERI in order to examine the capability of JAERI's criticality safety evaluation system, consisting of the Monte Carlo calculation code KENO-IV and the newly developed multigroup constants library MGCL. The present report describes the results of a benchmark test using criticality experiments on plutonium fuel in various shapes. In all, 33 cases have been calculated for Pu(NO3)4 aqueous solution, Pu metal, or PuO2-polystyrene compacts in various shapes (sphere, cylinder, rectangular parallelepiped). The effective multiplication factors calculated for the 33 cases range widely between 0.955 and 1.045, owing to the wide range of system variables. (author)

  12. Chances and limitations of "benchmarking" in the reform of welfare state structures - the case of pension policy

    NARCIS (Netherlands)

    Schludi, M.

    2003-01-01

    The concept of benchmarking, originally developed as a tool to improve the performance of business enterprises by learning from others’ "best practice", is now increasingly applied in the areas of employment and social policy. Rising internal and external pressures on national welfare states,

  13. Human factors reliability Benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-06-01

The Joint Research Centre of the European Commission has organized a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organized around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures, with the aim of assessing the probability of test-induced failures, the probability that failures remain unrevealed, and the potential to initiate transients because of errors made during the test; (2) analysis of human actions during an operational transient, with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report contains the final summary reports produced by the participants in the exercise

  14. Compilation of MCNP data library based on JENDL-3T and test through analysis of benchmark experiment

    International Nuclear Information System (INIS)

    Sakurai, K.; Sasamoto, N.; Kosako, K.; Ishikawa, T.; Sato, O.; Oyama, Y.; Narita, H.; Maekawa, H.; Ueki, K.

    1989-01-01

Based on the evaluated nuclear data library JENDL-3T, a temporary version of JENDL-3, a pointwise neutron cross section library for the MCNP code has been compiled, containing 39 nuclides from H-1 to Am-241 that are important for shielding calculations. Compilation is performed with a code system consisting of the nuclear data processing code NJOY-83 and the library compilation code MACROS. The validity of the code system and the reliability of the library are confirmed by analysing benchmark experiments. (author)

  15. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 5C. Experience data. Working material

    International Nuclear Information System (INIS)

    1999-01-01

In August 1991, following the SMiRT-11 Conference in Tokyo, a Technical Committee Meeting was held on 'Seismic safety issues relating to existing NPPs'. The proceedings of this TCM were subsequently compiled in an IAEA Working Material. One of the main recommendations of this TCM called for the harmonization of criteria and methods used in Member States in the seismic reassessment and upgrading of existing NPPs. Twenty-four institutions from thirteen countries participated in the CRP named 'Benchmark study for the seismic analysis and testing of WWER type NPPs'. Two types of WWER reactors (WWER-1000 and WWER-440/213) were selected for benchmarking, with Kozloduy NPP Units 5/6 and Paks NPP as the respective prototypes. Consistent with the recommendations of the TCM and the working paper prepared by the subsequent Consultants' Meeting, the focal activity of the CRP was the benchmarking exercises. A similar methodology was followed both for Paks NPP and for Kozloduy NPP Unit 5. Firstly, the NPP (mainly the reactor building) was tested using blast loading generated by a series of explosions from buried TNT charges. Records from this test were obtained at several free-field locations (both downhole and surface), on the foundation mat, at various elevations of the structures, as well as on some tanks and the stack. The benchmark participants were then provided with structural drawings, soil data and the free-field records of the blast experiment. Their task was to make a blind prediction of the response at preselected locations. The analytical results from these participants were then compared with the results from the test. Although the benchmarking exercises constituted the focus of the CRP, many other interesting problems related to the seismic safety of WWER type NPPs were addressed by the participants. These involved generic studies, i.e. codes and standards used in original WWER designs and their comparison with current international practice; seismic analysis

  16. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  17. Direct data access protocols benchmarking on DPM

    CERN Document Server

    Furano, Fabrizio; Keeble, Oliver; Mancinelli, Valentina

    2015-01-01

The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in recent years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring infor...

  18. Measuring Test Case Similarity to Support Test Suite Understanding

    NARCIS (Netherlands)

    Greiler, M.S.; Van Deursen, A.; Zaidman, A.E.

    2012-01-01

    Preprint of paper published in: TOOLS 2012 - Proceedings of the 50th International Conference, Prague, Czech Republic, May 29-31, 2012; doi:10.1007/978-3-642-30561-0_8 In order to support test suite understanding, we investigate whether we can automatically derive relations between test cases. In
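The abstract is truncated here, but one simple way to derive relations between test cases (not necessarily the metric the authors use) is the Jaccard similarity of the sets of production methods each test invokes:

```python
def jaccard(a, b):
    """Jaccard similarity of two sets: |A & B| / |A | B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Hypothetical call traces of two test cases.
t1 = {"openConnection", "sendRequest", "assertStatus"}
t2 = {"openConnection", "sendRequest", "assertBody"}
print(jaccard(t1, t2))  # → 0.5
```

Clustering test cases by such a similarity score is one way to group related tests and support suite understanding.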

  19. Test case preparation using a prototype

    OpenAIRE

    Treharne, Helen; Draper, J.; Schneider, Steve A.

    1998-01-01

    This paper reports on the preparation of test cases using a prototype within the context of a formal development. It describes an approach to building a prototype using an example. It discusses how a prototype contributes to the testing activity as part of a lifecycle based on the use of formal methods. The results of applying the approach to an embedded avionics case study are also presented.

  20. MoMaS reactive transport benchmark using PFLOTRAN

    Science.gov (United States)

    Park, H.

    2017-12-01

The MoMaS benchmark was developed to enhance numerical simulation capability for reactive transport modeling in porous media. The benchmark was published in late September 2009; it is not taken from a real chemical system, but consists of realistic and numerically challenging tests. PFLOTRAN is a state-of-the-art, massively parallel subsurface flow and reactive transport code that is being used in multiple nuclear waste repository projects at Sandia National Laboratories, including the Waste Isolation Pilot Plant and Used Fuel Disposition. The MoMaS benchmark has three independent tests of easy, medium, and hard chemical complexity. This paper demonstrates how PFLOTRAN is applied to this benchmark exercise and shows results for the easy test case, which includes mixing of aqueous components and surface complexation. The surface complexation consists of monodentate and bidentate reactions; the bidentate case introduces difficulty in defining the selectivity coefficient when the reaction is referred to a bulk reference volume, since the coefficient becomes porosity dependent in heterogeneous porous media. The benchmark is solved by PFLOTRAN with minimal modification to address this issue, and unit conversions were made to suit PFLOTRAN.

  1. OECD/NEA BENCHMARK FOR UNCERTAINTY ANALYSIS IN MODELING (UAM) FOR LWRS – SUMMARY AND DISCUSSION OF NEUTRONICS CASES (PHASE I)

    Directory of Open Access Journals (Sweden)

    RYAN N. BRATTON

    2014-06-01

A Nuclear Energy Agency (NEA), Organization for Economic Co-operation and Development (OECD) benchmark for Uncertainty Analysis in Modeling (UAM) is defined in order to facilitate the development and validation of available uncertainty analysis and sensitivity analysis methods for best-estimate Light Water Reactor (LWR) design and safety calculations. The benchmark has been named the OECD/NEA UAM-LWR benchmark, and has been divided into three phases, each of which focuses on a different portion of the uncertainty propagation in LWR multi-physics and multi-scale analysis. Several different reactor cases are modeled at various phases of a reactor calculation. This paper discusses Phase I, known as the “Neutronics Phase”, which is devoted mostly to the propagation of nuclear data (cross-section) uncertainty throughout steady-state stand-alone neutronics core calculations. Three reactor systems (for which design, operation and measured data are available) are rigorously studied in this benchmark: Peach Bottom Unit 2 BWR, Three Mile Island Unit 1 PWR, and VVER-1000 Kozloduy-6/Kalinin-3. Additional measured data are analyzed, such as the KRITZ LEU criticality experiments and the SNEAK-7A and 7B experiments of the Karlsruhe Fast Critical Facility. Analyzed results include the top five neutron-nuclide reactions, which contribute the most to the prediction uncertainty in keff, as well as the uncertainty in key parameters of neutronics analysis such as microscopic and macroscopic cross-sections, six-group decay constants, assembly discontinuity factors, and axial and radial core power distributions. Conclusions are drawn regarding where further studies should be done to reduce uncertainties in key nuclide reactions (i.e., 238U radiative capture and inelastic scattering (n, n′), as well as the average number of neutrons released per fission event of 239Pu).

  2. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    Energy Technology Data Exchange (ETDEWEB)

Kahler, A.C.; MacFarlane, R.E.; Mosteller, R.D.; Kiedrowski, B.C.; Frankle, S.C.; Chadwick, M.B.; McKnight, R.D.; Lell, R.M.; Palmiotti, G.; Hiruta, H.; Herman, M.; Arcilla, R.; Mughabghab, S.F.; Sublet, J.C.; Trkov, A.; Trumbull, T.H.; Dunn, M.

    2011-12-01

The ENDF/B-VII.1 library is the latest revision to the United States Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., 'ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data,' Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections, such as unmoderated and uranium reflected 235U and 239Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations, continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also

  3. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Kahler, A. [Los Alamos National Laboratory (LANL); Macfarlane, R E [Los Alamos National Laboratory (LANL); Mosteller, R D [Los Alamos National Laboratory (LANL); Kiedrowski, B C [Los Alamos National Laboratory (LANL); Frankle, S C [Los Alamos National Laboratory (LANL); Chadwick, M. B. [Los Alamos National Laboratory (LANL); Mcknight, R D [Argonne National Laboratory (ANL); Lell, R M [Argonne National Laboratory (ANL); Palmiotti, G [Idaho National Laboratory (INL); Hiruta, h [Idaho National Laboratory (INL); Herman, Micheal W [Brookhaven National Laboratory (BNL); Arcilla, r [Brookhaven National Laboratory (BNL); Mughabghab, S F [Brookhaven National Laboratory (BNL); Sublet, J C [Culham Science Center, Abington, UK; Trkov, A. [Jozef Stefan Institute, Slovenia; Trumbull, T H [Knolls Atomic Power Laboratory; Dunn, Michael E [ORNL

    2011-01-01

The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [1]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections, such as unmoderated and uranium reflected 235U and 239Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations, continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also confirmed for selected actinide reaction rates such as 236U, 238,242Pu and 241,243Am capture in fast systems. Other deficiencies, such as the overprediction of Pu solution system critical

  4. Analysis of Benchmark 2 results

    International Nuclear Information System (INIS)

    Bacha, F.; Lefievre, B.; Maillard, J.; Silva, J.

    1994-01-01

The code GEANT315 has been compared to different codes in two benchmarks. We analyze its performance through our results, especially in the thick-target case. In spite of gaps in the theory of nucleus-nucleus interactions at intermediate energies, benchmarks allow possible improvements of the physical models used in our codes. Thereafter, a scheme for a radioactive waste burning system is studied. (authors). 4 refs., 7 figs., 1 tab

  5. A Discussion of Low Reynolds Number Flow for the Two-Dimensional Benchmark Test Case

    DEFF Research Database (Denmark)

    Weng, Miaocheng; Nielsen, Peter V.; Liu, Li

The use of CFD in ventilation research has arrived at a high level, but there are some conditions in the general CFD procedure which do not apply to all situations in ventilation research. An example of this is the turbulence models in Reynolds-averaged Navier-Stokes equations, i.e. (RANS...

  6. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Science.gov (United States)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during the development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able both to reproduce behaviour from established and widely-used codes and to produce results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
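The "simple, scalar diagnostic" idea can be illustrated with the Taylor-Green vortex mentioned above. The sketch below (resolution and initial condition are illustrative, not taken from the MUSIC study) evaluates the volume-averaged kinetic energy of the classic 2-D initial state, a single number whose decay in time can be compared directly across codes:

```python
import numpy as np

# Classic 2-D Taylor-Green initial velocity field on a periodic
# box of side 2*pi, unit density. The diagnostic is the
# volume-averaged kinetic energy <u^2 + v^2>/2, which equals 0.25
# analytically for this initial condition.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.cos(X) * np.sin(Y)
v = -np.sin(X) * np.cos(Y)
ke = 0.5 * np.mean(u**2 + v**2)
print(round(ke, 3))  # → 0.25
```

Tracking how this scalar decays over time, rather than comparing full flow fields, is what makes direct comparison between codes such as MUSIC, ATHENA and PENCIL tractable.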

  7. Does the European natural gas market pass the competitive benchmark of the theory of storage? Indirect tests for three major trading points

    International Nuclear Information System (INIS)

    Stronzik, Marcus; Rammerstorfer, Margarethe; Neumann, Anne

    2009-01-01

This paper presents the first comparative analysis of the relationship between natural gas storage utilization and price patterns at three major European trading points. Using two indirect tests applied in other commodity markets, we impose the no-arbitrage condition to model the efficiency of the natural gas market. The results reveal that while operators of European storage facilities realize seasonal arbitrage profits, substantial arbitrage potentials remain. We suggest that the indirect approach is well suited to provide market insights for periods with limited data. We find that overall market performance differs substantially from the competitive benchmark of the theory of storage. (author)
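The no-arbitrage condition being imposed can be made concrete with the seasonal storage trade: buy gas at the summer spot price, pay storage and financing costs, and sell at the winter price. Under the theory of storage, competition should drive this profit toward zero. A minimal sketch, with all numbers hypothetical:

```python
def storage_arbitrage_profit(spot_buy, price_sell, storage_cost, r, months):
    """Profit per unit of gas bought now, stored, and sold later.

    spot_buy     : purchase price today (e.g. summer spot)
    price_sell   : sale price at withdrawal (e.g. winter)
    storage_cost : total storage fee per unit over the period
    r            : annual financing rate, months : holding period
    A persistently positive value signals unexploited arbitrage.
    """
    financing = spot_buy * ((1.0 + r) ** (months / 12.0) - 1.0)
    return price_sell - spot_buy - storage_cost - financing

# Hypothetical: buy at 20, sell at 26, storage 2, 5% rate, 6 months.
print(round(storage_arbitrage_profit(20.0, 26.0, 2.0, 0.05, 6), 2))  # → 3.51
```

In an efficient market this residual would be competed away; the paper's finding of persistent seasonal profits is precisely a nonzero value of this kind.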

  8. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    The objective of this study was to identify usage of foodservice performance measures, important activities in foodservice benchmarking, and benchmarking attitudes, beliefs, and practices by foodservice directors...

  9. A comprehensive set of benchmark tests for a land surface model of simultaneous fluxes of water and carbon at both the global and seasonal scale

    Directory of Open Access Journals (Sweden)

    E. Blyth

    2011-04-01

Evaluating the models we use in prediction is important, as it allows us to identify uncertainties in prediction as well as guiding the priorities for model development. This paper describes a set of benchmark tests that are designed to quantify the performance of the land surface model used in the UK Hadley Centre General Circulation Model (JULES: Joint UK Land Environment Simulator). The tests are designed to assess the ability of the model to reproduce the observed fluxes of water and carbon at the global and regional spatial scale, and on a seasonal basis. Five datasets are used to test the model: water and carbon dioxide fluxes from ten FLUXNET sites covering the major global biomes, atmospheric carbon dioxide concentrations at four representative stations from the global network, river flow from seven catchments, the seasonal mean NDVI over the seven catchments, and the potential land cover of the globe (after the estimated anthropogenic changes have been removed). The model is run in various configurations and results are compared with the data.

A few examples are chosen to demonstrate that the combined use of observations of carbon and water fluxes is essential in order to understand the causes of model errors. The benchmarking approach is suitable for application to other global models.

  10. Using Benchmarking To Strengthen the Assessment of Persistence.

    Science.gov (United States)

    McLachlan, Michael S; Zou, Hongyan; Gouin, Todd

    2017-01-03

Chemical persistence is a key property for assessing chemical risk and chemical hazard. Current methods for evaluating persistence are based on laboratory tests. The relationship between the laboratory-based estimates and persistence in the environment is often unclear, in which case the current methods for evaluating persistence can be questioned. Chemical benchmarking opens new possibilities to measure persistence in the field. In this paper we explore how the benchmarking approach can be applied in both the laboratory and the field to deepen our understanding of chemical persistence in the environment and to create a firmer scientific basis for laboratory-to-field extrapolation of persistence test results.
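Benchmarking here means measuring a test chemical against a reference chemical in the same system, so that shared dissipation processes cancel. A minimal sketch (function and numbers are illustrative, not taken from the paper), assuming first-order degradation of both chemicals:

```python
import math

def relative_rate_constant(c_test_0, c_test_t, c_bench_0, c_bench_t):
    """Ratio k_test / k_bench inferred from paired concentration
    measurements of a test chemical and a benchmark chemical that
    degrade first-order in the same system. Shared loss processes
    (e.g. dilution) affect both chemicals equally and cancel out
    of the ratio."""
    return math.log(c_test_0 / c_test_t) / math.log(c_bench_0 / c_bench_t)

# Test chemical halves while the benchmark drops to a quarter:
# the test chemical degrades half as fast as the benchmark.
print(relative_rate_constant(100, 50, 100, 25))  # → 0.5
```

Given an independently known environmental half-life for the benchmark chemical, this ratio converts directly into a field-based persistence estimate for the test chemical.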

  11. Burn-up TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    Persic, A.; Ravnik, M.; Zagar, T.

    1998-01-01

Different reactor codes are used for calculations of reactor parameters. The accuracy of the programs is tested through comparison of the calculated values with experimental results. Well-defined and accurately measured benchmarks are required. The experimental results of reactivity measurements, fuel element reactivity worth distribution and burn-up measurements are presented in this paper. The experiments were performed with a partly burnt reactor core. The experimental conditions were well defined, so that the results can be used as a burn-up benchmark test case for TRIGA Mark II reactor calculations. (author)

  12. Benchmarks for Uncertainty Analysis in Modelling (UAM) for the Design, Operation and Safety Analysis of LWRs - Volume I: Specification and Support Data for Neutronics Cases (Phase I)

    International Nuclear Information System (INIS)

    Ivanov, K.; Avramova, M.; Kamerow, S.; Kodeli, I.; Sartori, E.; Ivanov, E.; Cabellos, O.

    2013-01-01

    The objective of the OECD LWR UAM activity is to establish an internationally accepted benchmark framework to compare, assess and further develop different uncertainty analysis methods associated with the design, operation and safety of LWRs. As a result, the LWR UAM benchmark will help to address current nuclear power generation industry and regulation needs and issues related to practical implementation of risk-informed regulation. The realistic evaluation of consequences must be made with best-estimate coupled codes, but to be meaningful, such results should be supplemented by an uncertainty analysis. The use of coupled codes allows us to avoid unnecessary penalties due to incoherent approximations in the traditional decoupled calculations, and to obtain more accurate evaluation of margins regarding licensing limit. This becomes important for licensing power upgrades, improved fuel assembly and control rod designs, higher burn-up and others issues related to operating LWRs as well as to the new Generation 3+ designs being licensed now (ESBWR, AP-1 000, EPR-1 600, etc.). Establishing an internationally accepted LWR UAM benchmark framework offers the possibility to accelerate the licensing process when using best estimate methods. The proposed technical approach is to establish a benchmark for uncertainty analysis in best-estimate modelling and coupled multi-physics and multi-scale LWR analysis, using as bases a series of well-defined problems with complete sets of input specifications and reference experimental data. The objective is to determine the uncertainty in LWR system calculations at all stages of coupled reactor physics/thermal hydraulics calculations. The full chain of uncertainty propagation from basic data, engineering uncertainties, across different scales (multi-scale), and physics phenomena (multi-physics) will be tested on a number of benchmark exercises for which experimental data are available and for which the power plant details have been

  13. Automation of Test Cases for Web Applications : Automation of CRM Test Cases

    OpenAIRE

    Seyoum, Alazar

    2012-01-01

    The main theme of this project was to design a test automation framework for automating web related test cases. Automating test cases designed for testing a web interface provide a means of improving a software development process by shortening the testing phase in the software development life cycle. In this project an existing AutoTester framework and iMacros test automation tools were used. CRM Test Agent was developed to integrate AutoTester to iMacros and to enable the AutoTester,...

  14. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 3E. Kozloduy NPP units 5/6: Analysis/testing. Working material

    International Nuclear Information System (INIS)

    1996-01-01

    The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated subsequent to the request from representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants held in Tokyo in 1991 called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consulltants' Meeting which followed resulted in producing a working document for CRP. It was decided that a benchmark study is the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise to be tested on a full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for WWER-1000 and Paks for WWER-440/213 nuclear power plants. This volume of Working material contains reports on data related to floor response spectra of Kozloduy NPP; calculational-experimental examination and ensuring of equipment and pipelines seismic resistance at starting and operating WWER-type NPPs; analysis of design floor response spectra and testing of the electrical systems; experimental investigations and seismic analysis Kozloduy NPP; testing of components on the shaking table facilities and contribution to full scale dynamic testing of Kozloduy NPP; seismic evaluation of the main steam line, piping systems, containment pre-stressing and steel ventilation chimney of Kozloduy NPP

  15. HPC Benchmark Suite NMx, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  16. Test case prioritization using Cuscuta search

    Directory of Open Access Journals (Sweden)

    Mukesh Mann

    2014-12-01

    Full Text Available Most companies are under heavy time and resource constraints when it comes to testing a software system. Test prioritization technique(s allows the most useful tests to be executed first, exposing faults earlier in the testing process. Thus makes software testing more efficient and cost effective by covering maximum faults in minimum time. But test case prioritization is not an easy and straightforward process and it requires huge efforts and time. Number of approaches is available with their proclaimed advantages and limitations, but accessibility of any one of them is a subject dependent. In this paper, artificial Cuscuta search algorithm (CSA inspired by real Cuscuta parasitism is used to solve time constraint prioritization problem. We have applied CSA for prioritizing test cases in an order of maximum fault coverage with minimum test suite execution and compare its effectiveness with different prioritization ordering. Taking into account the experimental results, we conclude that (i The average percentage of faults detection (APFD is 82.5% using our proposed CSA ordering which is equal to the APFD of optimal and ant colony based ordering whereas No ordering, Random ordering and Reverse ordering has 76.25%, 75%, 68.75% of APFD respectively.

  17. Comparison of investigator-delineated gross tumor volumes and quality assurance in pancreatic cancer: Analysis of the pretrial benchmark case for the SCALOP trial.

    Science.gov (United States)

    Fokas, Emmanouil; Clifford, Charlotte; Spezi, Emiliano; Joseph, George; Branagan, Jennifer; Hurt, Chris; Nixon, Lisette; Abrams, Ross; Staffurth, John; Mukherjee, Somnath

    2015-12-01

    To evaluate the variation in investigator-delineated volumes and assess plans from the radiotherapy trial quality assurance (RTTQA) program of SCALOP, a phase II trial in locally advanced pancreatic cancer. Participating investigators (n=25) outlined a pre-trial benchmark case as per RT protocol, and the accuracy of investigators' GTV (iGTV) and PTV (iPTV) was evaluated, against the trials team-defined gold standard GTV (gsGTV) and PTV (gsPTV), using both qualitative and geometric analyses. The median Jaccard Conformity Index (JCI) and Geographical Miss Index (GMI) were calculated. Participating RT centers also submitted a radiotherapy plan for this benchmark case, which was centrally reviewed against protocol-defined constraints. Twenty-five investigator-defined contours were evaluated. The median JCI and GMI of iGTVs were 0.57 (IQR: 0.51-0.65) and 0.26 (IQR: 0.15-0.40). For iPTVs, these were 0.75 (IQR: 0.71-0.79) and 0.14 (IQR: 0.11-0.22) respectively. Qualitative analysis showed largest variation at the tumor edges and failure to recognize a peri-pancreatic lymph node. There were no major protocol deviations in RT planning, but three minor PTV coverage deviations were identified. . SCALOP demonstrated considerable variation in iGTV delineation. RTTQA workshops and real-time central review of delineations are needed in future trials. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Benchmarking and Performance Measurement.

    Science.gov (United States)

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  19. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 4D. Paks NPP: Analysis and testing. Working material

    International Nuclear Information System (INIS)

    1996-01-01

    The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated subsequent to the request from representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants held in Tokyo in 1991 called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consulltants' Meeting which followed resulted in producing a working document for CRP. It was decided that a benchmark study is the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise to be tested on a full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for WWER-1000 and Paks for WWER-440/213 nuclear power plants. This volume of Working material contains reports on seismic margin assessment and earthquake experience based methods for WWER-440/213 type NPPs; structural analysis and site inspection for site requalification; structural response of Paks NPP reactor building; analysis and testing of model worm type tanks on shaking table; vibration test of a worm tank model; evaluation of potential hazard for operating WWER control rods under seismic excitation

  20. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 3I. Kozloduy NPP units 5/6: Analysis and testing. Working material

    International Nuclear Information System (INIS)

    1999-01-01

    In August 1991, following the SMiRT-11 Conference in Tokyo, a Technical Committee Meeting was held on the 'Seismic safety issues relating to existing NPPs'. The Proceedings of this TCM was subsequently compiled in an IAEA Working Material. One of the main recommendations of this TCM, called for the harmonization of criteria and methods used in Member States in seismic reassessment and upgrading of existing NPPs. Twenty four institutions from thirteen countries participated in the CRP named 'Benchmark study for the seismic analysis and testing of WWER type NPPs'. Two types of WWER reactors (WWER-1000 and WWER-440/213) selected for benchmarking. Kozloduy NPP Units 5/6 and Paks NPP represented these respectively as prototypes. Consistent with the recommendations of the TCM and the working paper prepared by the subsequent Consultants' Meeting, the focal activity of the CRP was the benchmarking exercises. A similar methodology was followed both for Paks NPP and Kozloduy NPP Unit 5. Firstly, the NPP (mainly the reactor building) was tested using a blast loading generated by a series of explosions from buried TNT charges. Records from this test were obtained at several free field locations (both downhole and surface), foundation mat, various elevations of structures as well as some tanks and the stack. Then the benchmark participants were provided with structural drawings, soil data and the free field record of the blast experiment. Their task was to make a blind prediction of the response at preselected locations. The analytical results from these participants were then compared with the results from the test. Although the benchmarking exercises constituted the focus of the CRP, there were many other interesting problems related to the seismic safety of WWER type NPPs which were addressed by the participants. These involved generic studies, i.e. codes and standards used in original WWER designs and their comparison with current international practice; seismic analysis

  1. Benchmarking Supplier Development: An Empirical Case Study of Validating a Framework to Improve Buyer-Supplier Relationship

    Directory of Open Access Journals (Sweden)

    Shahzad Khuram

    2016-03-01

    Full Text Available In today’s dynamic business environment, firms are required to utilize efficiently and effectively all the useful resources to gain competitive advantage. Supplier development has evolved as an important strategic instrument to improve buyer supplier relationships. For that reason, this study focuses on providing the strategic significance of supplier development approaches to improve business relationships. By using qualitative research method, an integrated framework of supplier development and buyer-supplier relationship development has been tested and validated in a Finnish case company to provide empirical evidence. It particularly investigates how supplier development approaches can develop buyer-supplier relationships. The study present a set of propositions that identify significant supplier development approaches critical for the development of buyer-supplier relationships and develop a theoretical framework that specifies how these different supplier development approaches support in order to strengthen the relationships. The results are produced from an in-depth case study by implementing the proposed research framework. The findings reveal that supplier development strategies i.e., supplier incentives and direct involvements strongly effect in developing buyer-supplier relationships. Further research may focus on considering in-depth investigation of trust and communication factors along with propositions developed in the study to find out general applicability in dynamic business environment. Proposed integrated framework along with propositions is a unique combination of useful solutions for tactical and strategic management’s decision making and also valid for academic researchers to develop supplier development theories.

  2. FENDL neutronics benchmark: Specifications for the calculational neutronics and shielding benchmark

    International Nuclear Information System (INIS)

    Sawan, M.E.

    1994-12-01

    During the IAEA Advisory Group Meeting on ''Improved Evaluations and Integral Data Testing for FENDL'' held in Garching near Munich, Germany in the period 12-16 September 1994, the Working Group II on ''Experimental and Calculational Benchmarks on Fusion Neutronics for ITER'' recommended that a calculational benchmark representative of the ITER design should be developed. This report describes the neutronics and shielding calculational benchmark available for scientists interested in performing analysis for this benchmark. (author)

  3. Consortial Benchmarking: a method of academic-practitioner collaborative research and its application in a B2B environment

    NARCIS (Netherlands)

    Schiele, Holger; Krummaker, Stefan

    2010-01-01

    Purpose of the paper and literature addressed: Development of a new method for academicpractitioner collaboration, addressing the literature on collaborative research Research method: Model elaboration and test with an in-depth case study Research findings: In consortial benchmarking, practitioners

  4. Benchmarking in the Netherlands

    International Nuclear Information System (INIS)

    1999-01-01

    In two articles an overview is given of the activities in the Dutch industry and energy sector with respect to benchmarking. In benchmarking operational processes of different competitive businesses are compared to improve your own performance. Benchmark covenants for energy efficiency between the Dutch government and industrial sectors contribute to a growth of the number of benchmark surveys in the energy intensive industry in the Netherlands. However, some doubt the effectiveness of the benchmark studies

  5. Gas Flow Validation with Panda Tests from the OECD SETH Benchmark Covering Steam/Air and Steam/Helium/Air Mixtures

    International Nuclear Information System (INIS)

    Royl, P.; Travis, J.R.; Breitung, W.; Kim, J.; Kim, S.B.

    2009-01-01

    The CFD code GASFLOW solves the time-dependent compressible Navier-Stokes Equations with multiple gas species. GASFLOW was developed for nonnuclear and nuclear applications. The major nuclear applications of GASFLOW are 3D analyses of steam/hydrogen distributions in complex PWR containment buildings to simulate scenarios of beyond design basis accidents. Validation of GASFLOW has been a continuously ongoing process together with the development of this code. This contribution reports the results from the open posttest GASFLOW calculations that have been performed for new experiments from the OECD SETH Benchmark. Discussed are the steam distribution tests 9 and 9 bis, 21 and 21 bis involving comparable sequences with and without steam condensation and the last SETH test 25 with steam/helium release and condensation. The latter one involves lighter gas mixture sources like they can result in real accidents. The helium is taken as simulant for hydrogen

  6. Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides

    Energy Technology Data Exchange (ETDEWEB)

    Nowell, Lisa H., E-mail: lhnowell@usgs.gov [U.S. Geological Survey, California Water Science Center, Placer Hall, 6000 J Street, Sacramento, CA 95819 (United States); Norman, Julia E., E-mail: jnorman@usgs.gov [U.S. Geological Survey, Oregon Water Science Center, 2130 SW 5" t" h Avenue, Portland, OR 97201 (United States); Ingersoll, Christopher G., E-mail: cingersoll@usgs.gov [U.S. Geological Survey, Columbia Environmental Research Center, 4200 New Haven Road, Columbia, MO 65021 (United States); Moran, Patrick W., E-mail: pwmoran@usgs.gov [U.S. Geological Survey, Washington Water Science Center, 934 Broadway, Suite 300, Tacoma, WA 98402 (United States)

    2016-04-15

    Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n = 3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. 
Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical/chemical characteristics

  7. Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides

    International Nuclear Information System (INIS)

    Nowell, Lisa H.; Norman, Julia E.; Ingersoll, Christopher G.; Moran, Patrick W.

    2016-01-01

    Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n = 3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. 
Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical/chemical characteristics

  8. Analyzing the BBOB results by means of benchmarking concepts.

    Science.gov (United States)

    Mersmann, O; Preuss, M; Trautmann, H; Bischl, B; Weihs, C

    2015-01-01

    We present methods to answer two basic questions that arise when benchmarking optimization algorithms. The first one is: which algorithm is the "best" one? and the second one is: which algorithm should I use for my real-world problem? Both are connected and neither is easy to answer. We present a theoretical framework for designing and analyzing the raw data of such benchmark experiments. This represents a first step in answering the aforementioned questions. The 2009 and 2010 BBOB benchmark results are analyzed by means of this framework and we derive insight regarding the answers to the two questions. Furthermore, we discuss how to properly aggregate rankings from algorithm evaluations on individual problems into a consensus, its theoretical background and which common pitfalls should be avoided. Finally, we address the grouping of test problems into sets with similar optimizer rankings and investigate whether these are reflected by already proposed test problem characteristics, finding that this is not always the case.

  9. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 3H. Kozloduy NPP units 5/6: Analysis and testing. Working material

    International Nuclear Information System (INIS)

    1999-01-01

    In August 1991, following the SMiRT-11 Conference in Tokyo, a Technical Committee Meeting was held on the 'Seismic safety issues relating to existing NPPs'. The Proceedings of this TCM was subsequently compiled in an IAEA Working Material. One of the main recommendations of this TCM, called for the harmonization of criteria and methods used in Member States in seismic reassessment and upgrading of existing NPPs. Twenty four institutions from thirteen countries participated in the CRP named 'Benchmark study for the seismic analysis and testing of WWER type NPPs'. Two types of WWER reactors (WWER-1000 and WWER-440/213) selected for benchmarking. Kozloduy NPP Units 5/6 and Paks NPP represented these respectively as prototypes. Consistent with the recommendations of the TCM and the working paper prepared by the subsequent Consultants' Meeting, the focal activity of the CRP was the benchmarking exercises. A similar methodology was followed both for Paks NPP and Kozloduy NPP Unit 5. Firstly, the NPP (mainly the reactor building) was tested using a blast loading generated by a series of explosions from buried TNT charges. Records from this test were obtained at several free field locations (both downhole and surface), foundation mat, various elevations of structures as well as some tanks and the stack. Then the benchmark participants were provided with structural drawings, soil data and the free field record of the blast experiment. Their task was to make a blind prediction of the response at preselected locations. The analytical results from these participants were then compared with the results from the test. Although the benchmarking exercises constituted the focus of the CRP, there were many other interesting problems related to the seismic safety of WWER type NPPs which were addressed by the participants. These involved generic studies, i.e. codes and standards used in original WWER designs and their comparison with current international practice; seismic analysis

  10. Towards benchmarking an in-stream water quality model

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available A method of model evaluation is presented which utilises a comparison with a benchmark model. The proposed benchmarking concept is one that can be applied to many hydrological models but, in this instance, is implemented in the context of an in-stream water quality model. The benchmark model is defined in such a way that it is easily implemented within the framework of the test model, i.e. the approach relies on two applications of the same model code rather than the application of two separate model codes. This is illustrated using two case studies from the UK, the Rivers Aire and Ouse, with the objective of simulating a water quality classification, general quality assessment (GQA, which is based on dissolved oxygen, biochemical oxygen demand and ammonium. Comparisons between the benchmark and test models are made based on GQA, as well as a step-wise assessment against the components required in its derivation. The benchmarking process yields a great deal of important information about the performance of the test model and raises issues about a priori definition of the assessment criteria.

  11. Experimental studies and computational benchmark on heavy liquid metal natural circulation in a full height-scale test loop for small modular reactors

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Yong-Hoon, E-mail: chaotics@snu.ac.kr [Department of Energy Systems Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826 (Korea, Republic of); Cho, Jaehyun [Korea Atomic Energy Research Institute, 111 Daedeok-daero, 989 Beon-gil, Yuseong-gu, Daejeon 34057 (Korea, Republic of); Lee, Jueun; Ju, Heejae; Sohn, Sungjune; Kim, Yeji; Noh, Hyunyub; Hwang, Il Soon [Department of Energy Systems Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826 (Korea, Republic of)

    2017-05-15

    Highlights: • Experimental studies on natural circulation for lead-bismuth eutectic were conducted. • Adiabatic wall boundaries conditions were established by compensating heat loss. • Computational benchmark with a system thermal-hydraulics code was performed. • Numerical simulation and experiment showed good agreement in mass flow rate. • An empirical relation was formulated for mass flow rate with experimental data. - Abstract: In order to test the enhanced safety of small lead-cooled fast reactors, lead-bismuth eutectic (LBE) natural circulation characteristics have been studied. We present results of experiments with LBE non-isothermal natural circulation in a full-height scale test loop, HELIOS (heavy eutectic liquid metal loop for integral test of operability and safety of PEACER), and the validation of a system thermal-hydraulics code. The experimental studies on LBE were conducted under steady state as a function of core power conditions from 9.8 kW to 33.6 kW. Local surface heaters on the main loop were activated and finely tuned by trial-and-error approach to make adiabatic wall boundary conditions. A thermal-hydraulic system code MARS-LBE was validated by using the well-defined benchmark data. It was found that the predictions were mostly in good agreement with the experimental data in terms of mass flow rate and temperature difference that were both within 7%, respectively. With experiment results, an empirical relation predicting mass flow rate at a non-isothermal, adiabatic condition in HELIOS was derived.

  12. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 3F. Kozloduy NPP units 5/6: Analysis and testing. Working material

    International Nuclear Information System (INIS)

    1999-01-01

    In August 1991, following the SMiRT-11 Conference in Tokyo, a Technical Committee Meeting was held on the 'Seismic safety issues relating to existing NPPs'. The Proceedings of this TCM was subsequently compiled in an IAEA Working Material. One of the main recommendations of this TCM, called for the harmonization of criteria and methods used in Member States in seismic reassessment and upgrading of existing NPPs. Twenty four institutions from thirteen countries participated in the CRP named 'Benchmark study for the seismic analysis and testing of WWER type NPPs'. Two types of WWER reactors (WWER-1000 and WWER-440/213) selected for benchmarking. Kozloduy NPP Units 5/6 and Paks NPP represented these respectively as prototypes. Consistent with the recommendations of the TCM and the working paper prepared by the subsequent Consultants' Meeting, the focal activity of the CRP was the benchmarking exercises. A similar methodology was followed both for Paks NPP and Kozloduy NPP Unit 5. Firstly, the NPP (mainly the reactor building) was tested using a blast loading generated by a series of explosions from buried TNT charges. Records from this test were obtained at several free field locations (both downhole and surface), foundation mat, various elevations of structures as well as some tanks and the stack. Then the benchmark participants were provided with structural drawings, soil data and the free field record of the blast experiment. Their task was to make a blind prediction of the response at preselected locations. The analytical results from these participants were then compared with the results from the test. Although the benchmarking exercises constituted the focus of the CRP, there were many other interesting problems related to the seismic safety of WWER type NPPs which were addressed by the participants. These involved generic studies, i.e. codes and standards used in original WWER designs and their comparison with current international practice; seismic analysis

  13. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 4A. Paks NPP: Analysis/testing. Working material

    International Nuclear Information System (INIS)

    1995-01-01

    The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated at the request of representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consultants' Meeting which followed produced a working document for the CRP. It was decided that a benchmark study was the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested at full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for the WWER-1000 and Paks for the WWER-440/213 nuclear power plants. This volume of Working material contains reports on data related to seismic analyses of structures of the Paks and Kozloduy reactor buildings and of WWER-440/213 primary coolant loops with different antiseismic devices

  14. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 4B. Paks NPP: Analysis/testing. Working material

    International Nuclear Information System (INIS)

    1995-01-01

    The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated at the request of representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consultants' Meeting which followed produced a working document for the CRP. It was decided that a benchmark study was the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested at full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for the WWER-1000 and Paks for the WWER-440/213 nuclear power plants. This volume of Working material contains reports on a dynamic study of the main building of the Paks NPP; a shake table investigation at Paks NPP; and the Final report of the Co-ordinated Research Programme

  15. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 4A. Paks NPP: Analysis/testing. Working material

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-07-01

    The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated at the request of representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consultants' Meeting which followed produced a working document for the CRP. It was decided that a benchmark study was the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested at full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for the WWER-1000 and Paks for the WWER-440/213 nuclear power plants. This volume of Working material contains reports on data related to seismic analyses of structures of the Paks and Kozloduy reactor buildings and of WWER-440/213 primary coolant loops with different antiseismic devices.

  16. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 4C. Paks NPP: Analysis and testing. Working material

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-07-01

    The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated at the request of representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consultants' Meeting which followed produced a working document for the CRP. It was decided that a benchmark study was the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested at full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for the WWER-1000 and Paks for the WWER-440/213 nuclear power plants. This volume of Working material involves a comparative analysis of the seismic analysis results of the reactor building for soft soil conditions; derivation of design response spectra for components and systems; and upper range design response spectra for soft soil site conditions at Paks NPP.

  17. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 4C. Paks NPP: Analysis and testing. Working material

    International Nuclear Information System (INIS)

    1996-01-01

    The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated at the request of representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consultants' Meeting which followed produced a working document for the CRP. It was decided that a benchmark study was the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested at full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for the WWER-1000 and Paks for the WWER-440/213 nuclear power plants. This volume of Working material involves a comparative analysis of the seismic analysis results of the reactor building for soft soil conditions; derivation of design response spectra for components and systems; and upper range design response spectra for soft soil site conditions at Paks NPP

  18. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    Computer simulation of nuclear power plant response can take the form of a full-scope control room simulator, an engineering simulator representing the general behavior of the plant under normal and abnormal conditions, or a model of the plant response to conditions that would eventually lead to core damage. In any of these, the underlying foundation for their use in analysing situations, training vendor/utility personnel, etc. is how well they represent what is known from industrial experience, large integral experiments and separate effects tests. Typically, simulation codes are benchmarked against some of these, with the necessary level of agreement depending on the ultimate use of the simulation tool. However, these analytical models are computer codes, and as a result their capabilities are continually enhanced, errors are corrected, and new situations outside the original design basis are imposed on the code. Consequently, there is a continual need to ensure that the benchmarks with important transients are preserved as the computer code evolves. Retention of this benchmarking capability is essential to develop trust in the computer code. Given the evolving world of computer codes, how is this retention of benchmarking capabilities accomplished? For the MAAP4 codes, it is accomplished through a 'dynamic benchmarking' feature embedded in the source code. In particular, a set of dynamic benchmarks is included in the source code, and these are exercised every time the archive codes are upgraded and distributed to the MAAP users. Three different types of dynamic benchmarks are used: plant transients, large integral experiments and separate effects tests. Each of these is performed in a different manner. The first is accomplished by developing a parameter file for the plant modeled and an input deck to describe the sequence, i.e. the entire MAAP4 code is exercised. The pertinent plant data is included in the source code and the computer
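The dynamic-benchmarking idea, re-running archived benchmark cases on every code upgrade and checking the results against stored references, can be sketched as a small regression harness. Everything here (the stand-in model function, case names, numbers and tolerances) is hypothetical, not MAAP4 internals:

```python
# A minimal regression harness in the spirit of 'dynamic benchmarking':
# archived benchmark cases are re-run on each code upgrade and each result
# is compared against its stored reference within a relative tolerance.
# The model function and all figures are invented for illustration.

BENCHMARKS = [
    # (case name, input, archived reference result, relative tolerance)
    ("plant_transient_A", 2.0, 4.0, 0.05),
    ("integral_exp_B",    3.0, 9.0, 0.05),
]

def model(x):
    """Stand-in for the simulation code under test."""
    return x * x

def run_dynamic_benchmarks():
    """Return the names of benchmark cases that drifted outside tolerance."""
    failures = []
    for name, x, reference, rtol in BENCHMARKS:
        result = model(x)
        if abs(result - reference) > rtol * abs(reference):
            failures.append(name)
    return failures
```

Embedding the reference data next to the code, as the abstract describes, means a code change that silently alters a benchmarked transient is caught at release time rather than by a user.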

  19. Criteria of benchmark selection for efficient flexible multibody system formalisms

    Directory of Open Access Journals (Sweden)

    Valášek M.

    2007-10-01

    The paper deals with the selection process of benchmarks for testing and comparing efficient flexible multibody formalisms. The existing benchmarks are briefly summarized. The purposes of benchmark selection are investigated. The result of this analysis is the formulation of criteria for benchmark selection for flexible multibody formalisms. Based on these criteria, an initial set of suitable benchmarks is described. In addition, the evaluation measures are revised and extended.

  20. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 4E. Paks NPP: Analysis and testing. Working material

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-07-01

    In August 1991, following the SMiRT-11 Conference in Tokyo, a Technical Committee Meeting was held on the 'Seismic safety issues relating to existing NPPs'. The proceedings of this TCM were subsequently compiled in an IAEA Working Material. One of the main recommendations of this TCM called for the harmonization of criteria and methods used in Member States in the seismic reassessment and upgrading of existing NPPs. Twenty-four institutions from thirteen countries participated in the CRP named 'Benchmark study for the seismic analysis and testing of WWER type NPPs'. Two types of WWER reactors (WWER-1000 and WWER-440/213) were selected for benchmarking, with Kozloduy NPP Units 5/6 and Paks NPP, respectively, as prototypes. Consistent with the recommendations of the TCM and the working paper prepared by the subsequent Consultants' Meeting, the focal activity of the CRP was the benchmarking exercises. A similar methodology was followed for both Paks NPP and Kozloduy NPP Unit 5. First, the NPP (mainly the reactor building) was tested using blast loading generated by a series of explosions from buried TNT charges. Records from this test were obtained at several free-field locations (both downhole and surface), on the foundation mat, at various elevations of the structures, and on some tanks and the stack. The benchmark participants were then provided with structural drawings, soil data and the free-field records of the blast experiment. Their task was to make a blind prediction of the response at preselected locations. The analytical results from the participants were then compared with the results from the test. Although the benchmarking exercises constituted the focus of the CRP, the participants also addressed many other problems related to the seismic safety of WWER type NPPs. These involved generic studies, i.e. codes and standards used in original WWER designs and their comparison with current international practice; seismic analysis

  1. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 4F. Paks NPP: Analysis and testing. Working material

    International Nuclear Information System (INIS)

    1999-01-01

    In August 1991, following the SMiRT-11 Conference in Tokyo, a Technical Committee Meeting was held on the 'Seismic safety issues relating to existing NPPs'. The proceedings of this TCM were subsequently compiled in an IAEA Working Material. One of the main recommendations of this TCM called for the harmonization of criteria and methods used in Member States in the seismic reassessment and upgrading of existing NPPs. Twenty-four institutions from thirteen countries participated in the CRP named 'Benchmark study for the seismic analysis and testing of WWER type NPPs'. Two types of WWER reactors (WWER-1000 and WWER-440/213) were selected for benchmarking, with Kozloduy NPP Units 5/6 and Paks NPP, respectively, as prototypes. Consistent with the recommendations of the TCM and the working paper prepared by the subsequent Consultants' Meeting, the focal activity of the CRP was the benchmarking exercises. A similar methodology was followed for both Paks NPP and Kozloduy NPP Unit 5. First, the NPP (mainly the reactor building) was tested using blast loading generated by a series of explosions from buried TNT charges. Records from this test were obtained at several free-field locations (both downhole and surface), on the foundation mat, at various elevations of the structures, and on some tanks and the stack. The benchmark participants were then provided with structural drawings, soil data and the free-field records of the blast experiment. Their task was to make a blind prediction of the response at preselected locations. The analytical results from the participants were then compared with the results from the test. Although the benchmarking exercises constituted the focus of the CRP, the participants also addressed many other problems related to the seismic safety of WWER type NPPs. These involved generic studies, i.e. codes and standards used in original WWER designs and their comparison with current international practice; seismic analysis

  2. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 3G. Kozloduy NPP units 5/6: Analysis and testing. Working material

    International Nuclear Information System (INIS)

    1999-01-01

    In August 1991, following the SMiRT-11 Conference in Tokyo, a Technical Committee Meeting was held on the 'Seismic safety issues relating to existing NPPs'. The proceedings of this TCM were subsequently compiled in an IAEA Working Material. One of the main recommendations of this TCM called for the harmonization of criteria and methods used in Member States in the seismic reassessment and upgrading of existing NPPs. Twenty-four institutions from thirteen countries participated in the CRP named 'Benchmark study for the seismic analysis and testing of WWER type NPPs'. Two types of WWER reactors (WWER-1000 and WWER-440/213) were selected for benchmarking, with Kozloduy NPP Units 5/6 and Paks NPP, respectively, as prototypes. Consistent with the recommendations of the TCM and the working paper prepared by the subsequent Consultants' Meeting, the focal activity of the CRP was the benchmarking exercises. A similar methodology was followed for both Paks NPP and Kozloduy NPP Unit 5. First, the NPP (mainly the reactor building) was tested using blast loading generated by a series of explosions from buried TNT charges. Records from this test were obtained at several free-field locations (both downhole and surface), on the foundation mat, at various elevations of the structures, and on some tanks and the stack. The benchmark participants were then provided with structural drawings, soil data and the free-field records of the blast experiment. Their task was to make a blind prediction of the response at preselected locations. The analytical results from the participants were then compared with the results from the test. Although the benchmarking exercises constituted the focus of the CRP, the participants also addressed many other problems related to the seismic safety of WWER type NPPs. These involved generic studies, i.e. codes and standards used in original WWER designs and their comparison with current international practice; seismic analysis

  3. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 4E. Paks NPP: Analysis and testing. Working material

    International Nuclear Information System (INIS)

    1995-01-01

    In August 1991, following the SMiRT-11 Conference in Tokyo, a Technical Committee Meeting was held on the 'Seismic safety issues relating to existing NPPs'. The proceedings of this TCM were subsequently compiled in an IAEA Working Material. One of the main recommendations of this TCM called for the harmonization of criteria and methods used in Member States in the seismic reassessment and upgrading of existing NPPs. Twenty-four institutions from thirteen countries participated in the CRP named 'Benchmark study for the seismic analysis and testing of WWER type NPPs'. Two types of WWER reactors (WWER-1000 and WWER-440/213) were selected for benchmarking, with Kozloduy NPP Units 5/6 and Paks NPP, respectively, as prototypes. Consistent with the recommendations of the TCM and the working paper prepared by the subsequent Consultants' Meeting, the focal activity of the CRP was the benchmarking exercises. A similar methodology was followed for both Paks NPP and Kozloduy NPP Unit 5. First, the NPP (mainly the reactor building) was tested using blast loading generated by a series of explosions from buried TNT charges. Records from this test were obtained at several free-field locations (both downhole and surface), on the foundation mat, at various elevations of the structures, and on some tanks and the stack. The benchmark participants were then provided with structural drawings, soil data and the free-field records of the blast experiment. Their task was to make a blind prediction of the response at preselected locations. The analytical results from the participants were then compared with the results from the test. Although the benchmarking exercises constituted the focus of the CRP, the participants also addressed many other problems related to the seismic safety of WWER type NPPs. These involved generic studies, i.e. codes and standards used in original WWER designs and their comparison with current international practice; seismic analysis

  4. Analysis of benchmark experiments for testing the IKE multigroup cross-section libraries based on ENDF/B-III and IV

    International Nuclear Information System (INIS)

    Keinert, J.; Mattes, M.

    1975-01-01

    Benchmark experiments offer the most direct method for validating nuclear cross-section sets and calculational methods. For 16 fast and thermal critical assemblies containing uranium and/or plutonium of different compositions, we compared our calculational results with measured integral quantities such as ksub(eff), central reaction rate ratios, and fast and thermal activation (dis)advantage factors. Because of the simple calculational modelling of these assemblies, the calculations proved to be a good test of the IKE multigroup cross-section libraries, which are essentially based on ENDF/B-IV. In general, our calculational results are in excellent agreement with the measured values. Only for some critical systems did the basic ENDF/B-IV data prove insufficient for calculating ksub(eff), probably due to the Pu neutron data and the U-238 fast capture cross-sections. (orig.)
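Integral comparisons of this kind, as in the JENDL benchmark tests above, are usually reported as calculated-to-experimental (C/E) ratios per assembly. A minimal sketch of that bookkeeping (assembly names and k-eff values are illustrative, not the paper's results):

```python
# Calculated-to-experimental (C/E) ratios for an integral quantity such as
# k_eff across a set of critical assemblies. All values are invented.

def c_over_e(calculated, experimental):
    """C/E ratio per assembly; 1.0 means perfect agreement."""
    return {a: calculated[a] / experimental[a] for a in experimental}

experimental_keff = {"GODIVA": 1.000, "JEZEBEL": 1.000, "TRX-1": 1.000}
calculated_keff   = {"GODIVA": 0.998, "JEZEBEL": 1.004, "TRX-1": 0.989}

ce = c_over_e(calculated_keff, experimental_keff)
# Flag assemblies whose C/E falls outside a +/-1% band on k_eff.
outliers = [a for a, r in ce.items() if abs(r - 1.0) > 0.01]
```

A systematic bias in the outliers (e.g. all Pu-fueled systems low) is what points back at specific cross-section deficiencies, as the abstract does for Pu data and U-238 capture.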

  5. Benchmarking multimedia performance

    Science.gov (United States)

    Zandi, Ahmad; Sudharsanan, Subramania I.

    1998-03-01

    With the introduction of faster processors and special instruction sets tailored to multimedia, a number of exciting applications are now feasible on the desktop. Among these is DVD playback, consisting, among other things, of MPEG-2 video and Dolby Digital or MPEG-2 audio. Other multimedia applications such as video conferencing and speech recognition are also becoming popular on computer systems. In view of this tremendous interest in multimedia, a group of major computer companies has formed the Multimedia Benchmarks Committee as part of the Standard Performance Evaluation Corp. to address the performance issues of multimedia applications. The approach is multi-tiered, with three tiers of fidelity from minimal to fully compliant. In each case the fidelity of the bitstream reconstruction as well as the quality of the video or audio output are measured, and the system is classified accordingly. In the next step the performance of the system is measured. In many multimedia applications, such as DVD playback, the application needs to run at a specific rate; in this case the measurement of the excess processing power makes all the difference. All of this makes a system-level, application-based multimedia benchmark very challenging. Several ideas and methodologies for each aspect of the problem will be presented and analyzed.
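The 'excess processing power' measurement for a rate-constrained workload can be sketched as real-time headroom: the fraction of each frame period the decoder leaves unused. The frame rate and per-frame decode times below are invented for illustration:

```python
# Real-time headroom for a rate-constrained multimedia workload:
# the fraction of each frame period left unused by the decoder.
# Frame rate and per-frame decode times are invented placeholders.

def headroom(frame_rate_hz, decode_times_s):
    """Worst-case spare fraction of the frame period (negative = missed deadline)."""
    period = 1.0 / frame_rate_hz
    worst = max(decode_times_s)
    return (period - worst) / period

decode_times = [0.020, 0.025, 0.030]   # seconds per frame
spare = headroom(25.0, decode_times)   # 25 fps -> 40 ms budget per frame
```

Two systems can both sustain playback, yet the one with larger headroom is the better multimedia platform; that is why a pass/fail rate test alone is not enough.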

  6. Atomic Energy Research benchmark activity

    International Nuclear Information System (INIS)

    Makai, M.

    1998-01-01

    The test problems utilized in the validation and verification of computer programs in Atomic Energy Research are collected into a single set. This is the first step towards issuing a volume in which tests for VVER are collected, along with reference solutions and a number of solutions. The benchmarks do not include the ZR-6 experiments because these have already been published, along with a number of comparisons, in the final reports of TIC. The present collection focuses on operational and mathematical benchmarks which cover almost the entire range of reactor calculations. (Author)

  7. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  8. Operational benchmarking of Japanese and Danish hopsitals

    DEFF Research Database (Denmark)

    Traberg, Andreas; Itoh, Kenji; Jacobsen, Peter

    2010-01-01

    This benchmarking model is designed as an integration of three organizational dimensions suited to the healthcare sector. The model incorporates posterior operational indicators and evaluates performance upon aggregation. The model is tested on seven cases from Japan and Denmark. Japanese hospitals' focus on productivity and on reducing errors provides operational benefits, achieved primarily through a high degree of overwork among staff. Danish hospitals, by contrast, pay the price in productivity, focusing on meeting the caring needs of patients and limiting overwork among employees.

  9. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    Benchmark two-good utility functions involving a good with zero income elasticity and a good with unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own-price elasticity.

  10. Simulation of sound waves using the Lattice Boltzmann Method for fluid flow: Benchmark cases for outdoor sound propagation

    NARCIS (Netherlands)

    Salomons, E.M.; Lohman, W.J.A.; Zhou, H.

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article, the application of the LBM to sound propagation is illustrated for various cases:

  11. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    Science.gov (United States)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  12. Local implementation of the Essence of Care benchmarks.

    Science.gov (United States)

    Jones, Sue

    The aim was to understand clinical practice benchmarking from the perspective of nurses working in a large acute NHS trust and to determine whether the nurses perceived that their commitment to Essence of Care led to improvements in care, the factors that influenced their role in the process, and the organisational factors that influenced benchmarking. An ethnographic case study approach was adopted. Six themes emerged from the data, two of them organisational: leadership and the values and/or culture of the organisation. The findings suggested that the leadership ability of the Essence of Care link nurses and the value placed on this work by the organisation were key to the success of benchmarking. A model for successful implementation of the Essence of Care is proposed based on the findings of this study, which lends itself to testing by other organisations.

  13. Benchmark Analysis for Condition Monitoring Test Techniques of Aged Low Voltage Cables in Nuclear Power Plants. Final Results of a Coordinated Research Project

    International Nuclear Information System (INIS)

    2017-10-01

    This publication provides information and guidelines on how to monitor the performance of the insulation and jacket materials of existing cables and how to establish a programme of cable degradation monitoring and ageing management for operating reactors and the next generation of nuclear facilities. This research was done through a coordinated research project (CRP) with participants from 17 Member States. This group of experts compiled the current knowledge in a report, together with areas of future research and development, covering ageing mechanisms and means to identify and manage the consequences of ageing. They established a benchmarking programme using cable samples aged under thermal and/or radiation conditions and tested before and after ageing by various methods and organizations. In particular, 12 types of cable insulation or jacket material were tested, each using 14 different condition monitoring techniques. Not all condition monitoring techniques yield usable and traceable results. Techniques such as elongation at break, indenter modulus, oxidation induction time and oxidation induction temperature were found to work reasonably well for degradation trending of all materials. However, other condition monitoring techniques, such as insulation resistance, were only partially successful on some cables, and still other methods, such as ultrasonic or tan δ measurements, were either unsuccessful or failed to provide reliable information to qualify the method for degradation trending or ageing assessment of cables. The electrical in situ tests did not show great promise for cable degradation trending or ageing assessment, although these methods are known to be very effective for finding and locating faults in cable insulation material. In particular, electrical methods such as insulation resistance and reflectometry techniques are known to be rather effective for locating insulation damage, hot spots or other faults in essentially all cable types.
The advantage of electrical methods is that they can be
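Degradation trending with a technique like elongation at break usually means fitting the measured values against ageing time and estimating when an end-of-life criterion is reached (50% absolute elongation is a commonly cited criterion). A least-squares sketch with invented data points:

```python
# Trend elongation-at-break measurements over ageing time with an
# ordinary least-squares line and estimate when a 50% end-of-life
# criterion would be reached. Data points are invented for illustration.

def linear_fit(ts, ys):
    """Return (slope, intercept) of the least-squares line through (ts, ys)."""
    n = len(ts)
    mt = sum(ts) / n
    my = sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
             / sum((t - mt) ** 2 for t in ts))
    return slope, my - slope * mt

ageing_days = [0, 30, 60, 90]
elongation_pct = [300, 250, 210, 160]       # elongation at break, %

slope, intercept = linear_fit(ageing_days, elongation_pct)
end_of_life_days = (50 - intercept) / slope  # when the trend crosses 50%
```

Real ageing data is often closer to exponential than linear, so in practice the fit would be chosen per material; the trending-and-extrapolation logic stays the same.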

  14. Strategic behaviour under regulatory benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Jamasb, T. [Cambridge Univ. (United Kingdom). Dept. of Applied Economics; Nillesen, P. [NUON NV (Netherlands); Pollitt, M. [Cambridge Univ. (United Kingdom). Judge Inst. of Management

    2004-09-01

    In order to improve the efficiency of electricity distribution networks, some regulators have adopted incentive regulation schemes that rely on performance benchmarking. Although regulatory benchmarking can influence the 'regulation game', the subject has received limited attention. This paper discusses how strategic behaviour can result in inefficient behaviour by firms. We then use the Data Envelopment Analysis (DEA) method with US utility data to examine the implications of illustrative cases of strategic behaviour reported by regulators. The results show that gaming can have significant effects on the measured performance and profitability of firms. (author)
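For the one-input, one-output special case, the DEA (CCR) efficiency score reduces to each firm's output/input ratio normalized by the best ratio in the sample; this is only a sketch of the idea with invented figures, since real regulatory DEA studies use multiple inputs and outputs and a linear-programming solver:

```python
# One-input, one-output CCR DEA: each firm's efficiency is its
# output/input ratio divided by the best ratio observed in the sample.
# Utility names and figures are invented, not the paper's data.

def dea_single_ratio(data):
    """data: {firm: (input, output)} -> {firm: efficiency in (0, 1]}."""
    ratios = {f: out / inp for f, (inp, out) in data.items()}
    best = max(ratios.values())
    return {f: r / best for f, r in ratios.items()}

utilities = {
    "UtilA": (100.0, 50.0),   # e.g. (opex, delivered energy)
    "UtilB": (120.0, 48.0),
    "UtilC": (200.0, 60.0),
}
efficiency = dea_single_ratio(utilities)
```

Because every score is relative to the observed frontier, a firm that strategically misreports its input can shift the frontier and distort everyone else's measured efficiency, which is the gaming channel the paper examines.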

  15. Human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-08-01

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional test and maintenance (T&M) procedures, with the aim of assessing the probability of test-induced failures, the probability of failures remaining unrevealed, and the potential to initiate transients because of errors committed during the test; (2) analysis of human actions during an operational transient, with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used; to obtain insight into the causes and magnitude of the variability observed in the results; to try to identify preferred human reliability assessment approaches; and to gain an understanding of the current state of the art in the field, identifying the limitations that are still inherent in the different approaches

  16. Establishing benchmarks and metrics for utilization management.

    Science.gov (United States)

    Melanson, Stacy E F

    2014-01-01

    The changing environment of healthcare reimbursement is rapidly leading to a renewed appreciation of the importance of utilization management in the clinical laboratory. The process of benchmarking laboratory operations is well established for comparing organizational performance with that of other hospitals (peers) and for trending data over time through internal benchmarks. However, there are relatively few resources available to assist organizations in benchmarking for laboratory utilization management. This article reviews the topic of laboratory benchmarking with a focus on the available literature and services to assist in managing physician requests for laboratory testing. © 2013.

  17. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic… controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based… on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision…

  18. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  19. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing the operational performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations, with a preference for scenarios which include experimental data or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty, to include gamma transport, neutron transport, or both, and to represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations was assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  20. Direct data access protocols benchmarking on DPM

    Science.gov (United States)

    Furano, Fabrizio; Devresse, Adrien; Keeble, Oliver; Mancinelli, Valentina

    2015-12-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in recent years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring information about any data access protocol to the same monitoring infrastructure that is used to monitor the Xrootd deployments. Our goal is to evaluate under which circumstances the HTTP-based protocols can be good enough for batch or interactive data access. In this contribution we show and discuss the results that our test systems have collected under circumstances that include ROOT analyses using TTreeCache and stress tests on the metadata performance.

  1. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    Full Text Available In the face of global competition and rising challenges that higher education institutions (HEIs) meet, it is imperative to increase innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution’s competitive position and learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking application in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled developing a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity. The presented examples were chosen in order to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, scientific literature and the experience of the author from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  2. International Criticality Safety Benchmark Evaluation Project (ICSBEP) - ICSBEP 2015 Handbook

    International Nuclear Information System (INIS)

    Bess, John D.

    2015-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy (DOE). The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Nuclear Energy Agency (NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculation techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirements and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross-section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span approximately 69000 pages and contain 567 evaluations with benchmark specifications for 4874 critical, near-critical or subcritical configurations, 31 criticality alarm placement/shielding configurations with multiple dose points for each, and 207 configurations that have been categorised as fundamental physics measurements that are relevant to criticality safety applications. New to the handbook are benchmark specifications for neutron activation foil and thermoluminescent dosimeter measurements performed at the SILENE critical assembly in Valduc, France as part of a joint venture in 2010 between the US DOE and the French Alternative Energies and Atomic Energy Commission (CEA). A photograph of this experiment is shown on the front cover. Experiments that are found unacceptable for use as criticality safety benchmark experiments are discussed in these

  3. Core Benchmarks Descriptions

    International Nuclear Information System (INIS)

    Pavlovichev, A.M.

    2001-01-01

    Current regulations governing the design of new fuel cycles for nuclear power installations require a calculational justification performed with certified computer codes. This guarantees that the calculational results obtained will be within the limits of the declared uncertainties indicated in a certificate issued by Gosatomnadzor of the Russian Federation (GAN) for the corresponding computer code. A formal justification of the declared uncertainties is the comparison of calculational results obtained with a commercial code against the results of experiments, or of calculational tests computed with a defined uncertainty by certified precision codes such as MCU. The current level of international cooperation is enlarging the bank of experimental and calculational benchmarks acceptable for certifying commercial codes used in the design of fuel loadings with MOX fuel. In particular, work is practically complete on forming the list of calculational benchmarks for certification of the TVS-M code as applied to MOX fuel assembly calculations. The results of these activities are presented.

  4. Thick target benchmark test for the code used in the design of high intensity proton accelerator project

    International Nuclear Information System (INIS)

    Meigo, Shin-ichiro; Harada, Masatoshi

    2003-01-01

    In the neutronics design of the JAERI and KEK Joint high intensity accelerator facilities, the transport codes NMTC/JAM, MCNPX and MARS are used. To confirm the predictive capability of these codes, it is important to compare their results with experiment. To validate the neutron source term, calculations are compared with experimental spectra of neutrons produced from thick targets, measured at LANL and KEK. For validation at low incident energy, the calculations are compared with an experiment carried out at LANL, in which targets of C, Al, Fe and 238 U were irradiated with 256-MeV protons. The comparison shows that both NMTC/JAM and MCNPX agree with the experiment within a factor of 2. MARS agrees well for the C and Al targets but rather underestimates the results for all targets at neutron energies above 30 MeV. For validation at high incident energy, the codes are compared with an experiment carried out at KEK, in which W and Pb targets were bombarded with 0.5- and 1.5-GeV protons. Although slight disagreements exist, NMTC/JAM, MCNPX and MARS agree with the experiment within a factor of 2. (author)
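    The factor-of-2 agreement criterion used in this abstract amounts to a simple C/E (calculated-over-experimental) screening; a minimal sketch follows, where the yield values are hypothetical illustrations, not the LANL or KEK data:

    ```python
    # Sketch of a C/E (calculated/experimental) comparison with a
    # "within a factor of 2" acceptance band. All values are hypothetical.

    def within_factor(calc, expt, factor=2.0):
        """True if the C/E ratio lies between 1/factor and factor."""
        ratio = calc / expt
        return 1.0 / factor <= ratio <= factor

    # hypothetical spectrum points: (calculated, measured) neutron yields
    pairs = [(1.2e-4, 1.0e-4), (4.5e-5, 6.0e-5), (9.0e-6, 2.5e-5)]
    ce = [c / e for c, e in pairs]
    flags = [within_factor(c, e) for c, e in pairs]
    print([round(r, 2) for r in ce], flags)  # [1.2, 0.75, 0.36] [True, True, False]
    ```

    The third point fails the test because its C/E ratio of 0.36 falls below 0.5, i.e. the calculation underestimates the measurement by more than a factor of 2.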

  5. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-11-01

    Full Text Available This paper reviews the role of human resource management (HRM which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  6. Repeated Results Analysis for Middleware Regression Benchmarking

    Czech Academy of Sciences Publication Activity Database

    Bulej, Lubomír; Kalibera, T.; Tůma, P.

    2005-01-01

    Roč. 60, - (2005), s. 345-358 ISSN 0166-5316 R&D Projects: GA ČR GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords : middleware benchmarking * regression benchmarking * regression testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.756, year: 2005
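    A crude sketch of what the keyword "regression benchmarking" refers to, i.e. comparing repeated timing samples of a new build against a baseline to flag performance regressions; this is not the paper's statistical procedure, and the timings are hypothetical:

    ```python
    import statistics as st

    # Illustrative regression check: flag a regression when the new mean
    # exceeds the baseline mean by more than three standard errors of the
    # difference. All timing values below are hypothetical.

    def is_regression(baseline, new, threshold=3.0):
        """Crude z-style test on the difference of means of two timing samples."""
        mb, mn = st.mean(baseline), st.mean(new)
        se = (st.variance(baseline) / len(baseline)
              + st.variance(new) / len(new)) ** 0.5
        return (mn - mb) / se > threshold

    baseline = [10.1, 10.3, 9.9, 10.0, 10.2]   # ms per request (hypothetical)
    slower   = [11.0, 11.2, 10.9, 11.1, 11.0]
    print(is_regression(baseline, slower))      # True
    ```

    Repeated runs matter here because single benchmark results are noisy; only the distribution of repeated measurements separates a real regression from run-to-run variance.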

  7. A brief overview of the distribution test grids with a distributed generation inclusion case study

    Directory of Open Access Journals (Sweden)

    Stanisavljević Aleksandar M.

    2018-01-01

    Full Text Available The paper presents an overview of the electric distribution test grids issued by different technical institutions. They are used for testing different scenarios in the operation of a grid for research, benchmarking, comparison and other purposes. Their types, main characteristics, features as well as application possibilities are shown. Recently, these grids have been modified to include distributed generation. An example of modifying and applying the IEEE 13-bus grid to test the effects of faults, in cases without and with a distributed generator connected to the grid, is presented. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. III 042004: Smart Electricity Distribution Grids Based on Distribution Management System and Distributed Generation

  8. Benchmark test of 14-MeV neutron-induced gamma-ray production data in JENDL-3.2 and FENDL/E-1.0 through analysis of the OKTAVIAN experiments

    International Nuclear Information System (INIS)

    Maekawa, F.; Oyama, F.

    1996-01-01

    Secondary gamma rays play an important role along with neutrons in influencing nuclear design parameters, such as nuclear heating, radiation dose, and material damage on the plasma-facing components, vacuum vessel, and superconducting magnets, of fusion devices. Because evaluated nuclear data libraries are used in the designs, one must examine the accuracy of secondary gamma-ray data in these libraries through benchmark tests of existing experiments. The validity of the data should be confirmed, or problems with the data should be pointed out through these benchmark tests to ensure the quality of the design. Here, gamma-ray production data of carbon, fluorine, aluminum, silicon, titanium, chromium, manganese, cobalt, copper, niobium, molybdenum, tungsten, and lead in JENDL-3.2 and FENDL/E-1.0 induced by 14-MeV neutrons are tested through benchmark analyses of leakage gamma-ray spectrum measurements conducted at the OKTAVIAN deuterium-tritium neutron source facility. The MCNP transport code is used along with the flagging method for detailed analyses of the spectra. As a result, several moderate problems are pointed out for secondary gamma-ray data of titanium, chromium, manganese, and lead in FENDL/E-1.0. Because no fatal errors are found, however, secondary gamma-ray data for the 13 elements in both libraries are reasonably well validated through these benchmark tests as far as 14-MeV neutron incidence is concerned

  9. Mathematical Basis and Test Cases for Colloid-Facilitated Radionuclide Transport Modeling in GDSA-PFLOTRAN

    Energy Technology Data Exchange (ETDEWEB)

    Reimus, Paul William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-31

    This report provides documentation of the mathematical basis for a colloid-facilitated radionuclide transport modeling capability that can be incorporated into GDSA-PFLOTRAN. It also provides numerous test cases against which the modeling capability can be benchmarked once the model is implemented numerically in GDSA-PFLOTRAN. The test cases were run using a 1-D numerical model developed by the author, and the inputs and outputs from the 1-D model are provided in an electronic spreadsheet supplement to this report so that all cases can be reproduced in GDSA-PFLOTRAN, and the outputs can be directly compared with the 1-D model. The cases include examples of all potential scenarios in which colloid-facilitated transport could result in the accelerated transport of a radionuclide relative to its transport in the absence of colloids. Although it cannot be claimed that all the model features that are described in the mathematical basis were rigorously exercised in the test cases, the goal was to test the features that matter the most for colloid-facilitated transport; i.e., slow desorption of radionuclides from colloids, slow filtration of colloids, and equilibrium radionuclide partitioning to colloids that is strongly favored over partitioning to immobile surfaces, resulting in a substantial fraction of radionuclide mass being associated with mobile colloids.
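    As a rough illustration of the processes these test cases exercise (slow desorption of radionuclides from colloids and slow colloid filtration), the following is a minimal explicit 1-D sketch; it is not the report's 1-D model or the GDSA-PFLOTRAN implementation, and all parameter values are hypothetical:

    ```python
    import numpy as np

    # Minimal 1-D sketch: advection of a dissolved radionuclide and a
    # colloid-bound fraction, with first-order desorption from colloids
    # (rate k_des) and first-order colloid filtration (rate k_filt).
    # Parameter values are hypothetical, chosen only for illustration.

    def step(c_aq, c_col, v, dx, dt, k_des, k_filt):
        """Advance both mobile concentrations one explicit upwind time step."""
        adv = lambda c: c - v * dt / dx * (c - np.roll(c, 1))
        c_aq_new, c_col_new = adv(c_aq), adv(c_col)
        # kinetic exchange: colloid-bound mass desorbs to the aqueous phase;
        # filtration removes colloids (with their load) from the mobile phase
        transfer = k_des * c_col_new * dt
        c_aq_new += transfer
        c_col_new -= transfer + k_filt * c_col_new * dt
        c_aq_new[0] = c_col_new[0] = 0.0   # upstream boundary: clean inflow
        return c_aq_new, c_col_new

    n, dx, v, dt = 200, 1.0, 1.0, 0.5      # CFL number v*dt/dx = 0.5
    c_aq, c_col = np.zeros(n), np.zeros(n)
    c_col[:5] = 1.0                        # pulse of colloid-bound activity
    for _ in range(200):
        c_aq, c_col = step(c_aq, c_col, v, dx, dt, k_des=1e-3, k_filt=1e-3)
    ```

    With slow desorption, the colloid-bound pulse travels at the water velocity while gradually releasing dissolved activity, which is the mechanism by which colloids can accelerate transport relative to a sorbing solute.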

  10. DEVELOPMENT AND ADAPTATION OF VORTEX REALIZABLE MEASUREMENT SYSTEM FOR BENCHMARK TEST WITH LARGE SCALE MODEL OF NUCLEAR REACTOR

    Directory of Open Access Journals (Sweden)

    S. M. Dmitriev

    2017-01-01

    Full Text Available Recent decades of development in applied calculation methods for nuclear reactor thermal and hydraulic processes have been marked by the rapid growth of High Performance Computing (HPC), which has contributed to the active introduction of Computational Fluid Dynamics (CFD). The use of such programs to justify technical and economic parameters, and especially the safety of nuclear reactors, requires comprehensive verification of the mathematical models and CFD programs. The aim of the work was the development and adaptation of a measuring system with the characteristics necessary for its application in a verification test (experimental) facility. Its main objective is to study the mixing of coolant flows with different physical properties (for example, the concentration of dissolved impurities) inside a large-scale reactor model. The basic method used for registration of the spatial concentration field in the mixing area is spatial conductometry. In the course of the work, a measurement complex was created, including spatial conductometric sensors, a system of secondary converters and software. Methods of calibration and normalization of measurement results were developed. During the first experimental series, averaged concentration fields and nonstationary realizations of the measured local conductivity were obtained, and spectral and statistical analyses of the realizations were carried out. The acquired data are compared with pretest CFD calculations performed in the ANSYS CFX program. A joint analysis of the obtained results made it possible to identify the main regularities of the process under study and to demonstrate the capability of the designed measuring system to deliver experimental data of the 'CFD quality' required for verification. The adaptation of the spatial sensors allows a more extensive programme of experimental tests to be conducted, on the basis of which a databank and the necessary generalizations will be created.

  11. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 4G. Paks NPP: Analysis and testing. Working material

    International Nuclear Information System (INIS)

    1999-01-01

    In August 1991, following the SMiRT-11 Conference in Tokyo, a Technical Committee Meeting (TCM) was held on 'Seismic safety issues relating to existing NPPs'. The proceedings of this TCM were subsequently compiled in an IAEA Working Material. One of the main recommendations of this TCM called for the harmonization of the criteria and methods used in Member States in the seismic reassessment and upgrading of existing NPPs. Twenty-four institutions from thirteen countries participated in the CRP named 'Benchmark study for the seismic analysis and testing of WWER type NPPs'. Two types of WWER reactors (WWER-1000 and WWER-440/213) were selected for benchmarking, represented by Kozloduy NPP Units 5/6 and Paks NPP, respectively, as prototypes. Consistent with the recommendations of the TCM and the working paper prepared by the subsequent Consultants' Meeting, the focal activity of the CRP was the benchmarking exercises. A similar methodology was followed for both Paks NPP and Kozloduy NPP Unit 5. First, the NPP (mainly the reactor building) was tested using blast loading generated by a series of explosions from buried TNT charges. Records from this test were obtained at several free-field locations (both downhole and surface), on the foundation mat, at various elevations of the structures, and on some tanks and the stack. The benchmark participants were then provided with structural drawings, soil data and the free-field record of the blast experiment. Their task was to make a blind prediction of the response at preselected locations. The analytical results from the participants were then compared with the results from the test. Although the benchmarking exercises constituted the focus of the CRP, the participants also addressed many other interesting problems related to the seismic safety of WWER type NPPs. These involved generic studies, i.e. codes and standards used in original WWER designs and their comparison with current international practice; seismic analysis

  12. Simulation of Sound Waves Using the Lattice Boltzmann Method for Fluid Flow: Benchmark Cases for Outdoor Sound Propagation.

    Science.gov (United States)

    Salomons, Erik M; Lohman, Walter J A; Zhou, Han

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing.
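    The excess-level idea in this abstract, subtracting the free-field level so that dissipation common to both signals largely cancels, can be sketched as follows; the pressure amplitudes are hypothetical, not LBM output:

    ```python
    import numpy as np

    # Sketch of the "excess sound level": the level at a receiver minus the
    # free-field level. Attenuation applied equally to both signals cancels
    # in the difference. Amplitudes below are hypothetical.

    def sound_level_db(p, p_ref=2e-5):
        """Sound pressure level in dB re 20 micropascal."""
        return 20.0 * np.log10(np.abs(p) / p_ref)

    def excess_level_db(p_receiver, p_free_field):
        """Excess sound level: receiver level minus free-field level (dB)."""
        return sound_level_db(p_receiver) - sound_level_db(p_free_field)

    # Example: a rigid ground doubling the pressure gives about +6 dB excess
    # level, independent of how much both signals were attenuated en route.
    p_free = 0.01                                          # Pa (hypothetical)
    print(round(excess_level_db(2 * p_free, p_free), 2))   # 6.02
    ```

    This cancellation is why the abstract proposes the excess level as the quantity to extract from an LBM run whose numerical dissipation exceeds the real atmospheric absorption.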

  13. Test case for a near-surface repository

    International Nuclear Information System (INIS)

    Elert, M.; Jones, C.; Nilsson, L.B.; Skagius, K.; Wiborgh, M.

    1998-01-01

    A test case is presented for assessment of a near-surface disposal facility for radioactive waste. The case includes waste characterization and repository design, requirements and constraints in an assessment context, scenario development, model description and test calculations

  14. Benchmarking LWR codes capability to model radionuclide deposition within SFR containments: An analysis of the Na ABCOVE tests

    Energy Technology Data Exchange (ETDEWEB)

    Herranz, Luis E., E-mail: luisen.herranz@ciemat.es [CIEMAT, Unit of Nuclear Safety Research, Av. Complutense, 40, 28040 Madrid (Spain); Garcia, Monica, E-mail: monica.gmartin@ciemat.es [CIEMAT, Unit of Nuclear Safety Research, Av. Complutense, 40, 28040 Madrid (Spain); Morandi, Sonia, E-mail: sonia.morandi@rse-web.it [Nuclear and Industrial Plant Safety Team, Power Generation System Department, RSE, via Rubattino 54, 20134 Milano (Italy)

    2013-12-15

    Highlights: • Assessment of LWR codes capability to model aerosol deposition within SFR containments. • Original hypotheses proposed to partially accommodate drawbacks from Na oxidation reactions. • A defined methodology to derive a more accurate characterization of Na-based particles. • Key missing models in LWR codes for SFR applications are identified. - Abstract: Postulated BDBAs in SFRs might result in contaminated-coolant discharge at high temperature into the containment. A full scope safety analysis of this reactor type requires computation tools properly validated in all the related fields. Radionuclide deposition, particularly within the containment, is one of those fields. This sets two major challenges: to have reliable codes available and to build up a sound data base. Development of SFR source term codes was abandoned in the 80's and few data are available at present. The ABCOVE experimental programme conducted in the 80's is still a reference in the field. The present paper is aimed at assessing the current capability of LWR codes to model aerosol deposition within a SFR containment under BDBA conditions. Through a systematic application of the ASTEC, ECART and MELCOR codes to relevant ABCOVE tests, insights have been gained into drawbacks and capabilities of these computation tools. Hypotheses and approximations have

  15. Benchmarking LWR codes capability to model radionuclide deposition within SFR containments: An analysis of the Na ABCOVE tests

    International Nuclear Information System (INIS)

    Herranz, Luis E.; Garcia, Monica; Morandi, Sonia

    2013-01-01

    Highlights: • Assessment of LWR codes capability to model aerosol deposition within SFR containments. • Original hypotheses proposed to partially accommodate drawbacks from Na oxidation reactions. • A defined methodology to derive a more accurate characterization of Na-based particles. • Key missing models in LWR codes for SFR applications are identified. - Abstract: Postulated BDBAs in SFRs might result in contaminated-coolant discharge at high temperature into the containment. A full scope safety analysis of this reactor type requires computation tools properly validated in all the related fields. Radionuclide deposition, particularly within the containment, is one of those fields. This sets two major challenges: to have reliable codes available and to build up a sound data base. Development of SFR source term codes was abandoned in the 80's and few data are available at present. The ABCOVE experimental programme conducted in the 80's is still a reference in the field. The present paper is aimed at assessing the current capability of LWR codes to model aerosol deposition within a SFR containment under BDBA conditions. Through a systematic application of the ASTEC, ECART and MELCOR codes to relevant ABCOVE tests, insights have been gained into drawbacks and capabilities of these computation tools. Hypotheses and approximations have been adopted so that

  16. Benchmark Analyses on the Control Rod Withdrawal Tests Performed During the PHÉNIX End-of-Life Experiments. Report of a Coordinated Research Project 2008–2011

    International Nuclear Information System (INIS)

    2014-06-01

    The IAEA supports Member State activities in advanced fast reactor technology development by providing a major fulcrum for information exchange and collaborative research programmes. The IAEA’s activities in this field are mainly carried out within the framework of the Technical Working Group on Fast Reactors (TWG-FR), which assists in the implementation of corresponding IAEA activities and ensures that all technical activities are in line with the expressed needs of Member States. In the broad range of activities, the IAEA proposes and establishes coordinated research projects (CRPs) aimed at improving Member States’ capabilities in fast reactor design and analysis. An important opportunity to conduct collaborative research activities was provided by the experimental campaign run by the French Alternative Energies and Atomic Energy Commission (CEA, Commissariat à l’énergie atomique et aux énergies alternatives) at the PHÉNIX, a prototype sodium cooled fast reactor. Before the definitive shutdown in 2009, end-of-life tests were conducted to gather additional experience on the operation of sodium cooled reactors. Thanks to the CEA opening the experiments to international cooperation, the IAEA decided in 2007 to launch the CRP entitled Control Rod Withdrawal and Sodium Natural Circulation Tests Performed during the PHÉNIX End-of-Life Experiments. The CRP, together with institutes from seven States, contributed to improving capabilities in sodium cooled fast reactor simulation through code verification and validation, with particular emphasis on temperature and power distribution calculations and the analysis of sodium natural circulation phenomena. The objective of this publication is to document the results and main achievements of the benchmark analyses on the control rod withdrawal test performed within the framework of the PHÉNIX end-of-life experimental campaign

  17. Benchmark Analyses on the Natural Circulation Test Performed During the PHENIX End-of-Life Experiments. Final Report of a Co-ordinated Research Project 2008-2011

    International Nuclear Information System (INIS)

    2013-07-01

    The International Atomic Energy Agency (IAEA) supports Member State activities in the area of advanced fast reactor technology development by providing a forum for information exchange and collaborative research programmes. The Agency's activities in this field are mainly carried out within the framework of the Technical Working Group on Fast Reactors (TWG-FR), which assists in the implementation of corresponding IAEA activities and ensures that all technical activities are in line with the expressed needs of Member States. Among its broad range of activities, the IAEA proposes and establishes coordinated research projects (CRPs) aimed at the improvement of Member State capabilities in the area of fast reactor design and analysis. An important opportunity to undertake collaborative research was provided by the experimental campaign of the French Alternative Energies and Atomic Energy Commission (CEA) in the prototype sodium cooled fast reactor PHENIX before it was shut down in 2009. The overall purpose of the end-of-life tests was to gather additional experience on the operation of sodium cooled reactors. As the CEA opened the experiments to international cooperation, in 2007 the IAEA launched a CRP on 'Control Rod Withdrawal and Sodium Natural Circulation Tests Performed during the PHENIX End-of-Life Experiments'. The CRP, with the participation of institutes from eight countries, contributed to improving capabilities in sodium cooled reactor simulation through code verification and validation, with particular emphasis on temperature and power distribution calculations and the analysis of sodium natural circulation phenomena. The objective of this report is to document the results and main achievements of the benchmark analyses on the natural circulation test performed in the framework of the PHENIX end-of-life experimental campaign

  18. Benchmark Analyses on the Control Rod Withdrawal Tests Performed During the PHÉNIX End-of-Life Experiments. Report of a Coordinated Research Project 2008–2011

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2014-06-15

    The IAEA supports Member State activities in advanced fast reactor technology development by providing a major forum for information exchange and collaborative research programmes. The IAEA’s activities in this field are mainly carried out within the framework of the Technical Working Group on Fast Reactors (TWG-FR), which assists in the implementation of corresponding IAEA activities and ensures that all technical activities are in line with the expressed needs of Member States. Within its broad range of activities, the IAEA proposes and establishes coordinated research projects (CRPs) aimed at improving Member States’ capabilities in fast reactor design and analysis. An important opportunity to conduct collaborative research activities was provided by the experimental campaign run by the French Alternative Energies and Atomic Energy Commission (CEA, Commissariat à l’énergie atomique et aux énergies alternatives) at PHÉNIX, a prototype sodium cooled fast reactor. Before its definitive shutdown in 2009, end-of-life tests were conducted to gather additional experience on the operation of sodium cooled reactors. As the CEA opened the experiments to international cooperation, the IAEA decided in 2007 to launch the CRP entitled Control Rod Withdrawal and Sodium Natural Circulation Tests Performed during the PHÉNIX End-of-Life Experiments. The CRP, with the participation of institutes from seven Member States, contributed to improving capabilities in sodium cooled fast reactor simulation through code verification and validation, with particular emphasis on temperature and power distribution calculations and the analysis of sodium natural circulation phenomena. The objective of this publication is to document the results and main achievements of the benchmark analyses on the control rod withdrawal test performed within the framework of the PHÉNIX end-of-life experimental campaign.

  19. Zn Coordination Chemistry:  Development of Benchmark Suites for Geometries, Dipole Moments, and Bond Dissociation Energies and Their Use To Test and Validate Density Functionals and Molecular Orbital Theory.

    Science.gov (United States)

    Amin, Elizabeth A; Truhlar, Donald G

    2008-01-01

    We present nonrelativistic and relativistic benchmark databases (obtained by coupled cluster calculations) of 10 Zn-ligand bond distances, 8 dipole moments, and 12 bond dissociation energies in Zn coordination compounds with O, S, NH3, H2O, OH, SCH3, and H ligands. These are used to test the predictions of 39 density functionals, Hartree-Fock theory, and seven more approximate molecular orbital theories. In the nonrelativistic case, the M05-2X, B97-2, and mPW1PW functionals emerge as the most accurate ones for this test data, with unitless balanced mean unsigned errors (BMUEs) of 0.33, 0.38, and 0.43, respectively. The best local functionals (i.e., functionals with no Hartree-Fock exchange) are M06-L and τ-HCTH with BMUEs of 0.54 and 0.60, respectively. The popular B3LYP functional has a BMUE of 0.51, only slightly better than the value of 0.54 for the best local functional, which is less expensive. Hartree-Fock theory itself has a BMUE of 1.22. The M05-2X functional has a mean unsigned error of 0.008 Å for bond lengths, 0.19 D for dipole moments, and 4.30 kcal/mol for bond energies. The X3LYP functional has a smaller mean unsigned error (0.007 Å) for bond lengths but has mean unsigned errors of 0.43 D for dipole moments and 5.6 kcal/mol for bond energies. The M06-2X functional has a smaller mean unsigned error (3.3 kcal/mol) for bond energies but has mean unsigned errors of 0.017 Å for bond lengths and 0.37 D for dipole moments. The best of the semiempirical molecular orbital theories are PM3 and PM6, with BMUEs of 1.96 and 2.02, respectively. The ten most accurate functionals from the nonrelativistic benchmark analysis are then tested in relativistic calculations against new benchmarks obtained with coupled-cluster calculations and a relativistic effective core potential, resulting in M05-2X (BMUE = 0.895), PW6B95 (BMUE = 0.90), and B97-2 (BMUE = 0.93) as the top three functionals. We find significant relativistic effects (∼0.01 Å in bond lengths, ∼0
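The unsigned-error statistics in the abstract above (per-property mean unsigned errors combined into a unitless "balanced" MUE) can be sketched as follows. This is an illustration only: the normalizing reference scales and the data values below are invented assumptions, not the paper's actual BMUE definition or benchmark values.

```python
# Sketch: mean unsigned error (MUE) per property, and a unitless
# "balanced" MUE formed by dividing each property's MUE by an assumed
# reference scale before averaging across properties.

def mue(predicted, reference):
    """Mean unsigned (absolute) error over paired values."""
    return sum(abs(p - r) for p, r in zip(predicted, reference)) / len(reference)

def balanced_mue(mue_by_property, scale_by_property):
    """Average of per-property MUEs, each scaled to be unitless."""
    return sum(mue_by_property[k] / scale_by_property[k]
               for k in mue_by_property) / len(mue_by_property)

# Hypothetical data: bond lengths (Å), dipoles (D), bond energies (kcal/mol)
pred = {"bond": [1.71, 2.05], "dipole": [2.1, 4.4], "energy": [85.0, 60.2]}
ref  = {"bond": [1.70, 2.06], "dipole": [2.3, 4.1], "energy": [89.3, 58.0]}

mues = {k: mue(pred[k], ref[k]) for k in pred}
scales = {"bond": 0.01, "dipole": 0.3, "energy": 5.0}  # assumed reference errors
print(balanced_mue(mues, scales))
```

A smaller balanced MUE means a more accurate method once the different units of the three property classes have been factored out.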

  20. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.

    1991-01-01

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems

  1. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1.1 with MCNP6

    International Nuclear Information System (INIS)

    Marck, Steven C. van der

    2012-01-01

    Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous energy Monte Carlo neutronics code MCNP, i.e. MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding, many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), the Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA, Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory. Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series. Such

  2. SSI and structural benchmarks

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Miller, C.A.; Costantino, C.J.; Graves, H.

    1987-01-01

    This paper presents the latest results of the ongoing program entitled Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focused on three tasks, namely: (1) an investigation of ground water effects on the response of Category I structures; (2) the Soil-Structure Interaction Workshop; and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it was concluded that pore water can significantly influence the soil-structure interaction process. That result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was extended to include cases with variable water table depths. In this paper, results related to cut-off depths beyond which pore water effects can be ignored in seismic calculations are addressed. Comprehensive numerical data are given for soil configurations typical of those encountered at nuclear plant sites. These data were generated using a modified version of the SLAM code, which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054), which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on structural benchmarks are described

  3. Benchmarking - a validation of UTDefect

    International Nuclear Information System (INIS)

    Niklasson, Jonas; Bostroem, Anders; Wirdelius, Haakan

    2006-06-01

    New and stronger demands on the reliability of NDE/NDT procedures and methods have stimulated the development of simulation tools for NDT. Modelling of ultrasonic non-destructive testing is useful for a number of reasons, e.g. physical understanding, parametric studies and the qualification of procedures and personnel. The traditional way of qualifying a procedure is to generate a technical justification by employing experimental verification of the chosen technique. The manufacturing of test pieces is often very expensive and time consuming. It also tends to introduce a number of possible misalignments between the actual NDT situation and the proposed experimental simulation. The UTDefect computer code (SUNDT/simSUNDT) has been developed over a decade, together with the Dept. of Mechanics at Chalmers Univ. of Technology, and simulates the entire ultrasonic testing situation. A thoroughly validated model has the ability to be an alternative and a complement to the experimental work in order to reduce the extensive cost. The validation can be accomplished by comparisons with other models, but ultimately by comparisons with experiments. This project addresses the latter alternative, but provides an opportunity to compare, at a later stage, with other software when all data are made public and available. The comparison has been made with experimental data from an international benchmark study initiated by the World Federation of NDE Centers. The experiments were conducted with planar and spherically focused immersion transducers. The defects considered are side-drilled holes, flat-bottomed holes, and a spherical cavity. The data from the experiments are a reference signal used for calibration (the signal from the front surface of the test block at normal incidence) and the raw output from the scattering experiment. In all, more than forty cases have been compared. The agreement between UTDefect and the experiments was in general good (deviation less than 2 dB) when the
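The 2 dB agreement criterion above compares simulated and measured signal amplitudes on a logarithmic scale. A minimal sketch of that comparison; the amplitude values are hypothetical, not taken from the benchmark study:

```python
import math

def amplitude_db(simulated, measured):
    """Deviation between two signal amplitudes in decibels (20*log10)."""
    return 20.0 * math.log10(simulated / measured)

# Hypothetical peak amplitudes from a simulated and a measured A-scan
deviation = amplitude_db(0.92, 1.0)
print(abs(deviation) < 2.0)  # within the 2 dB agreement criterion
```

Note the factor 20 (not 10) because ultrasonic signal amplitudes, not powers, are being compared.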

  4. Benchmarking theoretical formalisms for (p,pn) reactions: The 15C(p,pn)14C case

    Science.gov (United States)

    Yoshida, K.; Gómez-Ramos, M.; Ogata, K.; Moro, A. M.

    2018-02-01

    Background: Proton-induced knockout reactions of the form (p,pN) have experienced a renewed interest in recent years due to the possibility of performing these measurements with rare isotopes, using inverse kinematics. Several theoretical models are being used for the interpretation of these new data, such as the distorted-wave impulse approximation (DWIA), the transition amplitude formulation of the Faddeev equations due to Alt, Grassberger, and Sandhas (FAGS) and, more recently, a coupled-channels method here referred to as transfer-to-the-continuum (TC). Purpose: Our goal is to compare the momentum distributions calculated with the DWIA and TC models for the same reactions, using whenever possible the same inputs (e.g., distorting potential). A comparison with already published results for the FAGS formalism is performed as well. Method: We choose the 15C(p,pn)14C reaction at an incident energy of 420 MeV/u, which has been previously studied with the FAGS formalism. The knocked-out neutron is assumed to be in a 2s single-particle orbital. Longitudinal and transverse momentum distributions are calculated for different assumed separation energies. Results: For all cases considered, we find a very good agreement between DWIA and TC results. The energy dependence of the distorting optical potentials is found to affect in a modest way the shape and magnitude of the momentum distributions. Moreover, when relativistic kinematics corrections are omitted, our calculations reproduce remarkably well the FAGS result. Conclusions: The results found in this work provide confidence in the consistency and accuracy of the DWIA and TC models for analyzing momentum distributions for (p,pn) reactions at intermediate energies.

  5. The role of benchmarking for yardstick competition

    International Nuclear Information System (INIS)

    Burns, Phil; Jenkins, Cloda; Riechmann, Christoph

    2005-01-01

    With the increasing interest in yardstick regulation, there is a need to understand the most appropriate method for realigning tariffs at the outset. Benchmarking is the tool used for such realignment and is therefore a necessary first-step in the implementation of yardstick competition. A number of concerns have been raised about the application of benchmarking, making some practitioners reluctant to move towards yardstick based regimes. We assess five of the key concerns often discussed and find that, in general, these are not as great as perceived. The assessment is based on economic principles and experiences with applying benchmarking to regulated sectors, e.g. in the electricity and water industries in the UK, The Netherlands, Austria and Germany in recent years. The aim is to demonstrate that clarity on the role of benchmarking reduces the concern about its application in different regulatory regimes. We find that benchmarking can be used in regulatory settlements, although the range of possible benchmarking approaches that are appropriate will be small for any individual regulatory question. Benchmarking is feasible as total cost measures and environmental factors are better defined in practice than is commonly appreciated and collusion is unlikely to occur in environments with more than 2 or 3 firms (where shareholders have a role in monitoring and rewarding performance). Furthermore, any concern about companies under-recovering costs is a matter to be determined through the regulatory settlement and does not affect the case for using benchmarking as part of that settlement. (author)

  6. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 2. Generic material: Codes, standards, criteria. Working material

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-07-01

    The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated subsequent to a request from representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consultants' Meeting which followed resulted in a working document for the CRP. It was decided that a benchmark study would be the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested on a full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for WWER-1000 and Paks for WWER-440/213. This volume of working material contains reports related to generic material, namely codes, standards and criteria for benchmark analysis.

  7. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 2. Generic material: Codes, standards, criteria. Working material

    International Nuclear Information System (INIS)

    1995-01-01

    The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated subsequent to a request from representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consultants' Meeting which followed resulted in a working document for the CRP. It was decided that a benchmark study would be the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested on a full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for WWER-1000 and Paks for WWER-440/213. This volume of working material contains reports related to generic material, namely codes, standards and criteria for benchmark analysis

  8. Results of a benchmark study for the seismic analysis and testing of WWER type NPPs: Overview and general comparison for Paks NPP

    International Nuclear Information System (INIS)

    Guerpinar, A.; Zola, M.

    2001-01-01

    Within the framework of the IAEA coordinated 'Benchmark Study for the seismic analysis and testing of WWER-type NPPs', in-situ dynamic structural testing activities have been performed at the Paks Nuclear Power Plant in Hungary. The specific objective of the investigation was to obtain experimental data on the actual dynamic structural behaviour of the plant's major constructions and equipment under normal operating conditions, to enable a valid seismic safety review to be made. This paper reports on the comparison of the results obtained from the experimental activities performed by ISMES with those from analytical studies performed for the Coordinated Research Programme (CRP) by Siemens (Germany), EQE (Bulgaria), Central Laboratory (Bulgaria), M. David Consulting (Czech Republic) and IVO (Finland). The paper gives a concise description of the conducted experiments and presents some results, regarding in particular the free-field excitations produced during the earthquake-simulation experiments and an experiment on the global dynamic soil-structure interaction effects at the base of the reactor containment structure. The specific objective of the experimental investigation was to obtain valid data on the dynamic behaviour of the plant's major constructions, under normal operating conditions, to support the analytical assessment of their actual seismic safety. The full-scale dynamic structural testing activities were performed in December 1994 at the Paks (H) Nuclear Power Plant. The Paks NPP site was subjected to low-level earthquake-like ground shaking, through appropriately devised underground explosions, and the dynamic response of the important structures of the plant's first reactor unit was measured and digitally recorded, with the whole nuclear power plant under normal operating conditions. In-situ free-field response was measured concurrently and, moreover, site-specific geophysical and seismological data were simultaneously

  9. Benchmark levels for the consumptive water footprint of crop production for different environmental conditions: a case study for winter wheat in China

    Science.gov (United States)

    Zhuo, La; Mekonnen, Mesfin M.; Hoekstra, Arjen Y.

    2016-11-01

    Meeting growing food demands while simultaneously shrinking the water footprint (WF) of agricultural production is one of the greatest societal challenges. Benchmarks for the WF of crop production can serve as a reference and be helpful in setting WF reduction targets. The consumptive WF of crops, the consumption of rainwater stored in the soil (green WF), and the consumption of irrigation water (blue WF) over the crop growing period vary spatially and temporally depending on environmental factors like climate and soil. This study explores which environmental factors should be distinguished when determining benchmark levels for the consumptive WF of crops. To this end, we determine benchmark levels for the consumptive WF of winter wheat production in China for all separate years in the period 1961-2008, for rain-fed vs. irrigated croplands, for wet vs. dry years, for warm vs. cold years, for four different soil classes, and for two different climate zones. We simulate consumptive WFs of winter wheat production with the crop water productivity model AquaCrop at a 5 by 5 arcmin resolution, accounting for water stress only. The results show that (i) benchmark levels determined for individual years for the country as a whole remain within a range of ±20 % around long-term mean levels over 1961-2008, (ii) the WF benchmarks for irrigated winter wheat are 8-10 % larger than those for rain-fed winter wheat, (iii) WF benchmarks for wet years are 1-3 % smaller than for dry years, (iv) WF benchmarks for warm years are 7-8 % smaller than for cold years, (v) WF benchmarks differ by about 10-12 % across different soil texture classes, and (vi) WF benchmarks for the humid zone are 26-31 % smaller than for the arid zone, which has relatively higher reference evapotranspiration in general and lower yields in rain-fed fields. We conclude that when determining benchmark levels for the consumptive WF of a crop, it is useful to primarily distinguish between different climate zones. If
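Benchmark levels of this kind are typically set at a low percentile of the WF distribution across producing fields, so that the benchmark equals the WF not exceeded by the best-performing fraction of producers. A hedged sketch; the 25 % fraction and the sample WF values are illustrative assumptions, not the study's data:

```python
# Sketch: a water-footprint benchmark taken as the WF level (m3/t)
# achieved by the best `fraction` of fields. Fraction and data are
# illustrative, not from the study.

def wf_benchmark(wf_values, fraction=0.25):
    """WF level not exceeded by the best `fraction` of fields."""
    ranked = sorted(wf_values)  # lower WF per tonne is better
    index = max(0, int(round(fraction * len(ranked))) - 1)
    return ranked[index]

field_wfs = [820, 950, 1100, 1300, 760, 1020, 890, 1480]  # hypothetical m3/t
print(wf_benchmark(field_wfs, 0.25))
```

Computing such a benchmark separately per climate zone (as the study recommends) then amounts to grouping `field_wfs` by zone before calling the function.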

  10. Casing pull tests for directionally drilled environmental wells

    International Nuclear Information System (INIS)

    Staller, G.E.; Wemple, R.P.; Layne, R.R.

    1994-11-01

    A series of tests to evaluate several types of environmental well casings has been conducted by Sandia National Laboratories (SNL) and its industrial partner, The Charles Machine Works, Inc. (CMW). A test bed was constructed at the CMW test range to model a typical shallow, horizontal, directionally drilled wellbore. Four different types of casings were pulled through this test bed. The loads required to pull the casings through the test bed and the condition of the casing material were documented during the pulling operations. An additional test was conducted to compare test bed and actual wellbore casing pull loads. A directionally drilled well was emplaced by CMW to closely match the test bed. An instrumented casing was installed in the well and the pull loads recorded. The completed tests are reviewed and the results reported

  11. Casing pull tests for directionally drilled environmental wells

    Energy Technology Data Exchange (ETDEWEB)

    Staller, G.E.; Wemple, R.P. [Sandia National Labs., Albuquerque, NM (United States); Layne, R.R. [Charles Machine Works, Inc., Perry, OK (United States)

    1994-11-01

    A series of tests to evaluate several types of environmental well casings has been conducted by Sandia National Laboratories (SNL) and its industrial partner, The Charles Machine Works, Inc. (CMW). A test bed was constructed at the CMW test range to model a typical shallow, horizontal, directionally drilled wellbore. Four different types of casings were pulled through this test bed. The loads required to pull the casings through the test bed and the condition of the casing material were documented during the pulling operations. An additional test was conducted to compare test bed and actual wellbore casing pull loads. A directionally drilled well was emplaced by CMW to closely match the test bed. An instrumented casing was installed in the well and the pull loads recorded. The completed tests are reviewed and the results reported.

  12. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    Bulej, Lubomír

    2005-01-01

    The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specifically, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking to a real software project and conclude with a glimpse at the challenges for the future.
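The core of regression benchmarking, comparing repeated timings of a candidate build against a baseline and flagging slowdowns, can be sketched as below. The 10 % threshold and the timing samples are assumptions for illustration; real projects typically apply statistical tests over many runs rather than a fixed relative threshold:

```python
import statistics

def is_regression(baseline_ms, candidate_ms, threshold=0.10):
    """Flag a performance regression if the candidate's mean run time
    exceeds the baseline's mean by more than `threshold` (relative)."""
    base = statistics.mean(baseline_ms)
    cand = statistics.mean(candidate_ms)
    return (cand - base) / base > threshold

baseline = [102.0, 98.5, 101.2, 99.8]    # hypothetical timings (ms)
candidate = [115.3, 117.9, 114.2, 116.5]
print(is_regression(baseline, candidate))
```

Running such a check on every build is what turns ordinary benchmarking into the quality-assurance tool the abstract describes.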

  13. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based...

  14. Benchmarking af kommunernes sagsbehandling

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007, the Danish National Social Appeals Board (Ankestyrelsen) is required to carry out benchmarking of the quality of the municipalities' case processing. The purpose of the benchmarking is to develop the design of the practice reviews with a view to better follow-up, and to improve the municipalities' case processing. This working paper discusses methods for benchmarking...

  15. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  16. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1

    International Nuclear Information System (INIS)

    Van Der Marck, S. C.

    2012-01-01

    Three nuclear data libraries have been tested extensively using criticality safety benchmark calculations. The three libraries are the new release of the US library ENDF/B-VII.1 (2011), the new release of the Japanese library JENDL-4.0 (2011), and the OECD/NEA library JEFF-3.1 (2006). All calculations were performed with the continuous-energy Monte Carlo code MCNP (version 4C3, as well as version 6-beta1). Around 2000 benchmark cases from the International Handbook of Criticality Safety Benchmark Experiments (ICSBEP) were used. The results were analyzed per ICSBEP category, and per element. Overall, the three libraries show similar performance on most criticality safety benchmarks. The largest differences are probably caused by elements such as Be, C, Fe, Zr, W. (authors)
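Criticality results such as these are commonly summarized as C/E ratios (calculated k-eff divided by the benchmark, or "expected", k-eff), averaged per ICSBEP category. A minimal sketch; the k-eff values below are invented for illustration, not results from the paper:

```python
# Sketch: summarize criticality benchmarks as C/E ratios and their
# per-category mean. A mean C/E close to 1 indicates that the library
# reproduces the benchmark category well on average.

def c_over_e(calculated, benchmark):
    """Ratio of calculated k-eff to the benchmark k-eff."""
    return calculated / benchmark

def mean_ce(cases):
    """Mean C/E over (calculated, benchmark) pairs in one category."""
    return sum(c / e for c, e in cases) / len(cases)

# Hypothetical (calculated, benchmark) k-eff pairs for one category
leu_comp_therm = [(0.9981, 1.0000), (1.0012, 1.0000), (0.9995, 0.9998)]
print(round(mean_ce(leu_comp_therm), 4))
```

In practice each benchmark k-eff also carries an experimental uncertainty, so deviations of C/E from unity are judged against that uncertainty rather than against zero.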

  17. Medical school benchmarking - from tools to programmes.

    Science.gov (United States)

    Wilkinson, Tim J; Hudson, Judith N; Mccoll, Geoffrey J; Hu, Wendy C Y; Jolly, Brian C; Schuwirth, Lambert W T

    2015-02-01

    Benchmarking among medical schools is essential, but may result in unwanted effects. To apply a conceptual framework to selected benchmarking activities of medical schools. We present an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. The analogy generated a conceptual framework that tested five questions to be considered in relation to benchmarking: what is the purpose? what are the attributes of value? what are the best tools to assess the attributes of value? what happens to the results? and, what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.

  18. HYBRID DATA APPROACH FOR SELECTING EFFECTIVE TEST CASES DURING THE REGRESSION TESTING

    OpenAIRE

    Mohan, M.; Shrimali, Tarun

    2017-01-01

    In the software industry, software testing has become increasingly important across the entire software development life cycle. Software testing is one of the fundamental components of software quality assurance. The Software Testing Life Cycle (STLC) is the process involved in testing the complete software, which includes Regression Testing, Unit Testing, Smoke Testing, Integration Testing, Interface Testing, System Testing, etc. In the regression testing phase of the STLC, test case selection is one of the most important...

  19. EGS4 benchmark program

    International Nuclear Information System (INIS)

    Yasu, Y.; Hirayama, H.; Namito, Y.; Yashiro, S.

    1995-01-01

    This paper proposes the EGS4 Benchmark Suite, which consists of three programs called UCSAMPL4, UCSAMPL4I and XYZDOS. The paper also evaluates optimization methods of recent RISC/UNIX systems, such as IBM, HP, DEC, Hitachi and Fujitsu, for the benchmark suite. When particular compiler options and math libraries were included in the evaluation process, systems performed significantly better. The observed performance of some of the RISC/UNIX systems was beyond that of some so-called mainframes from IBM, Hitachi or Fujitsu. The computer performance of the EGS4 Code System on an HP9000/735 (99 MHz) was defined to be the unit of performance (EGS4 Unit). The EGS4 Benchmark Suite was also run on various PCs, such as Pentium, i486 and DEC Alpha machines. The performance of recent fast PCs reaches that of recent RISC/UNIX systems. The benchmark programs have also been evaluated for correlation with industry benchmark programs, namely SPECmark. (author)

  20. Reference materials and representative test materials: the nanotechnology case

    International Nuclear Information System (INIS)

    Roebben, G.; Rasmussen, K.; Kestens, V.; Linsinger, T. P. J.; Rauscher, H.; Emons, H.; Stamm, H.

    2013-01-01

    An increasing number of chemical, physical and biological tests are performed on manufactured nanomaterials for scientific and regulatory purposes. Existing test guidelines and measurement methods are not always directly applicable to or relevant for nanomaterials. Therefore, it is necessary to verify the use of the existing methods with nanomaterials, thereby identifying where modifications are needed, and where new methods need to be developed and validated. Efforts for verification, development and validation of methods as well as quality assurance of (routine) test results significantly benefit from the availability of suitable test and reference materials. This paper provides an overview of the existing types of reference materials and introduces a new class of test materials for which the term ‘representative test material’ is proposed. The three generic concepts of certified reference material, reference material (non-certified) and representative test material constitute a comprehensive system of benchmarks that can be used by all measurement and testing communities, regardless of their specific discipline. This paper illustrates this system with examples from the field of nanomaterials, including reference materials and representative test materials developed at the European Commission’s Joint Research Centre, in particular at the Institute for Reference Materials and Measurements (IRMM), and at the Institute for Health and Consumer Protection (IHCP).

  1. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  2. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  3. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  4. Vver-1000 Mox core computational benchmark

    International Nuclear Information System (INIS)

    2006-01-01

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities includes benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  5. Integration testing through reusing representative unit test cases for high-confidence medical software.

    Science.gov (United States)

    Shin, Youngsul; Choi, Yunja; Lee, Woo Jin

    2013-06-01

    As medical software becomes larger, more complex, and more connected with other devices, finding faults in integrated software modules gets more difficult and time consuming. Existing integration testing typically takes a black-box approach, which treats the target software as a black box and selects test cases without considering the internal behavior of each software module. Though it could be cost-effective, this black-box approach cannot thoroughly test interaction behavior among integrated modules and might leave critical faults undetected, which should not happen in safety-critical systems such as medical software. This work anticipates that information on internal behavior is necessary even for integration testing to define thorough test cases for critical software and proposes a new integration testing method that reuses test cases from unit testing. The goal is to provide a cost-effective method to detect subtle interaction faults at the integration testing phase by reusing the knowledge obtained from the unit testing phase. The suggested approach notes that the test cases for unit testing include knowledge on the internal behavior of each unit, and extracts test cases for integration testing from the unit test cases for given test criteria. The extracted representative test cases are connected with functions under test using the state domain, and a single test sequence to cover the test cases is produced. By means of reusing unit test cases, the tester has effective test cases to examine diverse execution paths and find interaction faults without analyzing complex modules. The produced test sequence can have test coverage as high as the unit testing coverage and its length is close to the length of optimal test sequences. Copyright © 2013 Elsevier Ltd. All rights reserved.
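The chaining of unit-level test cases into one integration test sequence over a state domain, as described above, can be illustrated with a toy example. The state machine, event names, and the greedy walk below are invented for illustration; the paper's actual state-domain extraction algorithm is more elaborate.

```python
# Sketch: chain unit-level test cases, each modeled as a transition
# (state, input, next_state), into a single integration test sequence
# that covers all of them. Greedy walk over the shared state domain.

unit_cases = [
    ("idle", "start", "monitoring"),
    ("monitoring", "alarm", "alerting"),
    ("alerting", "ack", "monitoring"),
    ("monitoring", "stop", "idle"),
]

def build_sequence(cases, initial="idle"):
    """Produce one input sequence covering every unit test case."""
    remaining, seq, state = list(cases), [], initial
    while remaining:
        # pick any uncovered case that starts in the current state
        nxt = next((c for c in remaining if c[0] == state), None)
        if nxt is None:          # dead end: restart from the initial state
            state = initial
            continue
        remaining.remove(nxt)
        seq.append(nxt[1])       # record the input event
        state = nxt[2]           # follow the transition
    return seq

print(build_sequence(unit_cases))  # one sequence covering all four cases
```

The single resulting sequence exercises every covered transition, which mirrors the paper's claim that the integration sequence can reach the unit-level coverage while staying close to optimal length.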

  6. SPOC Benchmark Case: SNRE Model

    Energy Technology Data Exchange (ETDEWEB)

    Vishal Patel; Michael Eades; Claude Russel Joyner II

    2016-02-01

    The Small Nuclear Rocket Engine (SNRE) was modeled in the Center for Space Nuclear Research’s (CSNR) Space Propulsion Optimization Code (SPOC). SPOC aims to create nuclear thermal propulsion (NTP) geometries quickly to perform parametric studies on design spaces of historic and new NTP designs. The SNRE geometry was modeled in SPOC and a critical core with a reasonable amount of criticality margin was found. The fuel, tie-tube, reflector, and control drum masses were predicted rather well. These are all very important for neutronics calculations, so the active reactor geometries created with SPOC can continue to be trusted. Thermal calculations of the average and hot fuel channels agreed very well. The specific impulse calculations used historically and in SPOC disagree, so mass flow rates and impulses differed. Modeling peripheral and power balance components that do not affect the nuclear characteristics of the core is not a feature of SPOC, and as such, these components should continue to be designed using other tools. A full paper detailing the available SNRE data and comparisons with SPOC outputs will be submitted as a follow-up to this abstract.

  7. Using a fuzzy comprehensive evaluation method to determine product usability: A test case.

    Science.gov (United States)

    Zhou, Ronggang; Chan, Alan H S

    2017-01-01

    In order to take into account the inherent uncertainties during product usability evaluation, Zhou and Chan [1] proposed a comprehensive method of usability evaluation for products by combining the analytic hierarchy process (AHP) and fuzzy evaluation methods for synthesizing performance data and subjective response data. This method was designed to provide an integrated framework combining the inevitably vague judgments from the multiple stages of the product evaluation process. In order to illustrate the effectiveness of the model, this study used a summative usability test case to assess the application and strength of the general fuzzy usability framework. To test the proposed fuzzy usability evaluation framework [1], a standard summative usability test was conducted to benchmark the overall usability of a specific network management software application. Based on the test data, the fuzzy method was applied to incorporate both the usability scores and the uncertainties involved in the multiple components of the evaluation. Then, with Monte Carlo simulation procedures, confidence intervals were used to compare the reliabilities of the fuzzy approach and two typical conventional methods that combine metrics based on percentages. This case study showed that the fuzzy evaluation technique can be applied successfully to combine summative usability testing data into an overall usability quality for the network software evaluated. The greater differences in confidence interval widths between the equal-percentage averaging method and the weighted evaluation methods, including the weighted percentage averaging method, verified the strength of the fuzzy method.
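The core synthesis step of a fuzzy comprehensive evaluation, combining AHP-derived criterion weights with a fuzzy membership matrix and then defuzzifying to a single score, can be sketched as follows. All numbers here are invented for illustration; they are not the study's data.

```python
# Fuzzy comprehensive evaluation, minimal sketch.
# Rows of R: usability criteria (hypothetical); columns: membership in
# four fuzzy rating grades, from "poor" to "excellent".
R = [
    [0.1, 0.2, 0.4, 0.3],   # effectiveness
    [0.0, 0.3, 0.5, 0.2],   # efficiency
    [0.2, 0.3, 0.3, 0.2],   # satisfaction
]
w = [0.5, 0.3, 0.2]          # AHP-derived criterion weights (sum to 1)

# Weighted-average fuzzy operator M(*, +): b_j = sum_i w_i * R[i][j]
b = [sum(wi * row[j] for wi, row in zip(w, R)) for j in range(len(R[0]))]

grade_scores = [25, 50, 75, 100]   # defuzzification scale for the grades
overall = sum(bj * g for bj, g in zip(b, grade_scores))
print("membership over grades:", [round(x, 3) for x in b])
print("overall usability score:", round(overall, 1))
```

The Monte Carlo step mentioned in the abstract would then resample the entries of R (and possibly w) from their uncertainty distributions and recompute `overall` to build a confidence interval.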

  8. Teledyne Energy Systems, Inc., Proton Exchange Member (PEM) Fuel Cell Engineering Model Powerplant. Test Report: Initial Benchmark Tests in the Original Orientation

    Science.gov (United States)

    Loyselle, Patricia; Prokopius, Kevin

    2011-01-01

    Proton Exchange Membrane (PEM) fuel cell technology is the leading candidate to replace the alkaline fuel cell technology, currently used on the Shuttle, for future space missions. During a 5-yr development program, a PEM fuel cell powerplant was developed. This report details the initial performance evaluation test results of the powerplant.

  9. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.

    1980-02-01

    Shielding benchmark problems prepared by Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by Shielding Laboratory in Japan Atomic Energy Research Institute. Fourteen shielding benchmark problems are presented newly in addition to twenty-one problems proposed already, for evaluating the calculational algorithm and accuracy of computer codes based on discrete ordinates method and Monte Carlo method and for evaluating the nuclear data used in codes. The present benchmark problems are principally for investigating the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  10. Benchmarks for enhanced network performance: hands-on testing of operating system solutions to identify the optimal application server platform for the Graduate School of Business and Public Policy

    OpenAIRE

    Burman, Rex; Coca, Anthony R.

    2010-01-01

    MBA Professional Report. With the release of next generation operating systems, network managers face the prospect of upgrading their systems based on the assumption that "newer is better". The Graduate School of Business and Public Policy is in the process of upgrading its network application server, and one of the most important decisions to be made is which server operating system to use. Based on hands-on benchmark tests and analysis, we aim to assist the GSBPP by providing benchma...

  11. Assessment of the monitoring and evaluation system for integrated community case management (ICCM) in Ethiopia: a comparison against global benchmark indicators.

    Science.gov (United States)

    Mamo, Dereje; Hazel, Elizabeth; Lemma, Israel; Guenther, Tanya; Bekele, Abeba; Demeke, Berhanu

    2014-10-01

    Program managers require feasible, timely, reliable, and valid measures of iCCM implementation to identify problems and assess progress. The global iCCM Task Force developed benchmark indicators to guide implementers to develop or improve monitoring and evaluation (M&E) systems. This study assesses Ethiopia's iCCM M&E system by determining the availability and feasibility of the iCCM benchmark indicators. We conducted a desk review of iCCM policy documents, monitoring tools, survey reports, and other relevant documents, and key informant interviews with government and implementing partners involved in iCCM scale-up and M&E. Currently, Ethiopia collects data to inform most (70% [33/47]) iCCM benchmark indicators, and modest extra effort could boost this to 83% (39/47). Eight (17%) are not available given the current system. Most benchmark indicators that track coordination and policy, human resources, service delivery and referral, supervision, and quality assurance are available through the routine monitoring systems or periodic surveys. Indicators for supply chain management are less available due to limited consumption data and a weak link with treatment data. Little information is available on iCCM costs. Benchmark indicators can detail the status of iCCM implementation; however, some indicators may not fit country priorities, and others may be difficult to collect. The government of Ethiopia and partners should review and prioritize the benchmark indicators to determine which should be included in the routine M&E system, especially since iCCM data are being reviewed for addition to the HMIS. Moreover, the Health Extension Worker's reporting burden can be minimized by an integrated reporting approach.

  12. IPRT polarized radiative transfer model intercomparison project - Three-dimensional test cases (phase B)

    Science.gov (United States)

    Emde, Claudia; Barlakas, Vasileios; Cornet, Céline; Evans, Frank; Wang, Zhen; Labonotte, Laurent C.; Macke, Andreas; Mayer, Bernhard; Wendisch, Manfred

    2018-04-01

    Initially unpolarized solar radiation becomes polarized by scattering in the Earth's atmosphere. Molecular (Rayleigh) scattering in particular polarizes electromagnetic radiation, but so does scattering by aerosols, cloud droplets (Mie scattering) and ice crystals. Each atmospheric constituent produces a characteristic polarization signal, thus spectro-polarimetric measurements are frequently employed for remote sensing of aerosol and cloud properties. Retrieval algorithms require efficient radiative transfer models. Usually, these apply the plane-parallel approximation (PPA), assuming that the atmosphere consists of horizontally homogeneous layers. This makes it possible to solve the vector radiative transfer equation (VRTE) efficiently. For remote sensing applications, the radiance is considered constant over the instantaneous field-of-view of the instrument, and each sensor element is treated independently in the plane-parallel approximation, neglecting horizontal radiation transport between adjacent pixels (Independent Pixel Approximation, IPA). In order to estimate the errors due to the IPA, three-dimensional (3D) vector radiative transfer models are required. So far, only a few such models exist. Therefore, the International Polarized Radiative Transfer (IPRT) working group of the International Radiation Commission (IRC) has initiated a model intercomparison project in order to provide benchmark results for polarized radiative transfer. The group has already performed an intercomparison for one-dimensional (1D) multi-layer test cases [phase A, 1]. This paper presents the continuation of the intercomparison project (phase B) for 2D and 3D test cases: a step cloud, a cubic cloud, and a more realistic scenario including a 3D cloud field generated by a Large Eddy Simulation (LES) model and typical background aerosols. 
The commonly established benchmark results for 3D polarized radiative transfer are available at the IPRT website (http
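The Independent Pixel Approximation described above can be illustrated with a deliberately stripped-down scalar example: each column of a 2D optical-depth field is treated as its own plane-parallel atmosphere, and horizontal photon transport between columns is ignored. This is only a sketch of the approximation's structure; a real polarized model solves the full VRTE for the Stokes vector, not just direct-beam transmittance, and the optical depths below are hypothetical.

```python
import math

# Independent Pixel Approximation (IPA), minimal scalar sketch:
# per-column Beer-Lambert direct-beam transmittance, with the sun at
# cosine of solar zenith angle mu0. Horizontal transport between
# adjacent columns is neglected, which is exactly the IPA error that
# 3D benchmark models are built to quantify.

def ipa_direct_transmittance(tau_columns, mu0=1.0):
    """Direct-beam transmittance for each independent pixel/column."""
    return [math.exp(-tau / mu0) for tau in tau_columns]

step_cloud = [0.0, 0.0, 2.0, 2.0, 8.0, 8.0]   # hypothetical step-cloud optical depths
print([round(t, 4) for t in ipa_direct_transmittance(step_cloud)])
```

In a 3D model the sharp jump at the cloud edge would be smoothed by photons leaking sideways between columns; the IPA result keeps the jump, which is the kind of discrepancy the phase B step-cloud case is designed to expose.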

  13. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  14. Benchmarking Swiss electricity grids

    International Nuclear Information System (INIS)

    Walti, N.O.; Weber, Ch.

    2001-01-01

    This extensive article describes a pilot benchmarking project initiated by the Swiss Association of Electricity Enterprises that assessed 37 Swiss utilities. The data collected from these utilities on a voluntary basis included data on technical infrastructure, investments and operating costs. These various factors are listed and discussed in detail. The assessment methods and rating mechanisms that provided the benchmarks are discussed and the results of the pilot study are presented that are to form the basis of benchmarking procedures for the grid regulation authorities under the planned Switzerland's electricity market law. Examples of the practical use of the benchmarking methods are given and cost-efficiency questions still open in the area of investment and operating costs are listed. Prefaces by the Swiss Association of Electricity Enterprises and the Swiss Federal Office of Energy complete the article

  15. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    . The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  16. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This data compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings....

  17. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    .... The design of this study included two parts: (1) eleven expert panelists involved in a Delphi technique to identify and rate importance of foodservice performance measures and rate the importance of benchmarking activities, and (2...

  18. International handbook of evaluated criticality safety benchmark experiments

    International Nuclear Information System (INIS)

    2010-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Organization for Economic Cooperation and Development - Nuclear Energy Agency (OECD-NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculational techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirements and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span over 55,000 pages and contain 516 evaluations with benchmark specifications for 4,405 critical, near critical, or subcritical configurations, 24 criticality alarm placement / shielding configurations with multiple dose points for each, and 200 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications. Experiments that are found unacceptable for use as criticality safety benchmark experiments are discussed in these evaluations; however, benchmark specifications are not derived for such experiments (in some cases models are provided in an appendix). Approximately 770 experimental configurations are categorized as unacceptable for use as criticality safety benchmark experiments. 
Additional evaluations are in progress and will be

  19. REVISED STREAM CODE AND WASP5 BENCHMARK

    International Nuclear Information System (INIS)

    Chen, K

    2005-01-01

    STREAM is an emergency response code that predicts downstream pollutant concentrations for releases from the SRS area to the Savannah River. The STREAM code uses an algebraic equation to approximate the solution of the one-dimensional advective transport differential equation. This approach generates spurious oscillations in the concentration profile when modeling long duration releases. To improve the capability of the STREAM code to model long-term releases, its calculation module was replaced by the WASP5 code. WASP5 is a US EPA water quality analysis program that simulates one-dimensional pollutant transport through surface water. Test cases were performed to compare the revised version of STREAM with the existing version. For continuous releases, results predicted by the revised STREAM code agree with physical expectations. The WASP5 code was benchmarked with the US EPA 1990 and 1991 dye tracer studies, in which the transport of the dye was measured from its release at the New Savannah Bluff Lock and Dam downstream to Savannah. The peak concentrations predicted by WASP5 agreed with the measurements within ±20.0%. The transport times of the dye concentration peak predicted by WASP5 agreed with the measurements within ±3.6%. These benchmarking results demonstrate that STREAM should be capable of accurately modeling releases from SRS outfalls.
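The spurious oscillations the abstract attributes to the algebraic approximation of the 1D advective transport equation are a classic symptom of non-monotone discretizations. As a generic illustration (not the actual STREAM or WASP5 algorithm), a first-order upwind scheme stays oscillation-free whenever the Courant number satisfies the CFL condition:

```python
# One-dimensional advection dC/dt + u * dC/dx = 0 solved with a
# first-order upwind scheme. Upwinding is monotone -- it produces no
# spurious over- or undershoots -- as long as the Courant number
# u * dt / dx stays at or below 1.

def advect_upwind(conc, u, dx, dt, steps):
    courant = u * dt / dx
    assert 0.0 <= courant <= 1.0, "CFL condition violated"
    c = list(conc)
    for _ in range(steps):
        # inflow boundary held at the upstream value c[0]; each interior
        # update is a convex combination of a cell and its upstream neighbor
        c = [c[0]] + [c[i] - courant * (c[i] - c[i - 1]) for i in range(1, len(c))]
    return c

release = [1.0] * 5 + [0.0] * 15        # hypothetical long-duration release front
profile = advect_upwind(release, u=0.5, dx=1.0, dt=1.0, steps=10)
assert all(0.0 <= x <= 1.0 for x in profile)  # monotone: no oscillations
```

Because each update is a convex combination of neighboring values, concentrations can never leave the range of the initial data, which is precisely the property the original algebraic approximation lacked for long releases.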

  20. Accelerator shielding benchmark problems

    International Nuclear Information System (INIS)

    Hirayama, H.; Ban, S.; Nakamura, T.

    1993-01-01

    Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author)

  1. Shielding benchmark problems

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Kawai, Masayoshi; Nakazawa, Masaharu.

    1978-09-01

    Shielding benchmark problems were prepared by the Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design of the Atomic Energy Society of Japan, and compiled by the Shielding Laboratory of Japan Atomic Energy Research Institute. Twenty-one kinds of shielding benchmark problems are presented for evaluating the calculational algorithm and the accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method and for evaluating the nuclear data used in the codes. (author)

  2. Benchmarking electricity distribution

    Energy Technology Data Exchange (ETDEWEB)

    Watts, K. [Department of Justice and Attorney-General, QLD (Australia)

    1995-12-31

    Benchmarking has been described as a method of continuous improvement that involves an ongoing and systematic evaluation and incorporation of external products, services and processes recognised as representing best practice. It is a management tool similar to total quality management (TQM) and business process re-engineering (BPR), and is best used as part of a total package. This paper discusses benchmarking models and approaches and suggests a few key performance indicators that could be applied to benchmarking electricity distribution utilities. Some recent benchmarking studies are used as examples and briefly discussed. It is concluded that benchmarking is a strong tool to be added to the range of techniques that can be used by electricity distribution utilities and other organizations in search of continuous improvement, and that there is now a high level of interest in Australia. Benchmarking represents an opportunity for organizations to approach learning from others in a disciplined and highly productive way, which will complement the other micro-economic reforms being implemented in Australia. (author). 26 refs.

  3. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

    Tourism development has an irreplaceable role in the regional policy of almost all countries. This is due to its undeniable benefits for the local population with regard to the economic, social and environmental spheres. Tourist destinations compete for visitors in the tourism market and subsequently get into a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and final strategies is a key factor of competitiveness. Even though the tourism sector is not a typical field where benchmarking methods are widely used, such approaches can be successfully applied. The paper focuses on key phases of the benchmarking process, which lie in the search for suitable referencing partners. The partners are consequently selected to meet general requirements to ensure the quality of strategies. Following from this, some specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies of regions in the Czech Republic, Slovakia and Great Britain. In this way it validates the selected criteria in the frame of the international environment. Hence, it makes it possible to find strengths and weaknesses of selected strategies and at the same time facilitates the discovery of suitable benchmarking partners.

  4. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 1. Data related to sites and plants: Paks NPP, Kozloduy NPP. Working material

    International Nuclear Information System (INIS)

    1995-01-01

    The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated subsequent to the request from representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consultants' Meeting which followed resulted in producing a working document for the CRP. It was decided that a benchmark study would be the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested on a full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for WWER-1000 and Paks for WWER-440/213. This volume of Working material contains reports on data related to the sites and NPPs of Paks and Kozloduy.

  5. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 1. Data related to sites and plants: Paks NPP, Kozloduy NPP. Working material

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-07-01

    The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated subsequent to the request from representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consultants' Meeting which followed resulted in producing a working document for the CRP. It was decided that a benchmark study would be the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested on a full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for WWER-1000 and Paks for WWER-440/213. This volume of Working material contains reports on data related to the sites and NPPs of Paks and Kozloduy.

  6. Computerized Adaptive Testing. A Case Study.

    Science.gov (United States)

    1980-12-01

    Vocational Interest Blank in 1927 [Dubois 1970]. 3. German Contributions. Cattell also studied for a period of three years in Leipzig under Wilhelm Wundt in the ... world's first psychological laboratory, founded by Wundt in 1879 [Heidbreder 1933]. William James' laboratory, established at Harvard in 1875, did ... have become important parts of psychological test theory. Under Wundt, Spearman's principal endeavor was experimental psychology, but he also found time

  7. EPRI depletion benchmark calculations using PARAGON

    International Nuclear Information System (INIS)

    Kucukboyaci, Vefa N.

    2015-01-01

    Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • Benchmarks cover a wide range of enrichments, burnups, cooling times, and burnable absorbers, and different depletion and storage conditions. • Results from the PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII based data reduces the excess conservatism and brings the predictions closer to the benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse’s burnup credit methodology uses PARAGON™ (the Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivity has been used to create 11 benchmark cases for burnups of 10, 20, 30, 40, 50, and 60 GWd/MTU and three cooling times (100 h, 5 years, and 15 years). These benchmark cases are analyzed with PARAGON and the SCALE package, and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to confirm that the 5% decrement approach is conservative for determining the depletion uncertainty.

  8. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  9. Test case for a near-surface repository

    Energy Technology Data Exchange (ETDEWEB)

    Elert, M.; Jones, C. [Kemakta Konsult AB, Stockholm (Sweden); Nilsson, L.B. [Swedish Nuclear Fuel and Waste Co, Stockholm (Sweden); Skagius, K.; Wiborgh, M. [Kemakta Konsult AB, Stockholm (Sweden)

    1998-09-01

    A test case is presented for assessment of a near-surface disposal facility for radioactive waste. The case includes waste characterization and repository design, requirements and constraints in an assessment context, scenario development, model description and test calculations. 6 refs, 12 tabs, 16 figs.

  10. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy, in other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs: prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth.

  11. Benchmarking the Netherlands. Benchmarking for growth

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy, in other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs: prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity

  12. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 2B. General material: codes, standards, criteria. Working material

    International Nuclear Information System (INIS)

    1999-01-01

    In August 1991, following the SMiRT-11 Conference in Tokyo, a Technical Committee Meeting was held on the 'Seismic safety issues relating to existing NPPs'. The Proceedings of this TCM were subsequently compiled in an IAEA Working Material. One of the main recommendations of this TCM called for the harmonization of criteria and methods used in Member States in the seismic reassessment and upgrading of existing NPPs. Twenty-four institutions from thirteen countries participated in the CRP named 'Benchmark study for the seismic analysis and testing of WWER type NPPs'. Two types of WWER reactors (WWER-1000 and WWER-440/213) were selected for benchmarking; Kozloduy NPP Units 5/6 and Paks NPP served as the respective prototypes. Consistent with the recommendations of the TCM and the working paper prepared by the subsequent Consultants' Meeting, the focal activity of the CRP was the benchmarking exercises. A similar methodology was followed both for Paks NPP and Kozloduy NPP Unit 5. Firstly, the NPP (mainly the reactor building) was tested using a blast loading generated by a series of explosions from buried TNT charges. Records from this test were obtained at several free-field locations (both downhole and surface), on the foundation mat, at various elevations of the structures, as well as on some tanks and the stack. Then the benchmark participants were provided with structural drawings, soil data and the free-field record of the blast experiment. Their task was to make a blind prediction of the response at preselected locations. The analytical results from these participants were then compared with the results from the test. Although the benchmarking exercises constituted the focus of the CRP, there were many other interesting problems related to the seismic safety of WWER type NPPs which were addressed by the participants. These involved generic studies, i.e. codes and standards used in original WWER designs and their comparison with current international practice; seismic analysis

  13. Higgs Pair Production: Choosing Benchmarks With Cluster Analysis

    CERN Document Server

    Carvalho, Alexandra; Dorigo, Tommaso; Goertz, Florian; Gottardo, Carlo A.; Tosi, Mia

    2016-01-01

    New physics theories often depend on a large number of free parameters. The precise values of those parameters in some cases drastically affect the resulting phenomenology of fundamental physics processes, while in others finite variations can leave it basically invariant at the level of detail experimentally accessible. When designing a strategy for the analysis of experimental data in the search for a signal predicted by a new physics model, it appears advantageous to categorize the parameter space describing the model according to the corresponding kinematical features of the final state. A multi-dimensional test statistic can be used to gauge the degree of similarity in the kinematics of different models; a clustering algorithm using that metric may then allow the division of the space into homogeneous regions, each of which can be successfully represented by a benchmark point. Searches targeting those benchmark points are then guaranteed to be sensitive to a large area of the parameter space. In this doc...
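The clustering strategy this record describes — group parameter points by kinematic similarity, then pick one benchmark per homogeneous region — can be sketched as follows. The kinematic summaries (mean di-Higgs mass and transverse momentum per parameter point), the plain k-means metric, and all numbers are invented stand-ins for the paper's multi-dimensional test statistic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical kinematic summaries: each row is (mean m_HH [GeV], mean pT [GeV])
# produced by one point in the model's parameter space (values are invented).
points = np.vstack([
    rng.normal([400.0, 150.0], 20.0, size=(50, 2)),  # one kinematic regime
    rng.normal([700.0, 300.0], 20.0, size=(50, 2)),  # a clearly distinct regime
])

def kmeans(X, k, iters=50):
    """Plain k-means with deterministic initialization (first/last rows)."""
    centroids = np.stack([X[0], X[-1]]) if k == 2 else X[:k].copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

labels, centroids = kmeans(points, k=2)

# Benchmark point per cluster: the actual parameter point whose kinematics
# lie closest to the cluster centroid.
benchmarks = [
    points[labels == j][np.argmin(np.linalg.norm(points[labels == j] - centroids[j], axis=1))]
    for j in range(2)
]
```

A search optimized for each of the two benchmark points is then, by construction, representative of every parameter point in that point's cluster — the guarantee the abstract claims.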

  14. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure the feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision-support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  15. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Full Text Available Benchmarking is considered one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and to illustrate how the project's systematic implementation led to success.

  16. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  17. Benchmarking, benchmarks, or best practices? Applying quality improvement principles to decrease surgical turnaround time.

    Science.gov (United States)

    Mitchell, L

    1996-01-01

    The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.

  18. Validation test case generation based on safety analysis ontology

    International Nuclear Information System (INIS)

    Fan, Chin-Feng; Wang, Wen-Shing

    2012-01-01

    Highlights: ► Current practice in validation test case generation for nuclear system is mainly ad hoc. ► This study designs a systematic approach to generate validation test cases from a Safety Analysis Report. ► It is based on a domain-specific ontology. ► Test coverage criteria have been defined and satisfied. ► A computerized toolset has been implemented to assist the proposed approach. - Abstract: Validation tests in the current nuclear industry practice are typically performed in an ad hoc fashion. This study presents a systematic and objective method of generating validation test cases from a Safety Analysis Report (SAR). A domain-specific ontology was designed and used to mark up a SAR; relevant information was then extracted from the marked-up document for use in automatically generating validation test cases that satisfy the proposed test coverage criteria; namely, single parameter coverage, use case coverage, abnormal condition coverage, and scenario coverage. The novelty of this technique is its systematic rather than ad hoc test case generation from a SAR to achieve high test coverage.
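The single-parameter coverage criterion described above can be sketched as follows: each monitored parameter in the (here, toy) ontology yields below-range, boundary, nominal, and above-range test cases. The parameter names, ranges, and case labels are hypothetical, not taken from any actual Safety Analysis Report.

```python
# A toy slice of a safety-analysis ontology: each monitored parameter
# with its normal operating range (names and ranges are invented).
parameters = {
    "reactor_pressure_MPa": {"low": 6.0, "high": 7.5},
    "coolant_temp_C": {"low": 280.0, "high": 320.0},
}

def single_parameter_cases(params):
    """Single-parameter coverage: for each parameter, generate one test case
    below range, at each boundary, at a nominal value, and above range."""
    cases = []
    for name, bounds in params.items():
        lo, hi = bounds["low"], bounds["high"]
        span = hi - lo
        for value, expect in [
            (lo - 0.1 * span, "abnormal-low"),
            (lo, "boundary-low"),
            ((lo + hi) / 2, "normal"),
            (hi, "boundary-high"),
            (hi + 0.1 * span, "abnormal-high"),
        ]:
            cases.append({"parameter": name, "value": value, "expected": expect})
    return cases

cases = single_parameter_cases(parameters)
```

The record's other criteria (use case, abnormal condition, and scenario coverage) would extend the same pattern to combinations of parameters and sequences of events rather than single values.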

  19. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm......, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...

  20. An isomeric reaction benchmark set to test if the performance of state-of-the-art density functionals can be regarded as independent of the external potential.

    Science.gov (United States)

    Schwabe, Tobias

    2014-07-28

    Some representative density functionals are assessed for isomerization reactions in which heteroatoms are systematically substituted with heavier members of the same element group. By this, it is investigated if the functional performance depends on the elements involved, i.e. on the external potential imposed by the atomic nuclei. Special emphasis is placed on reliable theoretical reference data and the attempt to minimize basis set effects. Both issues are challenging for molecules including heavy elements. The data suggest that no general bias can be identified for the functionals under investigation except for one case - M11-L. Nevertheless, large deviations from the reference data can be found for all functional approximations in some cases. The average error range for the nine functionals in this test is 17.6 kcal mol(-1). These outliers depreciate the general reliability of density functional approximations.

  1. HPCG Benchmark Technical Specification

    Energy Technology Data Exchange (ETDEWEB)

    Heroux, Michael Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States)

    2013-10-01

    The High Performance Conjugate Gradient (HPCG) benchmark [cite SNL, UTK reports] is a tool for ranking computer systems based on a simple additive Schwarz, symmetric Gauss-Seidel preconditioned conjugate gradient solver. HPCG is similar to the High Performance Linpack (HPL), or Top 500, benchmark [1] in its purpose, but HPCG is intended to better represent how today’s applications perform. In this paper we describe the technical details of HPCG: how it is designed and implemented, what code transformations are permitted and how to interpret and report results.
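The solver kernel named in the abstract — conjugate gradient with a symmetric Gauss-Seidel preconditioner — can be sketched as follows. This is a minimal dense illustration, not the official HPCG reference code; the small 1-D Laplacian stands in for HPCG's 3-D 27-point stencil problem.

```python
import numpy as np

def symgs(A, r):
    """One symmetric Gauss-Seidel sweep, applied as the preconditioner M^{-1} r,
    where M = (D + L) D^{-1} (D + U) for A = L + D + U."""
    DL = np.tril(A)                    # D + L
    DU = np.triu(A)                    # D + U
    D = np.diag(np.diag(A))
    w = np.linalg.solve(DL, r)         # forward sweep
    return np.linalg.solve(DU, D @ w)  # backward sweep

def pcg(A, b, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient with the SYMGS preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = symgs(A, r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = symgs(A, r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test system: the 1-D Laplacian.
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b)
```

The recursive dependencies in the two triangular solves are what make HPCG a better proxy than HPL for memory-bound, sparse real-world applications: the benchmark's performance is dominated by such sweeps rather than by dense matrix-matrix arithmetic.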

  2. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...... not public. The survey is a cooperative project "Benchmarking Danish Industries" with CIP/Aalborg University, the Danish Technological University, the Danish Technological Institute and Copenhagen Business School as consortium partners. The project has been funded by the Danish Agency for Trade and Industry...

  3. RB reactor benchmark cores

    International Nuclear Information System (INIS)

    Pesic, M.

    1998-01-01

    A selected set of the RB reactor benchmark cores is presented in this paper. The first results of the validation of the well-known Monte Carlo code MCNP™ and the adjoining neutron cross-section libraries are given. They confirm the idea behind the proposal of the new U-D2O criticality benchmark system and support the intention to include this system in the next edition of the recent OECD/NEA project, the International Handbook of Evaluated Criticality Safety Benchmark Experiments, in the near future. (author)

  4. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; and provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  5. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Full Text Available Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
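The scoring idea sketched in the abstract — per-process data–model mismatches normalized, compared to an acceptance threshold, and combined with weights — might look roughly like the following. This is an illustrative sketch only; the process names, weights, and the linear scoring rule are invented, not the framework's actual metrics.

```python
import numpy as np

def nrmse(model, obs):
    """Root-mean-square error normalized by the standard deviation of the observations."""
    return np.sqrt(np.mean((model - obs) ** 2)) / np.std(obs)

def benchmark_score(results, weights, threshold=1.0):
    """Combine per-process mismatches into one weighted score in [0, 1].
    A process scores 1 when its NRMSE is 0 and 0 once it reaches the threshold."""
    total = 0.0
    for process, (model, obs) in results.items():
        e = nrmse(np.asarray(model), np.asarray(obs))
        total += weights[process] * max(0.0, 1.0 - e / threshold)
    return total / sum(weights.values())

# Toy data: one well-simulated process, one poorly simulated process.
obs = np.array([1.0, 2.0, 3.0, 4.0])
results = {
    "carbon_flux": (obs * 1.05, obs),  # small mismatch
    "energy_flux": (obs * 2.0, obs),   # large mismatch, beyond the threshold
}
weights = {"carbon_flux": 2.0, "energy_flux": 1.0}
score = benchmark_score(results, weights)
```

In this toy run the poorly simulated process contributes nothing, so the combined score immediately points to `energy_flux` as the weak spot, which is exactly the diagnostic role the framework assigns to its metrics.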

  6. Herbalife hepatotoxicity: Evaluation of cases with positive reexposure tests.

    Science.gov (United States)

    Teschke, Rolf; Frenzel, Christian; Schulze, Johannes; Schwarzenboeck, Alexander; Eickhoff, Axel

    2013-07-27

    To analyze the validity of applied test criteria and causality assessment methods in assumed Herbalife hepatotoxicity with positive reexposure tests. We searched the Medline database for suspected cases of Herbalife hepatotoxicity and retrieved 53 cases including eight cases with a positive unintentional reexposure and a high causality level for Herbalife. First, analysis of these eight cases focused on the data quality of the positive reexposure cases, requiring a baseline value of alanine aminotransferase (ALT) Herbalife in these eight cases were probable (n = 1), unlikely (n = 4), and excluded (n = 3). Confounding variables included low data quality, alternative diagnoses, poor exclusion of important other causes, and comedication with drugs and herbs in 6/8 cases. More specifically, problems were evident in some cases regarding temporal association, daily doses, exact start and end dates of product use, actual data of laboratory parameters such as ALT, and exact dechallenge characteristics. Shortcomings included scattered exclusion of hepatitis A-C, cytomegalovirus and Epstein-Barr virus infection, with only globally presented or lacking parameters. Hepatitis E virus infection was considered in one single patient and found positive; infections by herpes simplex virus and varicella zoster virus were excluded in none. Only one case fulfilled positive reexposure test criteria in initially assumed Herbalife hepatotoxicity, with lower CIOMS-based causality gradings for the other cases than hitherto proposed.

  7. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means the revealed performance (how well the firm performs in the actual market environment) given the basic characteristics of the firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality and work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for management individuals/groups to continuously improve their firm's/company's efficiency and effectiveness, and the need for managers to know the success factors and competitiveness determinants, consequently determine what performance measures are most critical to their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent for operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other types of profitability measures. It uses econometric models to describe and then proposes a method to forecast and benchmark performance.

  8. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  9. Benchmarking and performance enhancement framework for multi-staging object-oriented languages

    Directory of Open Access Journals (Sweden)

    Ahmed H. Yousef

    2013-06-01

    Full Text Available This paper focuses on verifying the readiness, feasibility, generality and usefulness of multi-staging programming in software applications. We present a benchmark designed to evaluate the performance gain of different multi-staging programming (MSP) language implementations of object-oriented languages. The benchmarks in this suite cover different tests that range from classic simple examples (like matrix algebra) to advanced examples (like encryption and image processing). The benchmark is applied to compare the performance gain of two different MSP implementations (Mint and Metaphor) that are built on object-oriented languages (Java and C#, respectively). The results concerning the application of this benchmark to these languages are presented and analysed. The measurement technique used in benchmarking leads to the development of a language-independent performance enhancement framework that allows the programmer to select which code segments need staging. The framework also enables the programmer to verify the effectiveness of staging on the application performance. The framework is applied to a real case study. The case study results showed the effectiveness of the framework in achieving significant performance enhancement.
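The staging idea this record benchmarks can be illustrated in miniature, here in Python rather than Mint or Metaphor: a generator produces code specialized for a value known at staging time, eliminating interpretive overhead from the residual program. The `stage_power` example below is a standard textbook illustration of staging, not code from the paper's benchmark suite.

```python
def power_generic(x, n):
    """Unstaged version: the exponent n is examined on every call."""
    result = 1
    for _ in range(n):
        result *= x
    return result

def stage_power(n):
    """'Staging' step: generate a function specialized for a fixed exponent.
    The loop over n is unrolled into straight-line code at generation time."""
    body = " * ".join(["x"] * n) if n > 0 else "1"
    return eval(f"lambda x: {body}")

power5 = stage_power(5)
assert power5(3) == power_generic(3, 5) == 243
```

A benchmark of the kind the paper describes would time `power_generic` against the staged `power5` over many calls; the staged version wins because the exponent-dependent control flow has been paid for once, at staging time.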

  10. Sample test cases using the environmental computer code NECTAR

    International Nuclear Information System (INIS)

    Ponting, A.C.

    1984-06-01

    This note demonstrates a few of the many different ways in which the environmental computer code NECTAR may be used. Four sample test cases are presented and described to show how NECTAR input data are structured. Edited output is also presented to illustrate the format of the results. Two test cases demonstrate how NECTAR may be used to study radio-isotopes not explicitly included in the code. (U.K.)

  11. Making System Dynamics Cool IV : Teaching & Testing with Cases & Quizzes

    NARCIS (Netherlands)

    Pruyt, E.

    2012-01-01

    This follow-up paper presents cases and multiple choice questions for teaching and testing System Dynamics modeling. These cases and multiple choice questions were developed and used between January 2012 and April 2012 in a large System Dynamics course (250+ 2nd year BSc and 40+ MSc students per year)

  12. Making System Dynamics Cool III : New Hot Teaching & Testing Cases

    NARCIS (Netherlands)

    Pruyt, E.

    2011-01-01

    This follow-up paper presents seven actual cases for testing and teaching System Dynamics developed and used between January 2010 and January 2011 for one of the largest System Dynamics courses (250+ students per year) at Delft University of Technology in the Netherlands. The cases presented in this

  13. RANS analyses on erosion behavior of density stratification consisted of helium–air mixture gas by a low momentum vertical buoyant jet in the PANDA test facility, the third international benchmark exercise (IBE-3)

    Energy Technology Data Exchange (ETDEWEB)

    Abe, Satoshi, E-mail: abe.satoshi@jaea.go.jp; Ishigaki, Masahiro; Sibamoto, Yasuteru; Yonomoto, Taisuke

    2015-08-15

    Highlights: • The third international benchmark exercise (IBE-3) focused on erosion of a density stratification by a vertical buoyant jet in the reactor containment vessel. • Two types of turbulence model modification were applied in order to accurately simulate turbulent helium transport in the density stratification. • The analysis result in the case with the turbulence model modifications is in good agreement with the experimental data. • There is a major difference in turbulent helium mass transport between the cases with and without the turbulence model modifications. - Abstract: Density stratification in the reactor containment vessel is an important phenomenon for hydrogen safety. The Japan Atomic Energy Agency (JAEA) has started the ROSA-SA project on containment thermal hydraulics. As part of this activity, we participated in the third international CFD benchmark exercise (IBE-3), which focused on density stratification erosion by a vertical buoyant jet in the containment vessel. This paper presents our approach to the IBE-3, focusing on the turbulence transport phenomena that erode the density stratification and introducing modified turbulence models to improve the CFD analyses. For this analysis, we modified the CFD code OpenFOAM with two turbulence models: the Kato-Launder modification, to estimate turbulent kinetic energy production around a stagnation point, and the Katsuki model, to account for turbulence damping in the density stratification. As a result, the modified code predicted the experimental data well. The importance of turbulence transport modeling is also discussed using the calculation results.

  14. RANS analyses on erosion behavior of density stratification consisted of helium–air mixture gas by a low momentum vertical buoyant jet in the PANDA test facility, the third international benchmark exercise (IBE-3)

    International Nuclear Information System (INIS)

    Abe, Satoshi; Ishigaki, Masahiro; Sibamoto, Yasuteru; Yonomoto, Taisuke

    2015-01-01

    Highlights: • The third international benchmark exercise (IBE-3) focused on erosion of a density stratification by a vertical buoyant jet in the reactor containment vessel. • Two types of turbulence model modification were applied in order to accurately simulate turbulent helium transport in the density stratification. • The analysis result in the case with the turbulence model modifications is in good agreement with the experimental data. • There is a major difference in turbulent helium mass transport between the cases with and without the turbulence model modifications. - Abstract: Density stratification in the reactor containment vessel is an important phenomenon for hydrogen safety. The Japan Atomic Energy Agency (JAEA) has started the ROSA-SA project on containment thermal hydraulics. As part of this activity, we participated in the third international CFD benchmark exercise (IBE-3), which focused on density stratification erosion by a vertical buoyant jet in the containment vessel. This paper presents our approach to the IBE-3, focusing on the turbulence transport phenomena that erode the density stratification and introducing modified turbulence models to improve the CFD analyses. For this analysis, we modified the CFD code OpenFOAM with two turbulence models: the Kato-Launder modification, to estimate turbulent kinetic energy production around a stagnation point, and the Katsuki model, to account for turbulence damping in the density stratification. As a result, the modified code predicted the experimental data well. The importance of turbulence transport modeling is also discussed using the calculation results.

  15. Teaching Benchmark Strategy for Fifth-Graders in Taiwan

    Science.gov (United States)

    Yang, Der-Ching; Lai, M. L.

    2013-01-01

    The key purpose of this study was to examine how we taught the use of the benchmark strategy for comparing fractions to fifth-graders in Taiwan. 26 fifth-graders from a public elementary school in south Taiwan were selected to join this study. Results of this case study showed that students made considerable progress in using the benchmark strategy when comparing fractions…

  16. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, on the basis of four different applications of benchmarking. Regulation of utility companies is then addressed, after which…

  17. Cloud benchmarking for performance

    OpenAIRE

    Varghese, Blesson; Akgun, Ozgur; Miguel, Ian; Thai, Long; Barker, Adam

    2014-01-01

    Date of Acceptance: 20/09/2014 How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. The above question is addressed by proposing a six-step benchmarking methodology in which a user provides a set of four weights that indicate how important each of the following groups is: memory, processor, computa...
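
The weighted-ranking step this methodology implies can be sketched as follows. The group names beyond "memory" and "processor" are guesses (the abstract is truncated), and the per-VM scores and weights are invented for illustration.

```python
# Hypothetical group names; the abstract only confirms "memory" and "processor".
GROUPS = ["memory", "processor", "compute", "storage"]

def rank_vms(vm_scores, weights):
    """Rank VMs by user-weighted benchmark score.

    vm_scores: {vm name: {group: normalized benchmark score in 0..1}}
    weights:   {group: user-supplied importance weight}
    """
    total_w = sum(weights.values())
    return sorted(
        vm_scores,
        key=lambda vm: sum(weights[g] * vm_scores[vm][g] for g in GROUPS) / total_w,
        reverse=True,
    )

vms = {
    "small": {"memory": 0.4, "processor": 0.3, "compute": 0.3, "storage": 0.6},
    "large": {"memory": 0.9, "processor": 0.8, "compute": 0.7, "storage": 0.5},
}
# A memory-bound workload weights the memory group highest.
assert rank_vms(vms, {"memory": 4, "processor": 2, "compute": 2, "storage": 1})[0] == "large"
```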

  18. Monte Carlo benchmark calculations for 400MWTH PBMR core

    International Nuclear Information System (INIS)

    Kim, H. C.; Kim, J. K.; Kim, S. Y.; Noh, J. M.

    2007-01-01

    A large interest in high-temperature gas-cooled reactors (HTGR) has arisen in connection with hydrogen production in recent years. In this study, as part of the work of establishing a Monte Carlo computation system for HTGR core analysis, some benchmark calculations for a pebble-type HTGR were carried out using the MCNP5 code. The core of the 400 MWth Pebble-Bed Modular Reactor (PBMR) was selected as a benchmark model. Recently, the IAEA CRP5 neutronics and thermal-hydraulics benchmark problem was proposed for testing existing methods for HTGRs to analyze the neutronic and thermal-hydraulic behavior for the design and safety evaluations of the PBMR. This study deals with the neutronic benchmark problems proposed for the PBMR: fresh fuel and cold conditions (Case F-1), and first core loading with given number densities (Case F-2). After detailed MCNP modeling of the whole facility, benchmark calculations were performed. The spherical fuel region of a fuel pebble is divided into cubic lattice elements in order to model a pebble that contains, on average, 15000 CFPs (Coated Fuel Particles). Each element contains one CFP at its center. In this study, the side length of each cubic lattice element needed to hold the same amount of fuel was calculated to be 0.1635 cm. The remaining volume of each lattice element was filled with graphite. All five concentric shells of each CFP were modeled. The PBMR annular core consists of approximately 452000 pebbles in the benchmark problems. In Case F-1, where the core was filled with fresh fuel pebbles only, a BCC (body-centered cubic) lattice model was employed in order to approximate the randomly packed core with a packing fraction of 0.61. In Case F-2, the BCC lattice was also employed, with the size of the moderator pebble increased in a manner that reproduces the specified F/M ratio of 1:2 while preserving the packing fraction of 0.61. The calculations were performed with the ENDF/B-VI cross-section library and used sab2002 S(α,
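
The quoted lattice-element side length can be reproduced with a short calculation: the pebble's spherical fuel zone is divided into 15000 equal cubes, one per CFP. The fuel-zone radius of 2.5 cm is an assumption (the standard PBMR pebble fuel zone), not stated in the abstract above.

```python
import math

r_fuel_cm = 2.5      # assumed fuel-zone radius of a pebble (not from the abstract)
n_cfp = 15000        # average number of CFPs per pebble, from the abstract

# Volume of the spherical fuel zone, split evenly among the cubic elements.
v_fuel = 4.0 / 3.0 * math.pi * r_fuel_cm ** 3
side = (v_fuel / n_cfp) ** (1.0 / 3.0)

print(f"{side:.4f} cm")  # prints 0.1634 cm, close to the 0.1635 cm quoted above
```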

  19. Benchmarking criticality safety calculations with subcritical experiments

    International Nuclear Information System (INIS)

    Mihalczo, J.T.

    1984-06-01

    Calculation of the neutron multiplication factor at delayed criticality may be necessary for benchmarking calculations, but it may not be sufficient. The use of subcritical experiments to benchmark criticality safety calculations could result in substantial savings in fuel material costs for experiments. In some cases subcritical configurations could be used to benchmark calculations where sufficient fuel to achieve delayed criticality is not available. By performing a variety of measurements with subcritical configurations, much detailed information can be obtained which can be compared directly with calculations. This paper discusses several measurements that can be performed with subcritical assemblies and presents examples that include comparisons between calculation and experiment where possible. Where this was not possible, examples from critical experiments have been used, but the measurement methods could also be applied to subcritical experiments

  20. Hospital benchmarking: are U.S. eye hospitals ready?

    Science.gov (United States)

    de Korne, Dirk F; van Wijngaarden, Jeroen D H; Sol, Kees J C A; Betz, Robert; Thomas, Richard C; Schein, Oliver D; Klazinga, Niek S

    2012-01-01

    Benchmarking is increasingly considered a useful management instrument to improve quality in health care, but little is known about its applicability in hospital settings. The aims of this study were to assess the applicability of a benchmarking project in U.S. eye hospitals and compare the results with an international initiative. We evaluated multiple cases by applying an evaluation frame abstracted from the literature to five U.S. eye hospitals that used a set of 10 indicators for efficiency benchmarking. Qualitative analysis entailed 46 semistructured face-to-face interviews with stakeholders, document analyses, and questionnaires. The case studies only partially met the conditions of the evaluation frame. Although learning and quality improvement were stated as overall purposes, the benchmarking initiative was at first focused on efficiency only. No ophthalmic outcomes were included, and clinicians were skeptical about their reporting relevance and disclosure. However, in contrast with earlier findings in international eye hospitals, all U.S. hospitals worked with internal indicators that were integrated in their performance management systems and supported benchmarking. Benchmarking can support performance management in individual hospitals. Having a certain number of comparable institutes provide similar services in a noncompetitive milieu seems to lay fertile ground for benchmarking. International benchmarking is useful only when these conditions are not met nationally. Although the literature focuses on static conditions for effective benchmarking, our case studies show that it is a highly iterative and learning process. The journey of benchmarking seems to be more important than the destination. Improving patient value (health outcomes per unit of cost) requires, however, an integrative perspective where clinicians and administrators closely cooperate on both quality and efficiency issues. If these worlds do not share such a relationship, the added

  1. Prioritizing Test Cases for Memory Leaks in Android Applications

    Institute of Scientific and Technical Information of China (English)

    Ju Qian; Di Zhou

    2016-01-01

    Mobile applications usually can only access a limited amount of memory. Improper use of the memory can cause memory leaks, which may lead to performance slowdowns or even cause applications to be unexpectedly killed. Although a large body of research has been devoted to memory leak diagnosis techniques applied after leaks have been discovered, it is still challenging to detect memory leak phenomena in the first place. Testing is the most widely used technique for failure discovery. However, traditional testing techniques are not directed at the discovery of memory leaks. They may spend a lot of time testing executions that are unlikely to leak and can therefore be inefficient. To address the problem, we propose a novel approach that prioritizes the test cases in a given test suite according to their likelihood of causing memory leaks. It first builds a prediction model, based on machine learning over selected code features, to determine whether each test can potentially lead to memory leaks. Then, for each input test case, we partly run it to obtain its code features and predict its likelihood of causing leaks. The most suspicious test cases are suggested to run first, in order to reveal memory leak faults as soon as possible. Experimental evaluation on several Android applications shows that our approach is effective.
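
A minimal sketch of the prioritization idea (not the paper's actual model): score each test case by its predicted leak likelihood and run the riskiest first. The feature names and the hand-set weights below are illustrative assumptions; the paper learns such a model with machine learning over code features.

```python
# Illustrative leak-indicative features with hand-set weights; a real model
# would learn these from labeled leaking/non-leaking executions.
FEATURE_WEIGHTS = {
    "bitmap_allocations": 0.5,       # e.g. images/bitmaps created
    "listener_registrations": 0.3,   # listeners that may outlive their owner
    "context_references": 0.2,       # long-lived references to contexts
}

def leak_score(features):
    """Weighted score standing in for a classifier's leak probability."""
    return sum(FEATURE_WEIGHTS[k] * features.get(k, 0) for k in FEATURE_WEIGHTS)

def prioritize(test_suite):
    """test_suite: {test name: feature counts from a partial run}."""
    return sorted(test_suite, key=lambda t: leak_score(test_suite[t]), reverse=True)

suite = {
    "test_login": {"bitmap_allocations": 0, "listener_registrations": 1},
    "test_gallery": {"bitmap_allocations": 9, "listener_registrations": 3},
}
assert prioritize(suite)[0] == "test_gallery"  # image-heavy test runs first
```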

  2. The International intraval project. Phase 1 test cases

    International Nuclear Information System (INIS)

    1992-01-01

    This report contains a description of the test cases adopted in Phase 1 of the international cooperation project INTRAVAL. Seventeen test cases, based on bench-scale laboratory experiments, field tests and natural analogue studies, have been included in the study. The test cases are described in terms of experimental design and types of available data. In addition, some quantitative examples of available data are given, as well as references to more extensive documentation of the experiments on which the test cases are based. Fifteen test case examples are given: 1. Mass transfer through clay by diffusion and advection. 2. Uranium migration in crystalline bore cores, small-scale pressure infiltration experiments. 3. Radionuclide migration in single natural fractures in granite. 4. Tracer tests in a deep basalt flow top. 5. Flow and tracer experiment in crystalline rock based on the Stripa 3-D experiment. 6. Tracer experiment in a fracture zone at the Finnsjön research area. 7. Synthetic database, based on single-fracture migration experiments in the Grimsel rock laboratory. 8. Natural analogue studies at Poços de Caldas, Minas Gerais, Brazil: redox front and radionuclide movement in an open-pit uranium mine. 9. Natural analogue studies at the Koongarra site in the Alligator Rivers area of the Northern Territory, Australia. 10. Large-block migration experiments in a block of crystalline rock. 11. Unsaturated flow and transport experiments performed at Las Cruces, New Mexico. 12. Flow and transport experiment in unsaturated fractured rock performed at the Apache Leap Tuff site, Arizona. 13. Experiments in partially saturated tuffaceous rocks performed in the G-tunnel underground facility at the Nevada Test Site, USA. 14. Experimental study of brine transport in porous media. 15. Groundwater flow in the vicinity of the Gorleben salt dome, Federal Republic of Germany

  3. Warehousing performance improvement using Frazelle Model and per group benchmarking: A case study in retail warehouse in Yogyakarta and Central Java

    Directory of Open Access Journals (Sweden)

    Kusrini Elisa

    2018-01-01

    Full Text Available Warehouse performance management has an important role in improving logistics business activities. Good warehouse management can increase profit, on-time delivery, quality and customer service. This study was conducted to assess the performance of retail warehouses of some supermarkets located in Central Java and Yogyakarta. Performance improvement is proposed based on warehouse measurement using the Frazelle model (2002), which measures five indicators, namely financial, productivity, utility, quality and cycle time, along five business processes in warehousing, i.e. receiving, put-away, storage, order picking and shipping. In order to obtain a more precise performance measure, the indicators are weighted using the Analytic Hierarchy Process (AHP) method. Warehouse performance is then measured and the final score is determined using the SNORM method. From this study, the final score of each warehouse is obtained, together with opportunities to improve warehouse performance using peer-group benchmarking
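
A hedged sketch of the scoring pipeline described above, assuming a Snorm (De Boer) min-max normalization and AHP-derived weights. The indicator names, bounds and weights are illustrative assumptions, not the study's values.

```python
def snorm(actual, worst, best):
    # Snorm (De Boer) normalization: maps a raw indicator onto 0..100,
    # higher is better, regardless of whether the raw scale is min- or
    # max-oriented (the orientation is encoded by worst/best).
    return (actual - worst) / (best - worst) * 100.0

def final_score(indicators, weights):
    """indicators: {name: (actual, worst, best)}; weights: AHP weights summing to 1."""
    return sum(w * snorm(*indicators[name]) for name, w in weights.items())

# Invented example indicators for one warehouse.
indicators = {
    "order_picking_quality": (97.0, 90.0, 100.0),  # percent; higher is better
    "receiving_cycle_time": (40.0, 60.0, 20.0),    # minutes; lower is better
}
weights = {"order_picking_quality": 0.6, "receiving_cycle_time": 0.4}

score = final_score(indicators, weights)
assert 0.0 <= score <= 100.0  # normalized final score for benchmarking
```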

  4. Benchmarking reference services: an introduction.

    Science.gov (United States)

    Marshall, J G; Buchanan, H S

    1995-01-01

    Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.

  5. Highly Automated Agile Testing Process: An Industrial Case Study

    Directory of Open Access Journals (Sweden)

    Jarosław Berłowski

    2016-09-01

    Full Text Available This paper presents a description of an agile testing process in a medium-size software project developed using Scrum. The research method used was the case study; surveys, quantifiable project data sources and qualitative opinions of project members were used for data collection. Challenges related to the testing process, concerning a complex project environment and unscheduled releases, were identified. Based on the obtained results, we concluded that the described approach addresses the aforementioned issues well. Recommendations were therefore made with regard to the employed principles of agility, specifically: continuous integration, responding to change, test automation and test-driven development. Furthermore, an efficient testing environment that combines a number of test frameworks (e.g. JUnit, Selenium, Jersey Test) with custom-developed simulators is presented.

  6. Calculus of a reactor VVER-1000 benchmark

    International Nuclear Information System (INIS)

    Dourougie, C.

    1998-01-01

    In the framework of the FMDP (Fissile Materials Disposition Program) between the US and Russia, a benchmark was tested. The pin cells contain low-enriched uranium (LEU) and mixed-oxide (MOX) fuels. The calculations are done for a wide range of temperatures and soluble boron concentrations, under accident conditions. (A.L.B.)

  7. OCL-BASED TEST CASE GENERATION USING CATEGORY PARTITIONING METHOD

    Directory of Open Access Journals (Sweden)

    A. Jalila

    2015-10-01

    Full Text Available The adoption of fault detection techniques during the initial stages of the software development life cycle helps improve the reliability of a software product. Specification-based testing is one of the major criteria for detecting faults in the requirement specification or design of a software system. However, due to the non-availability of implementation details, test case generation from formal specifications becomes a challenging task. As a novel approach, the proposed work presents a methodology to generate test cases from OCL (Object Constraint Language) formal specifications using the Category Partition Method (CPM). The experimental results indicate that the proposed methodology is more effective in revealing specification-based faults. Furthermore, it has been observed that OCL and CPM form an excellent combination for performing functional testing at the earliest stage, improving software quality at reduced cost.
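
A small illustration of the Category Partition Method step: enumerate all combinations ("frames") of category choices as candidate test cases. The categories shown, loosely derived from a hypothetical OCL precondition such as `self.balance >= amount`, are invented for illustration and are not from the paper.

```python
from itertools import product

# Categories and their choices, as CPM would derive them from a
# specification (here a hypothetical withdrawal operation's OCL contract).
categories = {
    "amount": ["zero", "positive", "exceeds_balance"],
    "account_state": ["open", "frozen"],
}

def generate_frames(categories):
    """Cartesian product of category choices: one frame per test case."""
    names = list(categories)
    return [dict(zip(names, combo)) for combo in product(*categories.values())]

frames = generate_frames(categories)
assert len(frames) == 6  # 3 choices x 2 choices
assert {"amount": "zero", "account_state": "open"} in frames
```

In practice CPM also attaches constraints (e.g. some choice combinations are infeasible) that prune the full product before test cases are written.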

  8. Benchmark study for the seismic analysis and testing of WWER type NPPs. Final report of a co-ordinated research project

    International Nuclear Information System (INIS)

    2000-10-01

    This report presents the final results of a five-year IAEA Coordinated Research Project (CRP) launched in 1992. The main goal was the harmonization of criteria and methods used in Member States in seismic reassessment and upgrading of existing nuclear power plants. To this end, most of the activities were focused on a benchmarking exercise related to a mixed numerical and experimental dynamic analysis carried out on two reference units of WWER reactors (WWER-1000 and WWER-440/213): Kozloduy NPP Units 5/6 and PAKS NPP. Twenty-four institutions from 13 countries participated in the CRP, and two other institutions from Japan contributed informally. The objective of this TECDOC is to provide a consistent and comprehensive summary of the results of the work performed in the CRP through the preparation of a ''self-standing'' report with the general conclusions of the programme: a great deal of information from the Background Documents has been included in this report, together with a set of recommendations for future work in this field

  9. Benchmark results in radiative transfer

    International Nuclear Information System (INIS)

    Garcia, R.D.M.; Siewert, C.E.

    1986-02-01

    Several aspects of the F_N method are reported, and the method is used to accurately solve some benchmark problems in radiative transfer in the field of atmospheric physics. The method was modified to handle cases of pure scattering, and an improved process was developed for computing the radiation intensity. An algorithm for computing several quantities used in the F_N method was developed. An improved scheme to evaluate certain integrals relevant to the method is presented, and a two-term recursion relation that has proved useful for the numerical evaluation of matrix elements basic to the method is given. The methods used to solve the resulting linear algebraic equations are discussed, and the numerical results are evaluated. (M.C.K.) [pt

  10. GENERATING TEST CASES FOR PLATFORM INDEPENDENT MODEL BY USING USE CASE MODEL

    OpenAIRE

    Hesham A. Hassan,; Zahraa. E. Yousif

    2010-01-01

    Model-based testing refers to testing and test case generation based on a model that describes the behavior of the system. Extensive use of models throughout all the phases of software development starting from the requirement engineering phase has led to increased importance of Model Based Testing. The OMG initiative MDA has revolutionized the way models would be used for software development. Ensuring that all user requirements are addressed in system design and the design is getting suffic...

  11. Benchmarking HIV health care

    DEFF Research Database (Denmark)

    Podlekareva, Daria; Reekie, Joanne; Mocroft, Amanda

    2012-01-01

    ABSTRACT: BACKGROUND: State-of-the-art care involving the utilisation of multiple health care interventions is the basis for an optimal long-term clinical prognosis for HIV-patients. We evaluated health care for HIV-patients based on four key indicators. METHODS: Four indicators of health care we...... document pronounced regional differences in adherence to guidelines and can help to identify gaps and direct target interventions. It may serve as a tool for assessment and benchmarking the clinical management of HIV-patients in any setting worldwide....

  12. Benchmarking Cloud Storage Systems

    OpenAIRE

    Wang, Xing

    2014-01-01

    With the rise of cloud computing, many cloud storage systems like Dropbox, Google Drive and Mega have been built to provide decentralized and reliable file storage. It is thus of prime importance to know their features, performance, and the best way to make use of them. In this context, we introduce BenchCloud, a tool designed as part of this thesis to conveniently and efficiently benchmark any cloud storage system. First, we provide a study of six commonly-used cloud storage systems to ident...

  13. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    An infrastructure is emerging that enables the positioning of populations of on-line, mobile service users. In step with this, research in the management of moving objects has attracted substantial attention. In particular, quite a few proposals now exist for the indexing of moving objects...... takes into account that the available positions of the moving objects are inaccurate, an aspect largely ignored in previous indexing research. The concepts of data and query enlargement are introduced for addressing inaccuracy. As proof of concepts of the benchmark, the paper covers the application...

  14. An Iterative Procedure for Efficient Testing of B2B: A Case in Messaging Service Tests

    Energy Technology Data Exchange (ETDEWEB)

    Kulvatunyou, Boonserm [ORNL

    2007-03-01

    Testing is a necessary step in systems integration. Testing in the context of inter-enterprise, business-to-business (B2B) integration is more difficult and expensive than in intra-enterprise integration. Traditionally, the difficulty is alleviated by conducting the testing in two stages: conformance testing and then interoperability testing. In conformance testing, systems are tested independently against a reference system. In interoperability testing, they are tested simultaneously against one another. In the traditional approach, these two stages are performed sequentially with little feedback between them. In addition, test results and test traces are left to human analysis only, or even discarded if the solution passes the test. This paper proposes an approach where test results and traces from both the conformance and interoperability tests are analyzed for potential interoperability issues; conformance test cases are then derived from the analysis. The result is that more interoperability issues can be resolved in the lower-cost conformance testing mode; consequently, the time and cost required to achieve interoperable solutions are reduced.

  15. Making System Dynamics Cool? Using Hot Testing & Teaching Cases

    NARCIS (Netherlands)

    Pruyt, E.

    2009-01-01

    This paper deals with the use of ‘hot’ real-world cases for both testing and teaching purposes such as in the Introductory System Dynamics course at Delft University of Technology in the Netherlands. The paper starts with a brief overview of the System Dynamics curriculum. Then the problem-oriented

  16. Couplex1 test case nuclear - Waste disposal far field simulation

    International Nuclear Information System (INIS)

    2001-01-01

    This first COUPLEX test case consists of computing a simplified far-field model used in nuclear waste management simulation. From the mathematical point of view the problem is of convection-diffusion type, but the parameters vary strongly from one layer to another. Another particularity is the very concentrated nature of the source, both in space and in time. (author)
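
For readers unfamiliar with the equation type, here is a toy 1D explicit upwind convection-diffusion step in Python. It only illustrates the mathematical character of the problem; the layered, highly varying coefficients and the realistic source term of the actual COUPLEX1 benchmark are not reproduced.

```python
def step(c, u, d, dx, dt):
    """One explicit time step of 1D convection-diffusion (u > 0, fixed ends)."""
    new = c[:]
    for i in range(1, len(c) - 1):
        adv = -u * (c[i] - c[i - 1]) / dx                      # upwind convection
        dif = d * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx ** 2   # central diffusion
        new[i] = c[i] + dt * (adv + dif)
    return new

# Very concentrated initial source in space, echoing the benchmark's source.
c = [0.0] * 50
c[10] = 1.0
for _ in range(100):  # CFL-stable: u*dt/dx = 0.25, d*dt/dx^2 = 0.05
    c = step(c, u=0.5, d=0.01, dx=0.1, dt=0.05)

assert c.index(max(c)) > 10  # the pulse has been convected downstream
```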

  17. Gifted and Talented Education: A National Test Case in Peoria.

    Science.gov (United States)

    Fetterman, David M.

    1986-01-01

    This article presents a study of a program in Peoria, Illinois, for the gifted and talented that serves as a national test case for gifted education and minority enrollment. It was concluded that referral, identification, and selection were appropriate for the program model but that inequalities resulted from socioeconomic variables. (Author/LMO)

  18. HYDROCOIN [HYDROlogic COde INtercomparison] Level 1: Benchmarking and verification test results with CFEST [Coupled Fluid, Energy, and Solute Transport] code: Draft report

    International Nuclear Information System (INIS)

    Yabusaki, S.; Cole, C.; Monti, A.M.; Gupta, S.K.

    1987-04-01

    Part of the safety analysis is evaluating groundwater flow through the repository and the host rock to the accessible environment by developing mathematical or analytical models and numerical computer codes describing the flow mechanisms. This need led to the establishment of an international project called HYDROCOIN (HYDROlogic COde INtercomparison) organized by the Swedish Nuclear Power Inspectorate, a forum for discussing techniques and strategies in subsurface hydrologic modeling. The major objective of the present effort, HYDROCOIN Level 1, is determining the numerical accuracy of the computer codes. The definition of each case includes the input parameters, the governing equations, the output specifications, and the format. The Coupled Fluid, Energy, and Solute Transport (CFEST) code was applied to solve cases 1, 2, 4, 5, and 7; the Finite Element Three-Dimensional Groundwater (FE3DGW) Flow Model was used to solve case 6. Case 3 has been ignored because unsaturated flow is not pertinent to SRP. This report presents the Level 1 results furnished by the project teams. The numerical accuracy of the codes is determined by (1) comparing the computational results with analytical solutions for cases that have analytical solutions (namely cases 1 and 4), and (2) intercomparing results from codes for cases which do not have analytical solutions (cases 2, 5, 6, and 7). Cases 1, 2, 6, and 7 relate to flow analyses, whereas cases 4 and 5 require nonlinear solutions. 7 refs., 71 figs., 9 tabs

  19. FARO base case post-test analysis by COMETA code

    Energy Technology Data Exchange (ETDEWEB)

    Annunziato, A.; Addabbo, C. [Joint Research Centre, Ispra (Italy)

    1995-09-01

    The paper analyzes the COMETA (Core Melt Thermal-Hydraulic Analysis) post-test calculations of FARO Test L-11, the so-called Base Case Test. The FARO facility, located at JRC Ispra, is used to simulate the consequences of severe accidents in nuclear power plants under a variety of conditions. The COMETA code has a six-equation two-phase flow field and a three-phase corium field: the jet, the droplets and the fused-debris bed. The analysis showed that the code is able to pick up all the major phenomena occurring during the pre-mixing phase of the fuel-coolant interaction.

  20. Test cases for interface tracking methods: methodology and current status

    International Nuclear Information System (INIS)

    Lebaigue, O.; Jamet, D.; Lemonnier, E.

    2004-01-01

    Full text of publication follows:In the past decade, a large number of new methods have been developed to deal with interfaces in the numerical simulation of two-phase flows. We have collected a set of 36 test cases, which can be seen as a tool to help engineers and researchers selecting the most appropriate method(s) for their specific fields of application. This set can be use: - To perform an initial evaluation of the capabilities of available methods with regard to the specificity of the final application and the most important features to be recovered from the simulation. - To measure the maximum mesh size to be used for a given physical problem in order to obtain an accurate enough solution. - To assess and quantify the performances of a selected method equipped with its set of physical models. The computation of a well-documented test case allows estimating the error due to the numerical technique by comparison with reference solutions. This process is compulsory to gain confidence and credibility on the prediction capabilities of a numerical method and its physical models. - To broaden the capabilities of a given numerical technique. The test cases may be used to identify the need for improvement of the overall numerical scheme or to determine the physical part of the model, which is responsible for the observed limitations. Each test case falls within one of the following categories: - Analytical solutions of well-known sets of equations corresponding to simple geometrical situations. - Reference numerical solutions of moderately complex problems, produced by accurate methods (e.g., boundary Fitted coordinate method) on refined meshes. - Separate effects analytical experiments. The presentation will suggest how to use the test cases for assessing the physical models and the numerical methods. 
The expected fallout of using test cases is indeed on the one hand to identify the merits of existing methods and on the other hand to orient further research towards
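The second use above (measuring the mesh size needed for a sufficiently accurate solution) comes down to comparing numerical and reference solutions on successively refined meshes and extracting an observed convergence order. A minimal sketch of that arithmetic, with made-up error figures (not values from the paper):

```python
import math

def l2_error(numerical, exact):
    """Discrete L2 error between a numerical field and a reference solution."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(numerical, exact)) / len(numerical))

def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
    """Observed convergence order from the errors on two meshes."""
    return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

# Made-up error figures for a nominally first-order interface scheme on
# 40- and 80-cell meshes; the observed order should sit near 1.
errors = {40: 1.0e-2, 80: 5.1e-3}
p = observed_order(errors[40], errors[80])
```

Once the observed order is known, the error on any finer mesh can be extrapolated, which is how a maximum admissible mesh size for a target accuracy is chosen.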

  1. Benchmark validation of statistical models: Application to mediation analysis of imagery and memory.

    Science.gov (United States)

    MacKinnon, David P; Valente, Matthew J; Wurpts, Ingrid C

    2018-03-29

    This article describes benchmark validation, an approach to validating a statistical model. According to benchmark validation, a valid model generates estimates and research conclusions consistent with a known substantive effect. Three types of benchmark validation-(a) benchmark value, (b) benchmark estimate, and (c) benchmark effect-are described and illustrated with examples. Benchmark validation methods are especially useful for statistical models with assumptions that are untestable or very difficult to test. Benchmark effect validation methods were applied to evaluate statistical mediation analysis in eight studies using the established effect that increasing mental imagery improves recall of words. Statistical mediation analysis led to conclusions about mediation that were consistent with established theory that increased imagery leads to increased word recall. Benchmark validation based on established substantive theory is discussed as a general way to investigate characteristics of statistical models and a complement to mathematical proof and statistical simulation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
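The mediation model behind the benchmark-effect validation estimates an indirect effect as the product of two regression paths: path a (treatment to mediator) and path b (mediator to outcome, controlling for treatment). A self-contained sketch with invented data (not the study's data; variable names are hypothetical):

```python
def ols(y, cols):
    """Ordinary least squares via the normal equations. `cols` holds the
    predictor columns; an intercept column is prepended. Returns
    [intercept, beta_1, beta_2, ...]."""
    n, k = len(y), len(cols) + 1
    X = [[1.0] + [c[i] for c in cols] for i in range(n)]
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)] for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    for i in range(k):                      # Gaussian elimination with pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [arj - f * aij for arj, aij in zip(A[r], A[i])]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

# Hypothetical data (not the study's): X = imagery instruction (0/1),
# M = self-reported imagery use, Y = words recalled.
X = [0, 0, 1, 1, 0, 1, 0, 1]
M = [2, 3, 6, 7, 2, 6, 3, 7]
Y = [10, 13, 22, 25, 10, 22, 13, 25]

a = ols(M, [X])[1]        # path a: effect of X on the mediator M
b = ols(Y, [X, M])[2]     # path b: effect of M on Y, controlling for X
indirect = a * b          # the mediated (indirect) effect of X on Y
```

Benchmark-effect validation then asks whether the estimated indirect effect is consistent in sign and rough magnitude with the established imagery-recall effect.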

  2. A test case of computer aided motion planning for nuclear maintenance operation

    Energy Technology Data Exchange (ETDEWEB)

    Schmitzberger, E.; Bouchet, J.L. [Electricite de France (EDF), Dept. Surveillance Diagnostic Maintenance, 78 - Chatou (France); Schmitzberger, E. [Institut National Polytechnique, CRAN, 54 - Vandoeuvre les Nancy (France)

    2001-07-01

Needs for improved tools for nuclear power plant maintenance preparation have been expressed by EDF engineering, chiefly easier and better management of logistics constraints such as free spaces for motions or handling tasks. The lack of generic or well-suited tools and the specificity of nuclear maintenance operations have led EDF R and D to develop its own motion planning tools in collaboration with LAAS-CNRS, Utrecht University and the software publisher CADCENTRE, within the framework of the three-year Esprit LTR project MOLOG. EDF users' needs are summed up in the first part of the paper, ''Motion feasibility studies for maintenance operation'', and then compared to the current industrial offer in the ''Software's background'' part. The ''Towards motion planning tools'' part then states the definition and objectives: it explains why maintenance preparation pertains to automatic motion planning and how such planning makes studies much simpler. The ''MOLOG's benchmark and first results'' part describes the test case used to evaluate the MOLOG project and gives an outlook on the results obtained so far. (author)

  3. A test case of computer aided motion planning for nuclear maintenance operation

    International Nuclear Information System (INIS)

    Schmitzberger, E.; Bouchet, J.L.; Schmitzberger, E.

    2001-01-01

Needs for improved tools for nuclear power plant maintenance preparation have been expressed by EDF engineering, chiefly easier and better management of logistics constraints such as free spaces for motions or handling tasks. The lack of generic or well-suited tools and the specificity of nuclear maintenance operations have led EDF R and D to develop its own motion planning tools in collaboration with LAAS-CNRS, Utrecht University and the software publisher CADCENTRE, within the framework of the three-year Esprit LTR project MOLOG. EDF users' needs are summed up in the first part of the paper, ''Motion feasibility studies for maintenance operation'', and then compared to the current industrial offer in the ''Software's background'' part. The ''Towards motion planning tools'' part then states the definition and objectives: it explains why maintenance preparation pertains to automatic motion planning and how such planning makes studies much simpler. The ''MOLOG's benchmark and first results'' part describes the test case used to evaluate the MOLOG project and gives an outlook on the results obtained so far. (author)

  4. Critical power prediction by CATHARE2 of the OECD/NRC BFBT benchmark

    Energy Technology Data Exchange (ETDEWEB)

Lutsanych, Sergii, E-mail: s.lutsanych@ing.unipi.it [San Piero a Grado Nuclear Research Group (GRNSPG), University of Pisa, Via Livornese 1291, 56122, San Piero a Grado, Pisa (Italy); Sabotinov, Luben, E-mail: luben.sabotinov@irsn.fr [Institute for Radiological Protection and Nuclear Safety (IRSN), 31 avenue de la Division Leclerc, 92262 Fontenay-aux-Roses (France); D’Auria, Francesco, E-mail: francesco.dauria@dimnp.unipi.it [San Piero a Grado Nuclear Research Group (GRNSPG), University of Pisa, Via Livornese 1291, 56122, San Piero a Grado, Pisa (Italy)

    2015-03-15

Highlights: • We used the CATHARE code to calculate the critical power exercises of the OECD/NRC BFBT benchmark. • We considered both steady-state and transient critical power tests of the benchmark. • We used both the 1D and 3D features of the CATHARE code to simulate the experiments. • Acceptable prediction of the critical power and its location in the bundle is obtained using appropriate modelling. - Abstract: This paper presents an application of the French best-estimate thermal-hydraulic code CATHARE 2 to calculate the critical power and departure from nucleate boiling (DNB) exercises of the international OECD/NRC BWR Fuel Bundle Test (BFBT) benchmark. The assessment activity is performed by comparing the code calculation results with experimental data from the Japanese Nuclear Power Engineering Corporation (NUPEC), available in the framework of the benchmark. Two-phase flow calculations predicting the critical power have been carried out for both steady-state and transient cases, using one-dimensional and three-dimensional modelling. Results of the steady-state critical power test calculations have shown the ability of the CATHARE code to predict the critical power and its location reasonably well, using appropriate modelling.

  5. The OECD/NEA/NSC PBMR coupled neutronics/thermal hydraulics transient benchmark: The PBMR-400 core design

    International Nuclear Information System (INIS)

    Reitsma, F.; Ivanov, K.; Downar, T.; De Haas, H.; Gougar, H. D.

    2006-01-01

The Pebble Bed Modular Reactor (PBMR) is a High-Temperature Gas-cooled Reactor (HTGR) concept to be built in South Africa. As part of the verification and validation program the definition and execution of code-to-code benchmark exercises are important. The Nuclear Energy Agency (NEA) of the Organisation for Economic Cooperation and Development (OECD) has accepted, through the Nuclear Science Committee (NSC), the inclusion of the Pebble-Bed Modular Reactor (PBMR) coupled neutronics/thermal hydraulics transient benchmark problem in its program. The OECD benchmark defines steady-state and transient cases, including reactivity insertion transients. It makes use of a common set of cross sections (to eliminate uncertainties between different codes) and includes specific simplifications to the design to limit the need for participants to introduce approximations in their models. In this paper the detailed specification is explained, including the test cases to be calculated and the results required from participants. (authors)

  6. Negative Exercise Stress Test: Does it Mean Anything? Case study

    Directory of Open Access Journals (Sweden)

    Hassan A. Mohamed

    2007-01-01

Full Text Available Despite its low sensitivity and specificity (67% and 72%, respectively), exercise testing has remained one of the most widely used noninvasive tests to determine the prognosis in patients with suspected or established coronary disease. As a screening test for coronary artery disease, the exercise stress test is useful in that it is relatively simple and inexpensive. It has been considered particularly helpful in patients with chest pain syndromes who have a moderate probability of coronary artery disease, and in whom the resting electrocardiogram (ECG) is normal. The following case presentation and discussion will question the predictive value of a negative stress test in patients with a moderate probability of coronary artery disease.
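The abstract's point can be made quantitative with Bayes' theorem: given the quoted sensitivity and specificity, a negative test in a patient with moderate pretest probability (assumed here to be 50% purely for illustration) still leaves a substantial residual probability of disease. A sketch:

```python
def post_test_prob_given_negative(pretest, sensitivity, specificity):
    """Probability of disease after a negative test, by Bayes' theorem."""
    p_neg_given_disease = 1.0 - sensitivity      # false-negative rate
    p_neg_given_healthy = specificity            # true-negative rate
    p_neg = p_neg_given_disease * pretest + p_neg_given_healthy * (1.0 - pretest)
    return p_neg_given_disease * pretest / p_neg

# Figures quoted in the abstract: sensitivity 67%, specificity 72%.
# The "moderate" pretest probability is assumed to be 50% for illustration.
residual_risk = post_test_prob_given_negative(0.50, 0.67, 0.72)
```

With these numbers the residual risk after a negative test is roughly 31%, which is why a negative result alone settles little in this patient group.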

  7. Benchmarking Using Basic DBMS Operations

    Science.gov (United States)

    Crolotte, Alain; Ghazal, Ahmad

The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used these benchmarks to show the superiority and competitive edge of their products. However, over time, the TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and compare two systems. Finally, we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
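A benchmark of this kind reduces to timing a fixed set of queries over a known schema. A minimal harness in the same spirit, using SQLite purely for illustration (XMarq itself targets full-scale DBMSs and the TPC-H data model; the schema and query set below are invented):

```python
import sqlite3
import time

def time_query(conn, sql, repeats=3):
    """Best-of-N wall-clock time for one query, materializing all rows."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        conn.execute(sql).fetchall()
        best = min(best, time.perf_counter() - t0)
    return best

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, cust INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 100, float(i % 7)) for i in range(10000)])
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(i, "EU" if i % 2 else "US") for i in range(100)])

# One query per basic-operation category measured by the benchmark model.
timings = {
    "scan": time_query(conn, "SELECT * FROM orders"),
    "aggregate": time_query(conn, "SELECT cust, SUM(amount) FROM orders GROUP BY cust"),
    "join": time_query(conn,
        "SELECT c.region, SUM(o.amount) FROM orders o "
        "JOIN customers c ON o.cust = c.id GROUP BY c.region"),
    "index_access": time_query(conn, "SELECT * FROM orders WHERE id = 4242"),
}
```

Comparing two systems then means running the same query set on both and reporting per-category ratios rather than a single composite number.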

  8. Improving competitiveness of small medium Batik printing industry on quality & productivity using value chain benchmarking (Case study SME X, SME Y & SME Z)

    Science.gov (United States)

    Fauzi, Rizky Hanif; Liquiddanu, Eko; Suletra, I. Wayan

    2018-02-01

Batik printing is made by stamping wax in a printing process as well as by the conventional batik method, and it goes through a dyeing process like batik making in general. One of the areas supporting the batik industry in Karisidenan Surakarta is Kliwonan Village, Masaran District, Sragen. Masaran District is known as one of the batik centers; it originated with batik workers from Masaran employed in the Laweyan Solo area, who considered it more economical to produce batik in their own village, Masaran Sragen, because it is impossible to carry out production from upstream to downstream in Solo. SME X is one of the batik SMEs in Kliwonan Village, Masaran, Sragen that has been able to produce batik printing with national sales coverage. One of SME X's keys to selling its products is participating in various national and international exhibitions, which have raised its profile. SME Y and SME Z are also SMEs in Kliwonan Village, Masaran, Sragen producing batik printing. Observations revealed several problems that must be fixed in SME Y and SME Z: production running behind schedule, poor maintenance of worn equipment, inconsistent batik workmanship procedures, weak supervision of operators, and low recognition of SME Y and SME Z products. The purpose of this research is to improve the primary activities in the SME Y and SME Z value chains for batik printing products by benchmarking against SME X, which has better competence.

  9. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts to ... contribution to the discussions within the EU-sponsored BEST Thematic Network (Benchmarking European Sustainable Transport), which ran from 2000 to 2003.

  10. Benchmarking in Czech Higher Education

    OpenAIRE

    Plaček Michal; Ochrana František; Půček Milan

    2015-01-01

    The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expression of the need for it and the importance of benchmarking as a very suitable performance-management tool in less developed countries are the impetus for the second part of our article. Base...

  11. Power reactor pressure vessel benchmarks

    International Nuclear Information System (INIS)

    Rahn, F.J.

    1978-01-01

    A review is given of the current status of experimental and calculational benchmarks for use in understanding the radiation embrittlement effects in the pressure vessels of operating light water power reactors. The requirements of such benchmarks for application to pressure vessel dosimetry are stated. Recent developments in active and passive neutron detectors sensitive in the ranges of importance to embrittlement studies are summarized and recommendations for improvements in the benchmark are made. (author)

  12. Tests of spinning turbine fragment impact on casing models

    International Nuclear Information System (INIS)

    Wilbeck, J.S.

    1984-01-01

    Ten 1/11-scale model turbine missile impact tests were conducted at a Naval spin chamber test facility to assess turbine missile effects in nuclear plant design. The objective of the tests was to determine the effects of missile spin, blade crush, and target edge conditions on the impact of turbine disk fragments on the steel casing. The results were intended for use in making realistic estimates for the initial conditions of fragments that might escape the casing in the event of a disk burst in a nuclear plant. The burst of a modified gas turbine rotor in a high-speed spin chamber provided three missiles with the proper rotational and translational velocities of actual steam turbine fragments. Tests of bladed, spinning missiles were compared with previous tests of unbladed, nonspinning missiles. The total residual energy of the spinning missiles, as observed from high-speed photographs of disk burst, was the same as that of the nonspinning missiles launched in a piercing orientation. Tests with bladed missiles showed that for equal burst speeds, the residual energy of bladed missiles is less than that of unbladed missiles. Impacts of missiles near the edge of targets resulted in residual missile velocities greater than for central impact. (orig.)

  13. Effect of flow leakage on the benchmarking of FLOWTRAN with Mark-22 mockup flow excursion test data from Babcock and Wilcox

    International Nuclear Information System (INIS)

    Chen, Kuo-Fu.

    1992-10-01

This report presents a revised analysis of the Babcock and Wilcox (B and W) downflow flow excursion tests that accounts for leakage between flow channels in the test assembly. Leak rates were estimated by comparing results from the downflow tests with those for upflow tests conducted using an identical assembly with some minor modifications. The upflow test assembly did not contain leaks. This revised analysis shows that FLOWTRAN with the SRS working criterion conservatively predicts the onset of flow instability without using a local peaking factor to model heat transfer variations near the ribs.

  14. Linking Errors between Two Populations and Tests: A Case Study in International Surveys in Education

    Directory of Open Access Journals (Sweden)

    Dirk Hastedt

    2015-06-01

    Full Text Available This simulation study was prompted by the current increased interest in linking national studies to international large-scale assessments (ILSAs such as IEA's TIMSS, IEA's PIRLS, and OECD's PISA. Linkage in this scenario is achieved by including items from the international assessments in the national assessments on the premise that the average achievement scores from the latter can be linked to the international metric. In addition to raising issues associated with different testing conditions, administrative procedures, and the like, this approach also poses psychometric challenges. This paper endeavors to shed some light on the effects that can be expected, the linkage errors in particular, by countries using this practice. The ILSA selected for this simulation study was IEA TIMSS 2011, and the three countries used as the national assessment cases were Botswana, Honduras, and Tunisia, all of which participated in TIMSS 2011. The items selected as items common to the simulated national tests and the international test came from the Grade 4 TIMSS 2011 mathematics items that IEA released into the public domain after completion of this assessment. The findings of the current study show that linkage errors seemed to achieve acceptable levels if 30 or more items were used for the linkage, although the errors were still significantly higher compared to the TIMSS' cutoffs. Comparison of the estimated country averages based on the simulated national surveys and the averages based on the international TIMSS assessment revealed only one instance across the three countries of the estimates approaching parity. Also, the percentages of students in these countries who actually reached the defined benchmarks on the TIMSS achievement scale differed significantly from the results based on TIMSS and the results for the simulated national assessments. As a conclusion, we advise against using groups of released items from international assessments in national
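The linkage mechanism under study (placing a national assessment on the international metric via items common to both tests) can be sketched with mean/mean equating. The linking error shrinks roughly as 1 over the square root of the number k of common items, consistent with the finding that 30 or more items are needed. The item difficulties below are invented for illustration:

```python
import math

def mean_equating_constant(b_national, b_international):
    """Mean/mean linking constant from common-item difficulty estimates,
    plus the linking error (standard error of the mean item-wise difference)."""
    diffs = [bi - bn for bn, bi in zip(b_national, b_international)]
    k = len(diffs)
    const = sum(diffs) / k
    var = sum((d - const) ** 2 for d in diffs) / (k - 1)
    return const, math.sqrt(var / k)

# Hypothetical difficulty estimates for 5 common items on the two scales.
b_nat = [-1.0, -0.2, 0.1, 0.6, 1.3]
b_int = [-0.8, -0.1, 0.3, 0.7, 1.5]
shift, link_err = mean_equating_constant(b_nat, b_int)
```

The shift is added to national scores to express them on the international metric; the linking error must then be folded into the standard error of any country mean compared across the link.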

  15. MOx Depletion Calculation Benchmark

    International Nuclear Information System (INIS)

    San Felice, Laurence; Eschbach, Romain; Dewi Syarifah, Ratna; Maryam, Seif-Eddine; Hesketh, Kevin

    2016-01-01

Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Scientific Issues of Reactor Systems (WPRS) has been established to study the reactor physics, fuel performance, radiation transport and shielding, and the uncertainties associated with modelling of these phenomena in present and future nuclear power systems. The WPRS has different expert groups to cover a wide range of scientific issues in these fields. The Expert Group on Reactor Physics and Advanced Nuclear Systems (EGRPANS) was created in 2011 to perform specific tasks associated with reactor physics aspects of present and future nuclear power systems. EGRPANS provides expert advice to the WPRS and the nuclear community on the development needs (data and methods, validation experiments, scenario studies) for different reactor systems and also provides specific technical information regarding: core reactivity characteristics, including fuel depletion effects; core power/flux distributions; Core dynamics and reactivity control. In 2013 EGRPANS published a report that investigated fuel depletion effects in a Pressurised Water Reactor (PWR). This was entitled 'International Comparison of a Depletion Calculation Benchmark on Fuel Cycle Issues' NEA/NSC/DOC(2013) that documented a benchmark exercise for UO2 fuel rods. This report documents a complementary benchmark exercise that focused on PuO2/UO2 Mixed Oxide (MOX) fuel rods. The results are especially relevant to the back-end of the fuel cycle, including irradiated fuel transport, reprocessing, interim storage and waste repository. Saint-Laurent B1 (SLB1) was the first French reactor to use MOx assemblies. SLB1 is a 900 MWe PWR, with 30% MOx fuel loading. The standard MOx assemblies, used in Saint-Laurent B1 reactor, include three zones with different plutonium enrichments, high Pu content (5.64%) in the center zone, medium Pu content (4.42%) in the intermediate zone and low Pu content (2.91%) in the peripheral zone.
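The depletion effects such a benchmark compares reduce, for a single nuclide under a constant one-group flux with no production chains, to an exponential solution of the Bateman equation. A deliberately simplified sketch (the cross section and flux are illustrative round numbers, not benchmark data):

```python
import math

BARN = 1.0e-24  # cm^2 per barn

def depleted_density(n0, sigma_a_barns, flux, days):
    """Atom density after irradiation at constant one-group flux:
    dN/dt = -sigma_a * phi * N  gives  N(t) = N0 * exp(-sigma_a * phi * t)."""
    seconds = days * 86400.0
    return n0 * math.exp(-sigma_a_barns * BARN * flux * seconds)

# Illustrative round numbers (not benchmark values): a fissile nuclide with a
# one-group absorption cross section of 1000 b in a flux of 3e13 n/cm^2/s.
n0 = 1.0e20                                       # atoms/cm^3
n_after = depleted_density(n0, 1000.0, 3.0e13, 365.0)
burnt_fraction = 1.0 - n_after / n0
```

Real depletion codes solve coupled chains of such equations (capture, decay, fission-product buildup) over many nuclides, which is precisely what the benchmark exercise cross-compares.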

  16. Benchmarking Academic Anatomic Pathologists

    Directory of Open Access Journals (Sweden)

    Barbara S. Ducatman MD

    2016-10-01

Full Text Available The most common benchmarks for faculty productivity are derived from Medical Group Management Association (MGMA or Vizient-AAMC Faculty Practice Solutions Center® (FPSC databases. The Association of Pathology Chairs has also collected similar survey data for several years. We examined the Association of Pathology Chairs annual faculty productivity data and compared it with MGMA and FPSC data to understand the value, inherent flaws, and limitations of benchmarking data. We hypothesized that the variability in calculated faculty productivity is due to the type of practice model and clinical effort allocation. Data from the Association of Pathology Chairs survey on 629 surgical pathologists and/or anatomic pathologists from 51 programs were analyzed. From review of service assignments, we were able to assign each pathologist to a specific practice model: general anatomic pathologists/surgical pathologists, 1 or more subspecialties, or a hybrid of the 2 models. There were statistically significant differences among academic ranks and practice types. When we analyzed our data using each organization’s methods, the median results for the anatomic pathologists/surgical pathologists general practice model compared to MGMA and FPSC results for anatomic and/or surgical pathology were quite close. Both MGMA and FPSC data exclude a significant proportion of academic pathologists with clinical duties. We used the more inclusive FPSC definition of clinical “full-time faculty” (0.60 clinical full-time equivalent and above. The correlation between clinical full-time equivalent effort allocation, annual days on service, and annual work relative value unit productivity was poor. This study demonstrates that effort allocations are variable across academic departments of pathology and do not correlate well with either work relative value unit effort or reported days on service. Although the Association of Pathology Chairs–reported median work relative

  17. BENCHMARKING ORTEC ISOTOPIC MEASUREMENTS AND CALCULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Dewberry, R; Raymond Sigg, R; Vito Casella, V; Nitin Bhatt, N

    2008-09-29

This report represents a description of compiled benchmark tests conducted to probe and to demonstrate the extensive utility of the Ortec ISOTOPIC {gamma}-ray analysis computer program. The ISOTOPIC program performs analyses of {gamma}-ray spectra applied to specific acquisition configurations in order to apply finite-geometry correction factors and sample-matrix-container photon absorption correction factors. The analysis program provides an extensive set of preset acquisition configurations to which the user can add relevant parameters in order to build the geometry and absorption correction factors that the program determines from calculation and from nuclear {gamma}-ray absorption and scatter data. The Analytical Development Section field nuclear measurement group of the Savannah River National Laboratory uses the Ortec ISOTOPIC analysis program extensively for analyses of solid waste and process holdup applied to passive {gamma}-ray acquisitions. Frequently the results of these {gamma}-ray acquisitions and analyses are used to determine compliance with facility criticality safety guidelines. Another use of results is to designate 55-gallon drum solid waste as qualified TRU waste or as low-level waste. Other examples of the application of the ISOTOPIC analysis technique to passive {gamma}-ray acquisitions include analyses of standard waste box items and unique solid waste configurations. In many passive {gamma}-ray acquisition circumstances the container and sample have sufficient density that the calculated energy-dependent transmission correction factors have intrinsic uncertainties in the range 15%-100%. This is frequently the case when assaying 55-gallon drums of solid waste with masses of up to 400 kg and when assaying solid waste in extensive unique containers. Often an accurate assay of the transuranic content of these containers is not required, but rather a good defensible designation as >100 nCi/g (TRU waste) or <100 nCi/g (low level solid waste) is required.
In
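The sample-matrix absorption correction at the heart of such analyses is, for a uniform slab viewed face-on, the standard self-attenuation factor. A sketch with invented figures (these are not values from the report, and this is not the ISOTOPIC program's actual geometry model):

```python
import math

def self_attenuation_cf(mu_rho, density, thickness_cm):
    """Slab self-attenuation correction factor CF = mu*x / (1 - exp(-mu*x)),
    the textbook form for a uniformly emitting slab viewed face-on.
    mu_rho is the mass attenuation coefficient in cm^2/g."""
    mu_x = mu_rho * density * thickness_cm
    if mu_x < 1e-9:
        return 1.0                 # transparent limit: no correction
    return mu_x / (1.0 - math.exp(-mu_x))

# Illustrative figures: mass attenuation coefficient ~0.1 cm^2/g near 400 keV,
# matrix density 0.5 g/cm^3, 30 cm sample depth.
cf = self_attenuation_cf(0.1, 0.5, 30.0)
```

The strong dependence of CF on assumed density and depth is exactly why the report quotes 15%-100% intrinsic uncertainties for dense drums, and why a threshold designation (>100 or <100 nCi/g) is often the defensible product.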

  18. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.
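Item (2) of the outline, computing a whole-building metric and placing it against peer benchmarks, can be sketched as follows (all figures are invented; the Labs21 database holds the real peer distributions):

```python
def energy_use_intensity(annual_kwh, floor_area_sqft):
    """Whole-building energy use intensity (EUI) in kWh/ft^2/yr."""
    return annual_kwh / floor_area_sqft

def compare_to_benchmark(eui, peer_euis):
    """Fraction of peer facilities this building outperforms
    (lower EUI is better)."""
    return sum(1 for p in peer_euis if eui < p) / len(peer_euis)

# Hypothetical facility and peer group (numbers are made up).
eui = energy_use_intensity(3_200_000, 40_000)      # 80 kWh/ft^2/yr
rank = compare_to_benchmark(eui, [60, 75, 90, 110, 130])
```

The same pattern repeats for each system-level metric in the guide; the actions column then maps a poor percentile on a given metric to candidate efficiency measures.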

  19. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  20. Ernst Julius Öpik's (1916) note on the theory of explosion cratering on the Moon's surface—The complex case of a long-overlooked benchmark paper

    Science.gov (United States)

    Racki, Grzegorz; Koeberl, Christian; Viik, Tõnu; Jagt-Yazykova, Elena A.; Jagt, John W. M.

    2014-10-01

    High-velocity impact as a common phenomenon in planetary evolution was ignored until well into the twentieth century, mostly because of inadequate understanding of cratering processes. An eight-page note, published in Russian by the young Ernst Julius Öpik, a great Estonian astronomer, was among the key selenological papers, but due to the language barrier, it was barely known and mostly incorrectly cited. This particular paper is here intended to serve as an explanatory supplement to an English translation of Öpik's article, but also to document an early stage in our understanding of cratering. First, we outline the historical-biographical background of this benchmark paper, and second, a comprehensive discussion of its merits is presented, from past and present perspectives alike. In his theoretical research, Öpik analyzed the explosive formation of craters numerically, albeit in a very simple way. For the first time, he approximated relationships among minimal meteorite size, impact energy, and crater diameter; this scaling focused solely on the gravitational energy of excavating the crater (a "useful" working approach). This initial physical model, with a rational mechanical basis, was developed in a series of papers up to 1961. Öpik should certainly be viewed as the founder of the numerical simulation approach in planetary sciences. In addition, the present note also briefly describes Nikolai A. Morozov as a remarkable man, a forgotten Russian scientist and, surprisingly, the true initiator of Öpik's explosive impact theory. In fact, already between 1909 and 1911, Morozov probably was the first to consider conclusively that explosion craters would be circular, bowl-shaped depressions even when formed under different impact angles.
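Öpik's gravity-only scaling can be reproduced in a few lines: the excavated mass grows as the cube of the crater diameter and the lift height grows linearly with it, so the excavation energy goes as rho * g * d^4, and equating that to the projectile's kinetic energy yields a minimum projectile size. All coefficients below are illustrative choices, not Öpik's own values:

```python
import math

def excavation_energy(d_m, rho=2700.0, g=1.62, k=0.05):
    """Gravitational energy to excavate a bowl-shaped crater of diameter d:
    excavated mass scales as rho*d^3 and lift height as d, hence
    E = k * rho * g * d^4 with a dimensionless shape factor k (assumed)."""
    return k * rho * g * d_m ** 4

def projectile_diameter(d_crater_m, v_impact=20_000.0, rho_proj=3500.0):
    """Diameter of the smallest projectile whose kinetic energy
    (0.5 * m * v^2) covers the excavation energy."""
    mass = 2.0 * excavation_energy(d_crater_m) / v_impact ** 2
    return (6.0 * mass / (math.pi * rho_proj)) ** (1.0 / 3.0)

# Illustrative lunar case (g = 1.62 m/s^2 above): a 1 km crater.
d_proj = projectile_diameter(1000.0)
```

The projectile comes out orders of magnitude smaller than the crater, which is the qualitative insight that made explosive cratering, rather than low-velocity piling, the viable explanation for lunar craters.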

  1. Using the Tuning Methodology to design the founding benchmark competences for a new academic professional field: the case of Advanced Rehabilitation Technologies

    Directory of Open Access Journals (Sweden)

    Ann-Marie Hughes

    2016-05-01

Full Text Available Designing innovative high-quality educational programmes to meet the workforce needs in emerging interdisciplinary areas of practice can present challenges to academics, students, employers and industrial partners. This paper demonstrates how the Tuning Process successfully helped to construct benchmark learning outcomes and competences in the area of healthcare practice where advanced technologies are used to improve movement, namely Rehabilitation Technologies (RTs). The paper also discusses the engagement of patients, carers and carer organisations in the development of competences. Due to changing demographics, limited resources and the availability of technology, rehabilitation technologies are starting to be used for the assessment and treatment of patients. However, there are currently no European transnational Bachelor or Master programmes targeted at educating people for the design, development, use and evaluation of these technologies. The contemporary field is predominantly staffed and resourced by engineering scientists and clinicians who were primarily educated in their primary discipline. The first generations of rehabilitation technologists have established this specialist field through invention, perseverance, and collaborative working. However, there is now a recognition that new and complementary skill sets are required by future graduates, whether engineering scientists or clinicians, so as to better meet the needs of clients and the employment market, whether in the domains of industry, research, academia or clinical practice. This project demonstrates how a group of European specialist rehabilitation technologists, supported by educationalists, collaborated to identify and develop the core competences and learning outcomes required by future Master’s (second cycle) graduates in this new discipline. Building on the work of the Tuning Process and applying the principles embedded in the Bologna Process, future

  2. TRUST. I. A 3D externally illuminated slab benchmark for dust radiative transfer

    Science.gov (United States)

    Gordon, K. D.; Baes, M.; Bianchi, S.; Camps, P.; Juvela, M.; Kuiper, R.; Lunttila, T.; Misselt, K. A.; Natale, G.; Robitaille, T.; Steinacker, J.

    2017-07-01

    Context. The radiative transport of photons through arbitrary three-dimensional (3D) structures of dust is a challenging problem due to the anisotropic scattering of dust grains and strong coupling between different spatial regions. The radiative transfer problem in 3D is solved using Monte Carlo or Ray Tracing techniques as no full analytic solution exists for the true 3D structures. Aims: We provide the first 3D dust radiative transfer benchmark composed of a slab of dust with uniform density externally illuminated by a star. This simple 3D benchmark is explicitly formulated to provide tests of the different components of the radiative transfer problem including dust absorption, scattering, and emission. Methods: The details of the external star, the slab itself, and the dust properties are provided. This benchmark includes models with a range of dust optical depths fully probing cases that are optically thin at all wavelengths to optically thick at most wavelengths. The dust properties adopted are characteristic of the diffuse Milky Way interstellar medium. This benchmark includes solutions for the full dust emission including single photon (stochastic) heating as well as two simplifying approximations: one where all grains are considered in equilibrium with the radiation field and one where the emission is from a single effective grain with size-distribution-averaged properties. A total of six Monte Carlo codes and one Ray Tracing code provide solutions to this benchmark. Results: The solution to this benchmark is given as global spectral energy distributions (SEDs) and images at select diagnostic wavelengths from the ultraviolet through the infrared. Comparison of the results revealed that the global SEDs are consistent on average to a few percent for all but the scattered stellar flux at very high optical depths. The image results are consistent within 10%, again except for the stellar scattered flux at very high optical depths. The lack of agreement between
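
    The direct (unscattered) transmission through a uniform slab has a closed-form answer, exp(-τ), which makes it a handy sanity check for any Monte Carlo transport code. The sketch below is not one of the TRUST benchmark codes; it is a minimal toy that ignores scattering and dust emission entirely and only exercises the free-path sampling:

```python
import math
import random

def transmitted_fraction(tau_slab, n_photons, seed=0):
    """Monte Carlo estimate of direct transmission through a uniform,
    purely absorbing slab of optical depth tau_slab (no scattering).

    Each photon's interaction point is drawn from an exponential
    distribution in optical depth; the photon escapes if that point
    lies beyond the far face of the slab."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_photons):
        tau_interaction = -math.log(1.0 - rng.random())
        if tau_interaction > tau_slab:
            escaped += 1
    return escaped / n_photons

# The estimate should converge to the analytic result exp(-tau).
for tau in (0.1, 1.0, 5.0):
    mc = transmitted_fraction(tau, 200_000)
    print(f"tau={tau}: MC={mc:.4f}  exact={math.exp(-tau):.4f}")
```

At high optical depth the surviving-photon statistics degrade, which mirrors why the scattered flux at very high optical depths is the hardest quantity for the benchmarked codes to agree on.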

  3. Paternity tests in Mexico: Results obtained in 3005 cases.

    Science.gov (United States)

    García-Aceves, M E; Romero Rentería, O; Díaz-Navarro, X X; Rangel-Villalobos, H

    2018-04-01

    National and international reports on paternity testing activity scarcely include information from Mexico and other Latin American countries. We therefore report results from 3005 paternity cases analyzed over a period of five years in a Mexican paternity testing laboratory. Motherless tests were the most frequent (77.27%), followed by trio cases (20.70%); the remaining 2.04% comprised different cases of kinship reconstruction. The paternity exclusion rate was 29.58%, higher than, but within the range of, that reported by the American Association of Blood Banks (average 24.12%). We detected 65 mutations, most of them one-step (93.8%); the remainder were two-step mutations (6.2%). We were thus able to estimate the paternal mutation rate for 17 different STR loci: 0.0018 (95% CI 0.0005-0.0047). Five triallelic patterns and 12 suspected null alleles were detected during this period; however, re-amplification of these samples with a different Human Identification (HID) kit confirmed the homozygous genotypes, which suggests that most of these exclusions are actually one-step mutations. HID kits with ≥20 STRs detected more exclusions, diminishing the rate of inconclusive results with isolated exclusions. The Powerplex 21 kit (20 STRs) and Powerplex Fusion kit (22 STRs) offered a similar paternity index (PI) (p = 0.379) and average number of exclusions (PE) (p = 0.339) when a daughter was involved in motherless tests. In brief, besides reporting forensic parameters from paternity tests in Mexico, the results describe improvements in solving motherless paternity tests using HID kits with ≥20 STRs instead of one including 15 STRs. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
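
    The paternal mutation rate reported above is a count-based estimate with a confidence interval. As a rough illustration only (the authors' exact method is not stated in the abstract), a Poisson count of mutations with a normal (Wald) approximation can be sketched as follows; the denominator of 36,000 typed paternal meioses is entirely hypothetical:

```python
import math

def poisson_rate_ci(k, n_meioses, z=1.96):
    """Point estimate and approximate 95% CI for a per-meiosis mutation
    rate, given k observed mutations over n_meioses scored meioses.
    Uses a normal (Wald) approximation on the Poisson count; exact
    methods give wider intervals for small k."""
    rate = k / n_meioses
    half = z * math.sqrt(k) / n_meioses
    return rate, max(rate - half, 0.0), rate + half

# Hypothetical denominator for illustration: the real figure would be
# the total number of typed paternal meioses across the 17 STR loci.
rate, lo, hi = poisson_rate_ci(65, 36_000)
print(f"rate={rate:.4f}  approx 95% CI ({lo:.4f}, {hi:.4f})")
```
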

  4. Benchmarking monthly homogenization algorithms

    Science.gov (United States)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
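
    The first of the performance metrics listed above, the centered root-mean-square error, can be written in a few lines. This is a generic sketch, not the HOME project's evaluation code: each series has its own mean removed before differencing, so a constant offset between the homogenized and true series does not count as an error.

```python
import math

def centered_rmse(candidate, truth):
    """Centered RMSE between a homogenized series and the true
    homogeneous series: both series are mean-centered first, so only
    shape differences (e.g. residual breaks) contribute."""
    n = len(candidate)
    mc = sum(candidate) / n
    mt = sum(truth) / n
    return math.sqrt(
        sum(((c - mc) - (t - mt)) ** 2 for c, t in zip(candidate, truth)) / n
    )

# A series offset by a constant has zero centered RMSE ...
print(centered_rmse([1.0, 2.0, 3.0], [2.0, 3.0, 4.0]))  # 0.0
# ... while a spurious break shows up directly.
print(centered_rmse([0.0, 0.0, 1.0, 1.0], [0.0, 0.0, 0.0, 0.0]))  # 0.5
```
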

  5. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  6. Full scale turbine-missile casing exit tests

    International Nuclear Information System (INIS)

    Yoshimura, H.R.; Schamaun, J.T.; Sliter, G.E.

    1979-01-01

    Two full-scale tests have simulated the impact of a fragment from a failed turbine disk upon the steel casing of a low-pressure steam turbine with the objective of providing data for making more realistic assessments of turbine missile effects for nuclear power plant designers. Data were obtained on both the energy-absorbing mechanisms of the impact process and the post-impact trajectory of the fragment. (orig.)

  7. DWPF PCCS version 2.0 test case

    International Nuclear Information System (INIS)

    Brown, K.G.; Pickett, M.A.

    1992-01-01

    To verify the operation of the Product Composition Control System (PCCS), a test case specific to DWPF operation was developed. The values and parameters necessary to demonstrate proper DWPF product composition control have been determined and are presented in this paper. If this control information (i.e., for transfers and analyses) is entered into the PCCS as illustrated in this paper, and the results obtained correspond to the independently generated results, it can safely be said that the PCCS is operating correctly and can thus be used to control the DWPF. The independent results for this test case will be generated and enumerated in a future report. This test case was constructed along the lines of normal DWPF operation. Many essential parameters are internal to the PCCS (e.g., property constraint and variance information) and can only be manipulated by personnel knowledgeable of the Symbolics® hardware and software. The validity of these parameters will rely on induction from observed PCCS results. Key process control values are entered into the PCCS as they would be during normal operation. Examples of the screens used to input specific process control information are provided. These inputs should be entered into the PCCS database, and the results generated should be checked against the independent, computed results to confirm the validity of the PCCS

  8. A Human Proximity Operations System test case validation approach

    Science.gov (United States)

    Huber, Justin; Straub, Jeremy

    A Human Proximity Operations System (HPOS) poses numerous risks in a real-world environment. These risks range from mundane tasks such as avoiding walls and fixed obstacles to the critical need to keep people and processes safe in the context of the HPOS's situation-specific decision making. Validating the performance of an HPOS, which must operate in a real-world environment, is an ill-posed problem due to the complexity introduced by erratic (non-computer) actors. In order to prove the HPOS's usefulness, test cases must be generated to simulate possible actions of these actors, so that the HPOS can be shown to perform safely in the environments where it will be operated. The HPOS must demonstrate its ability to be as safe as a human across a wide range of foreseeable circumstances. This paper evaluates the use of test cases to validate HPOS performance and utility. It considers an HPOS's safe performance in the context of a common human activity, moving through a crowded corridor, and extrapolates from this to the suitability of using test cases for AI validation in other areas of prospective application.

  9. Sensitivity analysis methods and a biosphere test case implemented in EIKOS

    International Nuclear Information System (INIS)

    Ekstroem, P.A.; Broed, R.

    2006-05-01

    Computer-based models can be used to approximate real-life processes. These models are usually based on mathematical equations that depend on several variables, so the predictive capability of a model is limited by the uncertainty in their values. Sensitivity analysis is used to apportion the relative importance each uncertain input parameter has on the output variation, and is therefore an essential tool in simulation modelling and in performing risk assessments. Simple sensitivity analysis techniques based on fitting the output to a linear equation, for example correlation or linear regression coefficients, are often used. These methods work well for linear models, but for non-linear models their sensitivity estimates are not accurate, and models of complex natural systems are usually non-linear. Within the scope of this work, various sensitivity analysis methods that can cope with linear, non-linear and non-monotone problems have been implemented in a software package, EIKOS, written in the Matlab language. The following sensitivity analysis methods are supported by EIKOS: Pearson product moment correlation coefficient (CC), Spearman rank correlation coefficient (RCC), partial (rank) correlation coefficients (PCC), standardized (rank) regression coefficients (SRC), Sobol' method, Jansen's alternative, the Extended Fourier Amplitude Sensitivity Test (EFAST) as well as the classical FAST method, and the Smirnov and Cramer-von Mises tests. A graphical user interface has also been developed, from which the user can easily load or call the model and perform a sensitivity analysis as well as an uncertainty analysis. The implemented sensitivity analysis methods have been benchmarked with well-known test functions and compared with other sensitivity analysis software, with successful results. An illustration of the applicability of EIKOS is added to the report. The test case used is a landscape model consisting of several linked

  10. Sensitivity analysis methods and a biosphere test case implemented in EIKOS

    Energy Technology Data Exchange (ETDEWEB)

    Ekstroem, P.A.; Broed, R. [Facilia AB, Stockholm, (Sweden)

    2006-05-15

    Computer-based models can be used to approximate real-life processes. These models are usually based on mathematical equations that depend on several variables, so the predictive capability of a model is limited by the uncertainty in their values. Sensitivity analysis is used to apportion the relative importance each uncertain input parameter has on the output variation, and is therefore an essential tool in simulation modelling and in performing risk assessments. Simple sensitivity analysis techniques based on fitting the output to a linear equation, for example correlation or linear regression coefficients, are often used. These methods work well for linear models, but for non-linear models their sensitivity estimates are not accurate, and models of complex natural systems are usually non-linear. Within the scope of this work, various sensitivity analysis methods that can cope with linear, non-linear and non-monotone problems have been implemented in a software package, EIKOS, written in the Matlab language. The following sensitivity analysis methods are supported by EIKOS: Pearson product moment correlation coefficient (CC), Spearman rank correlation coefficient (RCC), partial (rank) correlation coefficients (PCC), standardized (rank) regression coefficients (SRC), Sobol' method, Jansen's alternative, the Extended Fourier Amplitude Sensitivity Test (EFAST) as well as the classical FAST method, and the Smirnov and Cramer-von Mises tests. A graphical user interface has also been developed, from which the user can easily load or call the model and perform a sensitivity analysis as well as an uncertainty analysis. The implemented sensitivity analysis methods have been benchmarked with well-known test functions and compared with other sensitivity analysis software, with successful results. An illustration of the applicability of EIKOS is added to the report. The test case used is a landscape model consisting of several

  11. OWL2 benchmarking for the evaluation of knowledge based systems.

    Directory of Open Access Journals (Sweden)

    Sher Afgun Khan

    OWL2 semantics are becoming increasingly popular for real-domain applications such as gene engineering and health MIS. The present work identifies the research gap that negligible attention has been paid to the performance evaluation of knowledge base systems (KBS) using OWL2 semantics. To fill this gap, an OWL2 benchmark for the evaluation of KBS is proposed. The proposed benchmark addresses the foundational blocks of an ontology benchmark, i.e. data schema, workload and performance metrics. The benchmark is tested on memory-based, file-based, relational-database and graph-based KBS for performance and scalability measures. The results show that the proposed benchmark is able to evaluate the behaviour of different state-of-the-art KBS on OWL2 semantics. On the basis of the results, end users (i.e. domain experts) would be able to select a suitable KBS appropriate for their domain.
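
    The three foundational blocks named above (data schema, workload, performance metrics) share a simple generic shape: fixed data, a set of query functions, and a timing metric. The harness below sketches that shape only; it is not the proposed OWL2 benchmark, the in-memory dictionary merely stands in for a KBS, and the two "queries" are invented placeholders:

```python
import time

def run_workload(workload_fns, repeats=5):
    """Tiny benchmark harness: run each named workload function
    several times and report the best wall-clock time (min over
    repeats damps scheduler noise). A real KBS benchmark would also
    scale the data schema to measure scalability."""
    results = {}
    for name, fn in workload_fns.items():
        times = []
        for _ in range(repeats):
            t0 = time.perf_counter()
            fn()
            times.append(time.perf_counter() - t0)
        results[name] = min(times)
    return results

# Stand-in "store" and "queries"; a real run would issue OWL2 queries
# against each knowledge base system under test.
store = {i: i * i for i in range(10_000)}
report = run_workload({
    "point_lookup": lambda: store[4321],
    "full_scan":    lambda: sum(store.values()),
})
for name, t in report.items():
    print(f"{name}: {t * 1e6:.1f} us")
```
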

  12. Higgs pair production: choosing benchmarks with cluster analysis

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho, Alexandra; Dall’Osso, Martino; Dorigo, Tommaso [Dipartimento di Fisica e Astronomia and INFN, Sezione di Padova,Via Marzolo 8, I-35131 Padova (Italy); Goertz, Florian [CERN,1211 Geneva 23 (Switzerland); Gottardo, Carlo A. [Physikalisches Institut, Universität Bonn,Nussallee 12, 53115 Bonn (Germany); Tosi, Mia [CERN,1211 Geneva 23 (Switzerland)

    2016-04-20

    New physics theories often depend on a large number of free parameters. The phenomenology they predict for fundamental physics processes is in some cases drastically affected by the precise value of those free parameters, while in other cases it is left basically invariant at the level of detail experimentally accessible. When designing a strategy for the analysis of experimental data in the search for a signal predicted by a new physics model, it appears advantageous to categorize the parameter space describing the model according to the corresponding kinematical features of the final state. A multi-dimensional test statistic can be used to gauge the degree of similarity in the kinematics predicted by different models; a clustering algorithm using that metric may allow the division of the space into homogeneous regions, each of which can be successfully represented by a benchmark point. Searches targeting those benchmarks are then guaranteed to be sensitive to a large area of the parameter space. In this document we show a practical implementation of the above strategy for the study of non-resonant production of Higgs boson pairs in the context of extensions of the standard model with anomalous couplings of the Higgs bosons. A non-standard value of those couplings may significantly enhance the Higgs boson pair-production cross section, such that the process could be detectable with the data that the LHC will collect in Run 2.
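
    The clustering step described above can be illustrated with a plain k-means on toy two-dimensional "kinematic" features. The paper's dedicated multi-dimensional test statistic is replaced here by Euclidean distance, and the feature names are invented; each final centroid plays the role of a benchmark point representing one homogeneous region of parameter space.

```python
import math
import random

def kmeans(points, k, iters=50, seed=3):
    """Plain Euclidean k-means (a stand-in for clustering with the
    paper's kinematic test statistic). Returns the final centroids,
    i.e. the 'benchmark points', and the point clusters they represent."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            clusters[i].append(p)
        centroids = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[j]
            for j, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated blobs of model points in a toy feature space
# (think: mean Higgs-pair mass vs. mean pair pT) collapse onto two benchmarks.
rng = random.Random(0)
blob_a = [(rng.gauss(0.0, 0.1), rng.gauss(0.0, 0.1)) for _ in range(50)]
blob_b = [(rng.gauss(5.0, 0.1), rng.gauss(5.0, 0.1)) for _ in range(50)]
centroids, clusters = kmeans(blob_a + blob_b, k=2)
print(sorted(len(cl) for cl in clusters))
```
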

  13. Review for session K - benchmarks

    International Nuclear Information System (INIS)

    McCracken, A.K.

    1980-01-01

    Eight of the papers to be considered in Session K are directly concerned, at least in part, with the Pool Critical Assembly (P.C.A.) benchmark at Oak Ridge. The remaining seven papers in this session, the subject of this review, are concerned with a variety of topics related to the general theme of Benchmarks and will be considered individually

  14. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multiple campuses or a…

  15. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth

  16. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to p...

  17. EPA's Benchmark Dose Modeling Software

    Science.gov (United States)

    The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors apply benchmark dose (BMD) methods to EPA's human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...

  18. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  19. Benchmarking: A Process for Improvement.

    Science.gov (United States)

    Peischl, Thomas M.

    One problem with the outcome-based measures used in higher education is that they measure quantity but not quality. Benchmarking, or the use of some external standard of quality to measure tasks, processes, and outputs, is partially solving that difficulty. Benchmarking allows for the establishment of a systematic process to indicate if outputs…

  20. ZZ ECN-BUBEBO, ECN-Petten Burnup Benchmark Book, Inventories, Afterheat

    International Nuclear Information System (INIS)

    Kloosterman, Jan Leen

    1999-01-01

    Description of program or function: Contains experimental benchmarks which can be used for the validation of burnup code systems and accompanying data libraries. Although the benchmarks presented here are thoroughly described in the literature, it is in many cases not straightforward to retrieve unambiguously the correct input data and corresponding results from the benchmark descriptions. Furthermore, results that can easily be measured are sometimes difficult to calculate because of conversions to be made. Emphasis has therefore been put on clarifying the input of the benchmarks and on presenting the benchmark results in such a way that they can easily be calculated and compared. For more thorough descriptions of the benchmarks themselves, the literature referred to here should be consulted. This benchmark book is divided into 11 chapters/files containing the following in text and tabular form: chapter 1: Introduction; chapter 2: Burnup Credit Criticality Benchmark Phase 1-B; chapter 3: Yankee-Rowe Core V Fuel Inventory Study; chapter 4: H.B. Robinson Unit 2 Fuel Inventory Study; chapter 5: Turkey Point Unit 3 Fuel Inventory Study; chapter 6: Turkey Point Unit 3 Afterheat Power Study; chapter 7: Dickens Benchmark on Fission Product Energy Release of U-235; chapter 8: Dickens Benchmark on Fission Product Energy Release of Pu-239; chapter 9: Yarnell Benchmark on Decay Heat Measurements of U-233; chapter 10: Yarnell Benchmark on Decay Heat Measurements of U-235; chapter 11: Yarnell Benchmark on Decay Heat Measurements of Pu-239

  1. X447 EBR-II Experiment Benchmark for Verification of Audit Code of SFR Metal Fuel

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Yong Won; Bae, Moo-Hoon; Shin, Andong; Suh, Namduk [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2016-10-15

    In KINS (Korea Institute of Nuclear Safety), a project has been started to develop regulatory technology for the SFR system, including the fuel area, in preparation for the audit calculation of the PGSFR licensing review. To evaluate fuel integrity and safety during irradiation, a fuel performance code must be used for the audit calculation. In this study, a benchmark analysis is performed to verify the new code system, using X447 EBR-II experiment data; additionally, a sensitivity analysis with respect to the coolant mass flux is performed. For LWR fuel performance modeling, various advanced models have been proposed and validated against extensive in-reactor test results, but due to the limited experience of SFR operation, the current understanding of SFR fuel behavior remains limited. The fuel composition of the X447 assembly is U-10Zr, which PGSFR also uses in its initial phase, so the X447 EBR-II experiment was selected for the benchmark analysis. To prepare the licensing of PGSFR, regulatory audit technologies for SFR must be secured. In terms of verification, the results of the benchmark and sensitivity analyses are considered reasonable.

  2. X447 EBR-II Experiment Benchmark for Verification of Audit Code of SFR Metal Fuel

    International Nuclear Information System (INIS)

    Choi, Yong Won; Bae, Moo-Hoon; Shin, Andong; Suh, Namduk

    2016-01-01

    In KINS (Korea Institute of Nuclear Safety), a project has been started to develop regulatory technology for the SFR system, including the fuel area, in preparation for the audit calculation of the PGSFR licensing review. To evaluate fuel integrity and safety during irradiation, a fuel performance code must be used for the audit calculation. In this study, a benchmark analysis is performed to verify the new code system, using X447 EBR-II experiment data; additionally, a sensitivity analysis with respect to the coolant mass flux is performed. For LWR fuel performance modeling, various advanced models have been proposed and validated against extensive in-reactor test results, but due to the limited experience of SFR operation, the current understanding of SFR fuel behavior remains limited. The fuel composition of the X447 assembly is U-10Zr, which PGSFR also uses in its initial phase, so the X447 EBR-II experiment was selected for the benchmark analysis. To prepare the licensing of PGSFR, regulatory audit technologies for SFR must be secured. In terms of verification, the results of the benchmark and sensitivity analyses are considered reasonable

  3. Benchmark problems for numerical implementations of phase field models

    International Nuclear Information System (INIS)

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; Warren, J.; Heinonen, O. G.

    2016-01-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
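
    The spinodal-decomposition benchmark mentioned above is typically posed as a Cahn-Hilliard equation. The sketch below is a deliberately minimal 1-D explicit-Euler version with invented parameters, not the CHiMaD/NIST problem specification; it only illustrates the underlying physics (uphill diffusion amplifying noise in an unstable mixture) and the mass-conservation property a benchmark would check.

```python
import random

def laplacian(u, dx):
    """Second-order central-difference Laplacian on a periodic 1-D grid."""
    n = len(u)
    return [(u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n]) / dx**2
            for i in range(n)]

def cahn_hilliard_1d(c, steps, dx=1.0, dt=0.01, M=1.0, kappa=1.0):
    """Explicit-Euler Cahn-Hilliard evolution on a periodic 1-D grid:
        dc/dt = M * lap( c**3 - c - kappa * lap(c) )
    with a double-well free energy. Parameters are illustrative only;
    dt is chosen small enough for explicit stability at dx=1."""
    for _ in range(steps):
        mu = [ci**3 - ci - kappa * li for ci, li in zip(c, laplacian(c, dx))]
        c = [ci + dt * M * li for ci, li in zip(c, laplacian(mu, dx))]
    return c

# Start from a noisy near-uniform mixture and let it begin to phase-separate.
rng = random.Random(7)
c0 = [rng.uniform(-0.01, 0.01) for _ in range(64)]
c1 = cahn_hilliard_1d(c0, steps=2000)
# The divergence form of the equation conserves total solute exactly.
print(round(abs(sum(c1) - sum(c0)), 6))
```

A quantitative benchmark run would additionally track the free-energy decrease over time and compare against reference solutions, which is precisely what shared benchmark problems make possible across codes.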

  4. Concept Test of a Smoking Cessation Smart Case.

    Science.gov (United States)

    Comello, Maria Leonora G; Porter, Jeannette H

    2018-04-05

    Wearable/portable devices that unobtrusively detect smoking and contextual data offer the potential to provide Just-In-Time Adaptive Intervention (JITAI) support for mobile cessation programs. Little has been reported on the development of these technologies. To address this gap, we offer a case report of users' experiences with a prototype "smart" cigarette case that automatically tracks time and location of smoking. Small-scale user-experience studies are typical of iterative product design and are especially helpful when proposing novel ideas. The purpose of the study was to assess concept acceptability and potential for further development. We tested the prototype case with a small sample of potential users (n = 7). Participants used the hardware/software for 2 weeks and reconvened for a 90-min focus group to discuss experiences and provide feedback. Participants liked the smart case in principle but found the prototype too bulky for easy portability. The potential for the case to convey positive messages about self also emerged as a finding. Participants indicated willingness to pay for improved technology (USD $15-$60 on a one-time basis). The smart case is a viable concept, but design detail is critical to user acceptance. Future research should examine designs that maximize convenience and that explore the device's ability to cue intentions and other cognitions that would support cessation. This study is the first to our knowledge to report formative research on the smart case concept. This initial exploration provides insights that may be helpful to other developers of JITAI-support technology.

  5. Developing Benchmarking Criteria for CO2 Emissions

    Energy Technology Data Exchange (ETDEWEB)

    Neelis, M.; Worrell, E.; Mueller, N.; Angelini, T. [Ecofys, Utrecht (Netherlands); Cremer, C.; Schleich, J.; Eichhammer, W. [The Fraunhofer Institute for Systems and Innovation research, Karlsruhe (Germany)

    2009-02-15

    A European Union (EU) wide greenhouse gas (GHG) allowance trading scheme (EU ETS) was implemented in the EU in 2005. In the first two trading periods of the scheme (running up to 2012), free allocation based on historical emissions was the main methodology for allocation of allowances to existing installations. For the third trading period (2013 - 2020), the European Commission proposed in January 2008 a more important role for auctioning of allowances rather than free allocation. (Transitional) free allocation of allowances to industrial sectors will be determined via harmonized allocation rules, based on benchmarking where feasible. In general terms, a benchmark-based method allocates allowances based on a certain amount of emissions per unit of productive output (i.e. the benchmark). This study aims to derive criteria for an allocation methodology for the EU Emission Trading Scheme based on benchmarking for the period 2013 - 2020. To test the feasibility of the criteria, we apply them to four example product groups: iron and steel, pulp and paper, lime, and glass. This study is based on the Commission proposal for a revised ETS directive put forward on 23 January 2008 and does not take into account any changes made to this proposal during the co-decision procedure that resulted in the adoption of the Energy and Climate Change package in December 2008.
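The core arithmetic of a benchmark-based allocation, as described in the abstract above, can be sketched in a few lines. The benchmark value and activity level below are hypothetical illustration numbers, not actual EU ETS benchmark values:

```python
# Illustrative sketch of benchmark-based allowance allocation.
# The benchmark value (1.3 t CO2/t) and production figure are hypothetical.

def allocate(benchmark_t_co2_per_t: float, activity_t: float) -> float:
    """Free allocation = product benchmark (t CO2 / t product) x activity level (t product)."""
    return benchmark_t_co2_per_t * activity_t

# Example: a plant producing 1,000,000 t of product against a
# hypothetical benchmark of 1.3 t CO2 per tonne of product.
allowances = allocate(1.3, 1_000_000)
print(f"free allocation: {allowances:,.0f} allowances")
```

Under such a scheme, installations emitting more than the benchmark per unit of output must buy the difference; installations emitting less can sell their surplus.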

  6. Large block migration experiments: INTRAVAL phase 1, Test Case 9

    Energy Technology Data Exchange (ETDEWEB)

    Gureghian, A.B.; Noronha, C.J. (Battelle, Willowbrook, IL (USA). Office of Waste Technology Development); Vandergraaf, T.T. (Atomic Energy of Canada Ltd., Ottawa, ON (Canada))

    1990-08-01

    The development of INTRAVAL Test Case 9, as presented in this report, was made possible by a past subsidiary agreement to the bilateral cooperative agreement between the US Department of Energy (DOE) and Atomic Energy of Canada Limited (AECL) encompassing various aspects of nuclear waste disposal research. The experimental aspect of this test case, which included a series of laboratory experiments designed to quantify the migration of tracers in a single, natural fracture, was undertaken by AECL. The numerical simulation of the results of these experiments was performed by the Battelle Office of Waste Technology Development (OWTD) by calibrating an in-house analytical code, FRACFLO, which is capable of predicting radionuclide transport in an idealized fractured rock. Three tracer migration experiments were performed, using nonsorbing uranine dye for two of them and sorbing Cs-137 for the third. In addition, separate batch experiments were performed to determine the fracture surface and rock matrix sorption coefficients for Cs-137. The two uranine tracer migration experiments were used to calculate the average fracture aperture and to calibrate the model for the fracture dispersivity and matrix diffusion coefficient. The predictive capability of the model was then tested by simulating the third, Cs-137, tracer test without changing the parameter values determined from the other experiments. Breakthrough curves of both the experimental and numerical results obtained at the outlet face of the fracture are presented for each experiment. The reported spatial concentration profiles for the rock matrix are based solely on numerical predictions. 22 refs., 12 figs., 8 tabs.
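To illustrate the kind of breakthrough curve referred to above, here is a classic one-dimensional advection-dispersion solution (Ogata-Banks, continuous source). It is a generic textbook model, not the actual FRACFLO formulation, and the parameter values are arbitrary:

```python
# Hedged sketch: 1-D advection-dispersion breakthrough curve
# (Ogata-Banks solution for a continuous source). This is a generic
# analytical transport model of the kind a code like FRACFLO calibrates;
# it is NOT the actual FRACFLO formulation, and the parameters are arbitrary.
import math

def breakthrough(x, t, v, D):
    """Relative concentration C/C0 at distance x (m) and time t (s),
    for flow velocity v (m/s) and dispersion coefficient D (m^2/s)."""
    a = (x - v * t) / (2.0 * math.sqrt(D * t))
    b = (x + v * t) / (2.0 * math.sqrt(D * t))
    return 0.5 * (math.erfc(a) + math.exp(v * x / D) * math.erfc(b))

# At late time the front has long since passed the observation point,
# so C/C0 approaches 1.
print(round(breakthrough(x=0.5, t=1e5, v=1e-4, D=1e-6), 3))
```

Fitting such a curve to measured outlet concentrations is how dispersivity and (in dual-porosity extensions) matrix diffusion coefficients are typically back-calculated from tracer tests.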

  7. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: Improve the management and technical development of software-intensive systems; Have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and Identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R&D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths

  8. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. Benchmarking school nursing practice: the North West Regional Benchmarking Group

    OpenAIRE

    Littler, Nadine; Mullen, Margaret; Beckett, Helen; Freshney, Alice; Pinder, Lynn

    2016-01-01

    It is essential that the quality of care is reviewed regularly through robust processes such as benchmarking to ensure all outcomes and resources are evidence-based so that children and young people’s needs are met effectively. This article provides an example of the use of benchmarking in school nursing practice. Benchmarking has been defined as a process for finding, adapting and applying best practices (Camp, 1994). This concept was first adopted in the 1970s ‘from industry where it was us...

  10. Numisheet2005 Benchmark Analysis on Forming of an Automotive Deck Lid Inner Panel: Benchmark 1

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao Jian

    2005-01-01

    Numerical simulations of sheet metal forming processes have been a very challenging topic in industry. There are many computer codes and modeling techniques available today. However, there are many unknowns affecting prediction accuracy. Systematic benchmark tests are needed to accelerate future implementations and to serve as a reference. This report presents an international cooperative benchmark effort for an automotive deck lid inner panel. Predictions from simulations are analyzed and discussed against the corresponding experimental results. Correlations between the accuracy of each parameter of interest are discussed in this report.

  11. Calculation of a VVER-1000 reactor benchmark; Calcul d'un benchmark de reacteur VVER-1000

    Energy Technology Data Exchange (ETDEWEB)

    Dourougie, C

    1998-07-01

    In the framework of the FMDP (Fissile Materials Disposition Program) between the US and Russia, a benchmark was tested. The pin cells contain low enriched uranium (LEU) and mixed oxide (MOX) fuels. The calculations are done for a wide range of temperatures and soluble boron concentrations, under accident conditions. (A.L.B.)

  12. Towards a suite of test cases and a pycomodo library to assess and improve numerical methods in ocean models

    Science.gov (United States)

    Garnier, Valérie; Honnorat, Marc; Benshila, Rachid; Boutet, Martial; Cambon, Gildas; Chanut, Jérome; Couvelard, Xavier; Debreu, Laurent; Ducousso, Nicolas; Duhaut, Thomas; Dumas, Franck; Flavoni, Simona; Gouillon, Flavien; Lathuilière, Cyril; Le Boyer, Arnaud; Le Sommer, Julien; Lyard, Florent; Marsaleix, Patrick; Marchesiello, Patrick; Soufflet, Yves

    2016-04-01

    The COMODO group (http://www.comodo-ocean.fr) gathers developers of global and limited-area ocean models (NEMO, ROMS_AGRIF, S, MARS, HYCOM, S-TUGO) with the aim of addressing well-identified numerical issues. In order to evaluate existing models, to improve numerical approaches, methods and concepts (such as effective resolution), to assess the behavior of numerical models in complex hydrodynamical regimes, and to propose guidelines for the development of future ocean models, a benchmark suite is proposed that covers both idealized test cases dedicated to targeted properties of numerical schemes and more complex test cases allowing evaluation of the coherence of the kernel. The benchmark suite is built to study separately, then together, the main components of an ocean model: the continuity and momentum equations, the advection-diffusion of tracers, the vertical coordinate design, and the time-stepping algorithms. The test cases are chosen for their simplicity of implementation (analytic initial conditions), for their capacity to focus on one or a few schemes or parts of the kernel, for the availability of analytical solutions or accurate diagnoses, and lastly for simulating a key oceanic process in a controlled environment. Idealized test cases (advection-diffusion of tracers, upwelling, lock exchange, baroclinic vortex, adiabatic motion along bathymetry) allow verification of properties of numerical schemes, and bring to light numerical issues that remain undetected in realistic configurations (trajectory of a barotropic vortex, interaction of a current with topography). When complexity in the simulated dynamics grows (internal wave, unstable baroclinic jet), the sharing of the same experimental designs by different existing models is useful to obtain a measure of model sensitivity to numerical choices (Soufflet et al., 2016). Lastly, test cases help in understanding the submesoscale influence on the dynamics (Couvelard et al., 2015). Such a benchmark suite is an interesting
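An idealized test case of the kind described can be very small. The sketch below, in the spirit of the tracer-advection cases (not an actual COMODO test case), advects a sine wave with a first-order upwind scheme on a periodic 1-D domain, where the analytic solution is simply the shifted initial condition, and reports the L2 error:

```python
# Minimal sketch of an idealized tracer-advection test case: 1-D linear
# advection on a periodic domain, where the exact solution is the shifted
# initial profile. Scheme and parameters are illustrative only.
import math

def advect_upwind(c, cfl, nsteps):
    """First-order upwind advection (positive velocity, periodic), cfl <= 1."""
    c = list(c)
    for _ in range(nsteps):
        c = [c[i] - cfl * (c[i] - c[i - 1]) for i in range(len(c))]
    return c

n = 100
c0 = [math.sin(2 * math.pi * i / n) for i in range(n)]
cfl, nsteps = 0.5, 100                      # tracer travels 50 cells
exact = [c0[(i - 50) % n] for i in range(n)]
c = advect_upwind(c0, cfl, nsteps)
err = math.sqrt(sum((a - b) ** 2 for a, b in zip(c, exact)) / n)
print(f"L2 error of upwind scheme: {err:.3f}")  # the scheme's numerical diffusion shows up here
```

Because the exact solution is known analytically, the same tiny configuration can be run through several model kernels and the errors compared directly, which is exactly the role such cases play in a benchmark suite.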

  13. Evaluation of the medical devices benchmark materials in the controlled human patch testing and in the RhE in vitro skin irritation protocol.

    NARCIS (Netherlands)

    Kandárová, Helena; Bendova, Hana; Letasiova, Silvia; Coleman, Kelly P; De Jong, Wim H; Jírova, Dagmar

    2018-01-01

    Several irritants were used in the in vitro irritation medical device round robin. The objective of this study was to verify their irritation potential using the human patch test (HPT), an in vitro assay, and in vivo data. The irritants were lactic acid (LA), heptanoic acid (HA), sodium dodecyl

  14. ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms

    DEFF Research Database (Denmark)

    Aumüller, Martin; Bernhardsson, Erik; Faithfull, Alexander

    2017-01-01

    This paper describes ANN-Benchmarks, a tool for evaluating the performance of in-memory approximate nearest neighbor algorithms. It provides a standard interface for measuring the performance and quality achieved by nearest neighbor algorithms on different standard data sets. It supports several... visualise these as images, plots, and websites with interactive plots. ANN-Benchmarks aims to provide a constantly updated overview of the current state of the art of k-NN algorithms. In the short term, this overview allows users to choose the correct k-NN algorithm and parameters... for their similarity search task; in the longer term, algorithm designers will be able to use this overview to test and refine automatic parameter tuning. The paper gives an overview of the system, evaluates the results of the benchmark, and points out directions for future work. Interestingly, very different...
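The core quality metric such a tool reports is recall: how many of the true nearest neighbours an approximate method returns, measured against an exact brute-force scan. The toy sketch below illustrates that measurement only; the "index" is a deliberately naive random-projection bucketing, not any algorithm from the ANN-Benchmarks suite:

```python
# Toy sketch of the recall measurement at the heart of ANN benchmarking:
# compare an approximate candidate set against exact brute-force neighbours.
# The random-projection "index" is purely illustrative.
import math
import random

random.seed(0)
dim, npts, k = 16, 500, 10
data = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(npts)]

def brute_force(q):
    """Exact k nearest neighbours by full scan (the ground truth)."""
    return sorted(range(npts), key=lambda i: math.dist(q, data[i]))[:k]

# Approximate index: bucket points by the sign pattern of a few random projections.
planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(4)]
def sig(p):
    return tuple(sum(x * w for x, w in zip(p, pl)) > 0 for pl in planes)

buckets = {}
for i, p in enumerate(data):
    buckets.setdefault(sig(p), []).append(i)

def approx(q):
    """Search only the query's own bucket: fast, but may miss true neighbours."""
    cand = buckets.get(sig(q), [])
    return sorted(cand, key=lambda i: math.dist(q, data[i]))[:k]

q = [random.gauss(0, 1) for _ in range(dim)]
exact, fast = set(brute_force(q)), set(approx(q))
recall = len(exact & fast) / k
print(f"recall@{k} = {recall:.1f}")
```

A real benchmark run repeats this over many queries and also records queries per second, tracing out the recall-versus-throughput trade-off curves that tools like ANN-Benchmarks plot.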

  15. Automated Test Case Generation for an Autopilot Requirement Prototype

    Science.gov (United States)

    Giannakopoulou, Dimitra; Rungta, Neha; Feary, Michael

    2011-01-01

    Designing safety-critical automation with robust human interaction is a difficult task that is susceptible to a number of known Human-Automation Interaction (HAI) vulnerabilities. It is therefore essential to develop automated tools that provide support both in the design and rapid evaluation of such automation. The Automation Design and Evaluation Prototyping Toolset (ADEPT) enables the rapid development of an executable specification for automation behavior and user interaction. ADEPT supports a number of analysis capabilities, thus enabling the detection of HAI vulnerabilities early in the design process, when modifications are less costly. In this paper, we advocate the introduction of a new capability to model-based prototyping tools such as ADEPT. The new capability is based on symbolic execution, which allows us to automatically generate quality test suites based on the system design. Symbolic execution is used to generate both user input and test oracles: user input drives the testing of the system implementation, and test oracles ensure that the system behaves as designed. We present early results in the context of a component of the Autopilot system modeled in ADEPT, and discuss the challenges of test case generation in the HAI domain.
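The idea of deriving both inputs and oracles from the design can be shown with a toy example. The mode-logic function, its path conditions, and the input domain below are all hypothetical; a real tool like the symbolic execution described above would derive the path conditions automatically and solve them symbolically, whereas this sketch simply searches a small discrete domain:

```python
# Hedged sketch of path-based test generation. The mode logic and path
# conditions are hypothetical; real symbolic execution derives and solves
# the conditions automatically rather than searching a grid.

def mode(alt, vs):                       # toy autopilot mode logic (hypothetical)
    if alt < 200:
        return "LAND"
    if vs < -1000:
        return "ALERT"
    return "CRUISE"

# One predicate per program path; each generated test pairs a concrete
# input with the expected mode (the "oracle").
paths = {
    "LAND":   lambda alt, vs: alt < 200,
    "ALERT":  lambda alt, vs: alt >= 200 and vs < -1000,
    "CRUISE": lambda alt, vs: alt >= 200 and vs >= -1000,
}

tests = []
for expected, cond in paths.items():
    found = next((alt, vs)
                 for alt in range(0, 5000, 100)
                 for vs in range(-2000, 2000, 100)
                 if cond(alt, vs))
    tests.append((found, expected))

# Run the generated suite: every path is exercised and checked against its oracle.
for (alt, vs), expected in tests:
    assert mode(alt, vs) == expected
print(f"{len(tests)} path-covering tests generated")
```

The key property, mirrored from the abstract, is that each test carries its own oracle: the expected behavior comes from the design model, so a divergent implementation fails the assertion rather than passing silently.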

  16. Experimental impact testing and analysis of composite fan cases

    Science.gov (United States)

    Vander Klok, Andrew Joe

    For aircraft engine certification, one of the requirements is to demonstrate the ability of the engine to withstand a fan blade-out (FBO) event. A FBO event may be caused by fatigue failure of the fan blade itself or by impact damage from foreign objects such as bird strike. An un-contained blade can damage flight-critical engine components or even the fuselage. The design of a containment structure is related to numerous parameters such as the blade tip speed; blade material, size and shape; hub/tip diameter; fan case material, configuration, rigidity, etc. To investigate all parameters by spin experiments with a full-size rotor assembly can be prohibitively expensive. Gas gun experiments can generate useful data for the design of engine containment cases at much lower cost. To replicate damage modes similar to those on a fan case in FBO testing, the gas gun experiment has to be carefully designed. To investigate the experimental procedure and data acquisition techniques for FBO testing, a low-cost, small spin rig was first constructed. FBO tests were carried out with the small rig. The observed blade-to-fan-case interactions were similar to those reported using larger spin rigs. The small rig has potential in a variety of applications, from investigating FBO events and verifying concept designs of rotors to developing spin testing techniques. This rig was used in the development of the notched-blade releasing mechanism, a wire trigger method for synchronized data acquisition, high-speed video imaging, etc. A relationship between the notch depth and the release speed was developed and verified. Next, an original custom-designed spin testing facility was constructed. Driven by a 40 HP, 40,000 rpm air turbine, the spin rig is housed in a vacuum chamber of φ72 in x 40 in (1829 mm x 1016 mm). The heavily armored chamber is furnished with 9 viewports. This facility enables unprecedented investigations of FBO events. In parallel, a 15.4 ft (4.7 m) long φ4.1 in (105 mm

  17. Benchmarking Nuclear Power Plants

    International Nuclear Information System (INIS)

    Jakic, I.

    2016-01-01

    One of the main tasks an owner has is to keep its business competitive on the market while delivering its product. Being the owner of a nuclear power plant bears the same (or an even more complex and stern) responsibility due to safety risks and costs. In the past, nuclear power plant managements could (partly) ignore profit, or it was simply expected and to some degree assured through the various regulatory processes governing electricity rate design. It is obvious now that, with deregulation, utility privatization and a competitive electricity market, the key measures of success used at nuclear power plants must include traditional metrics of a successful business (return on investment, earnings and revenue generation) as well as those of plant performance, safety and reliability. In order to analyze the business performance of a (specific) nuclear power plant, benchmarking, as a well-established concept and usual method, was used. The domain was conservatively designed, with a well-adjusted framework, but the results still have limited application due to many differences, gaps and uncertainties. (author).

  18. Virtual machine performance benchmarking.

    Science.gov (United States)

    Langer, Steve G; French, Todd

    2011-10-01

    The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever-tightening reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High-performance requirements need to be carefully considered if they are to be executed in an environment where the running software has to pass through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising.
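A comparison like the one described hinges on running an identical probe on both platforms. The sketch below times fixed amounts of integer and floating-point work; it is a generic illustration of the metric, not the benchmark suite the authors used, and absolute numbers are machine-dependent:

```python
# Illustrative microbenchmark for two of the metrics described (integer and
# floating-point throughput). The value of such a probe for a
# physical-vs-virtual comparison lies in running the SAME code on both
# platforms; the absolute numbers mean little on their own.
import time

def probe(n=1_000_000):
    """Return (seconds of integer work, seconds of float work) for n iterations each."""
    t0 = time.perf_counter()
    acc = 0
    for i in range(n):
        acc += i * 3 // 2                 # integer multiply/divide/add
    t_int = time.perf_counter() - t0

    t0 = time.perf_counter()
    facc = 0.0
    x = 1.000001
    for _ in range(n):
        facc += x * 1.5 - 0.5             # floating-point multiply/add
    t_flt = time.perf_counter() - t0
    return t_int, t_flt

t_int, t_flt = probe()
print(f"int: {1.0 / t_int:.1f} Mops/s, float: {1.0 / t_flt:.1f} Mops/s")
```

Memory, disk, and network bandwidth would be probed the same way, with the bare-metal run serving as the baseline against which the virtualized numbers are normalized.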

  19. AER benchmark specification sheet

    International Nuclear Information System (INIS)

    Aszodi, A.; Toth, S.

    2009-01-01

    In the VVER-440/213 type reactors, the core outlet temperature field is monitored with in-core thermocouples, which are installed above 210 fuel assemblies. These measured temperatures are used in the determination of the fuel assembly powers, and they have an important role in the reactor power limitation. For these reasons, correct interpretation of the thermocouple signals is an important question. In order to interpret the signals correctly, knowledge of the coolant mixing in the assembly heads is necessary. Computational Fluid Dynamics (CFD) codes and experiments can help to better understand these mixing processes, and they can provide information which can support a more adequate interpretation of the thermocouple signals. This benchmark deals with the 3D CFD modeling of the coolant mixing in the heads of the profiled fuel assemblies with 12.2 mm rod pitch. Two assemblies of the 23rd cycle of the Paks NPP's Unit 3 are investigated. One of them has a symmetrical pin power profile and the other possesses an inclined profile. (authors)

  20. AER Benchmark Specification Sheet

    International Nuclear Information System (INIS)

    Aszodi, A.; Toth, S.

    2009-01-01

    In the WWER-440/213 type reactors, the core outlet temperature field is monitored with in-core thermocouples, which are installed above 210 fuel assemblies. These measured temperatures are used in the determination of the fuel assembly powers, and they have an important role in the reactor power limitation. For these reasons, correct interpretation of the thermocouple signals is an important question. In order to interpret the signals correctly, knowledge of the coolant mixing in the assembly heads is necessary. Computational fluid dynamics codes and experiments can help to better understand these mixing processes, and they can provide information which can support a more adequate interpretation of the thermocouple signals. This benchmark deals with the 3D computational fluid dynamics modeling of the coolant mixing in the heads of the profiled fuel assemblies with 12.2 mm rod pitch. Two assemblies of the twenty-third cycle of the Paks NPP's Unit 3 are investigated. One of them has a symmetrical pin power profile and the other possesses an inclined profile. (Authors)