WorldWideScience

Sample records for preliminary benchmarking comparisons

  1. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  2. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  3. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  4. Benchmarking

    OpenAIRE

    Meylianti S., Brigita

    1999-01-01

    Benchmarking has different meanings to different people. There are five types of benchmarking, namely internal benchmarking, competitive benchmarking, industry/functional benchmarking, process/generic benchmarking, and collaborative benchmarking. Each type of benchmarking has its own advantages as well as disadvantages. Therefore, it is important to know which kind of benchmarking is suitable for a specific application. This paper will discuss those five types of benchmarking in detail, includ...

  5. HEU benchmark calculations and LEU preliminary calculations for IRR-1

    International Nuclear Information System (INIS)

    Caner, M.; Shapira, M.; Bettan, M.; Nagler, A.; Gilat, J.

    2004-01-01

    We performed neutronics calculations for the Soreq Research Reactor, IRR-1. The calculations were done for the purpose of upgrading and benchmarking our codes and methods. The codes used were mainly WIMS-D/4 for cell calculations and the three-dimensional diffusion code CITATION for full core calculations. The experimental flux was obtained by gold wire activation methods and compared with our calculated flux profile. The IRR-1 is loaded with highly enriched uranium fuel assemblies of the plate type. In the framework of preparation for conversion to low enrichment fuel, additional calculations were done assuming the presence of LEU fresh fuel. In these preliminary calculations we investigated the effect of the increased U-238 loading, and the corresponding uranium density, on criticality and flux distributions. (author)

  6. Benchmark comparisons of evaluated nuclear data files

    International Nuclear Information System (INIS)

    Resler, D.A.; Howerton, R.J.; White, R.M.

    1994-05-01

    With the availability and maturity of several evaluated nuclear data files, it is timely to compare the results of integral tests with calculations using these different files. We discuss here our progress in making integral benchmark tests of the following nuclear data files: ENDL-94, ENDF/B-V and -VI, JENDL-3, JEF-2, and BROND-2. The methods used to process these evaluated libraries in a consistent way into applications files for use in Monte Carlo calculations are presented. Using these libraries, we are calculating and comparing to experiment k-eff for 68 fast critical assemblies of 233U, 235U, and 239Pu with reflectors of various materials and thicknesses.
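
    A comparison of the kind described in this abstract is usually summarized through C/E (calculated-over-experimental) ratios per library and assembly. The sketch below illustrates that bookkeeping only; all library names are from the abstract, but every numerical value and assembly name is an invented placeholder, not data from the report.

```python
# Illustrative C/E comparison across evaluated libraries.
# All k-eff values below are made-up placeholders, not report data.

experimental_keff = {"assembly_A": 1.0000, "assembly_B": 0.9998}

calculated_keff = {
    "ENDL-94":   {"assembly_A": 1.0012, "assembly_B": 0.9985},
    "ENDF/B-VI": {"assembly_A": 0.9996, "assembly_B": 1.0003},
}

def ce_ratios(calc, expt):
    """Return C/E ratios per library and per assembly."""
    return {
        lib: {asm: k / expt[asm] for asm, k in results.items()}
        for lib, results in calc.items()
    }

ratios = ce_ratios(calculated_keff, experimental_keff)
for lib, r in ratios.items():
    worst = max(abs(v - 1.0) for v in r.values())
    print(f"{lib}: max |C/E - 1| = {worst:.4f}")
```

    A library performs well on a benchmark set when its C/E ratios cluster tightly around 1.0 across all assemblies.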

  7. Criticality benchmark comparisons leading to cross-section upgrades

    International Nuclear Information System (INIS)

    Alesso, H.P.; Annese, C.E.; Heinrichs, D.P.; Lloyd, W.R.; Lent, E.M.

    1993-01-01

    For several years, criticality benchmark calculations have been performed with COG. COG is a point-wise Monte Carlo code developed at Lawrence Livermore National Laboratory (LLNL). It solves the Boltzmann equation for the transport of neutrons and photons. The principal consideration in developing COG was that the resulting calculation would be as accurate as the point-wise cross-section data, since no physics computational approximations were used. The objective of this paper is to report on COG results for criticality benchmark experiments, in concert with MCNP comparisons, which are resulting in corrections and upgrades to the point-wise ENDL cross-section data libraries. Benchmarking discrepancies reported here indicated difficulties in the Evaluated Nuclear Data Livermore (ENDL) cross-sections for U-238 at thermal neutron energies. This led to a re-evaluation and selection of the appropriate cross-section values from the several cross-section sets available (ENDL, ENDF/B-V). Further cross-section upgrades are anticipated.

  8. Benchmarking Reference Desk Service in Academic Health Science Libraries: A Preliminary Survey.

    Science.gov (United States)

    Robbins, Kathryn; Daniels, Kathleen

    2001-01-01

    This preliminary study was designed to benchmark patron perceptions of reference desk services at academic health science libraries, using a standard questionnaire. Responses were compared to determine the library that provided the highest-quality service overall and along five service dimensions. All libraries were rated very favorably, but none…

  9. Preliminary analysis of the proposed BN-600 benchmark core

    International Nuclear Information System (INIS)

    John, T.M.

    2000-01-01

    The Indira Gandhi Centre for Atomic Research is actively involved in the design of fast power reactors in India. The core physics calculations are performed by computer codes that are developed in-house, or by codes obtained from other laboratories and suitably modified to meet the computational requirements. The basic philosophy of the core physics calculations is to use diffusion theory codes with 25-group nuclear cross sections. Parameters that are very sensitive to core leakage, such as the power distribution at the core-blanket interface, are calculated using transport theory codes under the DSN approximation. All these codes use the finite difference approximation as the method to treat the spatial variation of the neutron flux. Criticality problems with geometries too irregular to be represented by the conventional codes are solved using Monte Carlo methods. These codes and methods have been validated by the analysis of various critical assemblies and calculational benchmarks. The reactor core design procedure at IGCAR consists of: two- and three-dimensional diffusion theory calculations (codes ALCIALMI and 3DB); auxiliary calculations (neutron balance, power distributions, etc., done by codes developed in-house); transport theory corrections from two-dimensional transport calculations (DOT); irregular geometries treated by the Monte Carlo method (KENO); and the cross-section data library CV2M (25 group)

  10. Benchmarking

    OpenAIRE

    Beretta Sergio; Dossi Andrea; Grove Hugh

    2000-01-01

    Due to their particular nature, benchmarking methodologies tend to exceed the boundaries of management techniques and to enter the territory of managerial culture, a culture that is also destined to break into the accounting area, not only strongly supporting the possibility of fixing targets and of measuring and comparing performance (an aspect that is already innovative and worthy of attention), but also questioning one of the principles (or taboos) of the accounting or...

  11. Actinides transmutation - a comparison of results for PWR benchmark

    International Nuclear Information System (INIS)

    Claro, Luiz H.

    2009-01-01

    The physical aspects involved in the Partitioning and Transmutation (P and T) of minor actinides (MA) and fission products (FP) generated by PWR reactors are of great interest in the nuclear industry. Moreover, the reduction in the storage of radioactive wastes is related to the acceptability of nuclear electric power. Among the several concepts for partitioning and transmutation suggested in the literature, one involves PWR reactors burning fuel containing plutonium and minor actinides reprocessed from the UO2 used in previous stages. This work presents the results of a P and T benchmark calculation carried out with the WIMSD5B program, using a new cross-section library generated from ENDF/B-VII, and compares them with results published in the literature from other calculations. The comparison used a benchmark transmutation concept based on a typical PWR cell; the analyzed results were k∞ and the atomic densities of the isotopes Np-239, Pu-241, Pu-242, and Am-242m as functions of burnup, considering a discharge burnup of 50 GWd/tHM. (author)
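
    The quantity tracked in this benchmark, an isotope's atomic density as a function of burnup, follows a production/removal balance. The sketch below shows the simplest such balance for a single nuclide under a constant one-group flux; it is an illustration of the general idea only, not the WIMSD5B depletion model, and all parameter values are arbitrary.

```python
import math

# Illustrative sketch (not from the paper): a nuclide produced at a constant
# rate P and removed by neutron absorption at an effective rate
# lam = sigma_a * phi obeys dN/dt = P - lam * N, with closed-form solution
# N(t) = N_eq + (N0 - N_eq) * exp(-lam * t), where N_eq = P / lam.

def atomic_density(n0, production, removal_rate, t):
    """Atomic density N(t) under constant production and first-order removal."""
    n_eq = production / removal_rate     # equilibrium density P / lam
    return n_eq + (n0 - n_eq) * math.exp(-removal_rate * t)
```

    At t = 0 the function returns the initial density n0; at long irradiation times it approaches the equilibrium density P / lam, which is the qualitative behaviour benchmark participants compare against each other as burnup accumulates.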

  12. Theory comparison and numerical benchmarking on neoclassical toroidal viscosity torque

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Zhirui; Park, Jong-Kyu; Logan, Nikolas; Kim, Kimin; Menard, Jonathan E. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States); Liu, Yueqiang [Euratom/CCFE Association, Culham Science Centre, Abingdon OX14 3DB (United Kingdom)

    2014-04-15

    Systematic comparison and numerical benchmarking have been successfully carried out among three different approaches of neoclassical toroidal viscosity (NTV) theory and the corresponding codes: IPEC-PENT is developed based on the combined NTV theory but without geometric simplifications [Park et al., Phys. Rev. Lett. 102, 065002 (2009)]; MARS-Q includes smoothly connected NTV formula [Shaing et al., Nucl. Fusion 50, 025022 (2010)] based on Shaing's analytic formulation in various collisionality regimes; MARS-K, originally computing the drift kinetic energy, is upgraded to compute the NTV torque based on the equivalence between drift kinetic energy and NTV torque [J.-K. Park, Phys. Plasma 18, 110702 (2011)]. The derivation and numerical results both indicate that the imaginary part of drift kinetic energy computed by MARS-K is equivalent to the NTV torque in IPEC-PENT. In the benchmark of precession resonance between MARS-Q and MARS-K/IPEC-PENT, the agreement and correlation between the connected NTV formula and the combined NTV theory in different collisionality regimes are shown for the first time. Additionally, both IPEC-PENT and MARS-K indicate the importance of the bounce harmonic resonance which can greatly enhance the NTV torque when E×B drift frequency reaches the bounce resonance condition.

  13. Benchmarking comparison and validation of MCNP photon interaction data

    Directory of Open Access Journals (Sweden)

    Colling Bethany

    2017-01-01

    The objective of the research was to test available photoatomic data libraries for fusion relevant applications, comparing against experimental and computational neutronics benchmarks. Photon flux and heating were compared using the photon interaction data libraries (mcplib 04p, 05t, 84p and 12p). Suitable benchmark experiments (iron and water) were selected from the SINBAD database and analysed to compare experimental values with MCNP calculations using mcplib 04p, 84p and 12p. In both the computational and experimental comparisons, the majority of results with the 04p, 84p and 12p photon data libraries were within 1σ of the mean MCNP statistical uncertainty. Larger differences were observed when comparing computational results with the 05t test photon library. The Doppler broadening sampling bug in MCNP-5 is shown to be corrected for fusion relevant problems through use of the 84p photon data library. The recommended libraries for fusion neutronics are 84p (or 04p) with MCNP6 and 84p if using MCNP-5.

  14. Benchmarking comparison and validation of MCNP photon interaction data

    Science.gov (United States)

    Colling, Bethany; Kodeli, I.; Lilley, S.; Packer, L. W.

    2017-09-01

    The objective of the research was to test available photoatomic data libraries for fusion relevant applications, comparing against experimental and computational neutronics benchmarks. Photon flux and heating were compared using the photon interaction data libraries (mcplib 04p, 05t, 84p and 12p). Suitable benchmark experiments (iron and water) were selected from the SINBAD database and analysed to compare experimental values with MCNP calculations using mcplib 04p, 84p and 12p. In both the computational and experimental comparisons, the majority of results with the 04p, 84p and 12p photon data libraries were within 1σ of the mean MCNP statistical uncertainty. Larger differences were observed when comparing computational results with the 05t test photon library. The Doppler broadening sampling bug in MCNP-5 is shown to be corrected for fusion relevant problems through use of the 84p photon data library. The recommended libraries for fusion neutronics are 84p (or 04p) with MCNP6 and 84p if using MCNP-5.
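
    The "within 1σ" criterion used in this abstract is the standard agreement test for two Monte Carlo results. The sketch below shows that criterion in its usual form, combining the two statistical uncertainties in quadrature; it is an illustration, not the authors' actual analysis script, and the example numbers are invented.

```python
# Agreement test for two Monte Carlo tallies: results are consistent at the
# one-sigma level if their difference is within the combined (quadrature sum)
# of their statistical uncertainties.

def within_one_sigma(value, ref, sigma_value, sigma_ref):
    combined = (sigma_value**2 + sigma_ref**2) ** 0.5
    return abs(value - ref) <= combined

# Example with made-up numbers: a 2% difference with 1.5% uncertainty on each side.
agrees = within_one_sigma(1.02, 1.00, 0.015, 0.015)
```

    With these numbers the combined uncertainty is about 0.021, so a 0.02 difference counts as agreement, which is the sense in which most 04p/84p/12p results in the study fall "within 1σ".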

  15. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    textabstractBenchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  16. Benchmark Problems of the Geothermal Technologies Office Code Comparison Study

    Energy Technology Data Exchange (ETDEWEB)

    White, Mark D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Podgorney, Robert [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kelkar, Sharad M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McClure, Mark W. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Danko, George [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ghassemi, Ahmad [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fu, Pengcheng [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bahrami, Davood [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Barbier, Charlotte [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Cheng, Qinglu [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Chiu, Kit-Kwan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Detournay, Christine [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elsworth, Derek [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fang, Yi [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Furtney, Jason K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gan, Quan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gao, Qian [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Guo, Bin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hao, Yue [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Horne, Roland N. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Huang, Kai [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Im, Kyungjae [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Norbeck, Jack [Pacific Northwest National Lab. 
(PNNL), Richland, WA (United States); Rutqvist, Jonny [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Safari, M. R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sesetty, Varahanaresh [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sonnenthal, Eric [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Tao, Qingfeng [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); White, Signe K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wong, Yang [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xia, Yidong [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-12-02

    A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office has sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study, benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. Study participants submitted solutions to problems for which their simulation tools were deemed capable or nearly capable. Some participating codes were originally developed for EGS applications whereas some others were designed for different applications but can simulate processes similar to those in EGS. Solution submissions from both were encouraged. In some cases, participants made small incremental changes to their numerical simulation codes to address specific elements of the problem, and in other cases participants submitted solutions with existing simulation tools, acknowledging the limitations of the code. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. 
The problems

  17. Aagesta-BR3 Decommissioning Cost. Comparison and Benchmarking Analysis

    International Nuclear Information System (INIS)

    Varley, Geoff

    2002-11-01

    equipment. The BR3 work packages described in this report add up to something like 83,000 labour hours plus about MSEK 13 of investments and consumables costs. At Swedish average team labour rates 83,000 hours would equate to about MSEK 52. Adding the investment cost of MSEK 13 gives a total of about MSEK 65. This of course is quite close to the Aagesta figure but it would be wrong to draw immediate, firm conclusions based on these data. Such a comparison should take into account, inter alia: The number and relative sizes of the equipment decontaminated and dismantled at Aagesta and BR3. The assumed productivity in the Aagesta estimate compared to the actual BR3 figures. The physical scale of the Aagesta reactor is somewhat larger than the BR3 reactor, so all other things being equal, one might expect the Aagesta decommissioning cost estimate to be higher than for BR3. Aagesta has better access overall, which should help to constrain costs. The productivity ratio for workers at BR3 on average was high - generally 80 per cent or more, so this is unlikely to be exceeded at Aagesta and might not be equalled, which would tend to push the Aagesta cost up relative to the BR3 situation. There is an additional question of the possible extra work performed at BR3 due to the R and D nature of the project. The BR3 data analysed has tried to strip away any such 'extra' work but nevertheless there may be some residual effect on the final numbers. Analysis and comparison of individual work packages has raised several conclusions, as follows: The constructed cost for Aagesta using BR3 benchmark data is encouragingly close to the Aagesta estimate value but it is not clear that the way of deriving the Aagesta estimate for decontamination was entirely rigorous. The reliability of the Aagesta estimate on these grounds therefore might reasonably be questioned. 
A significant discrepancy between the BR3 and Aagesta cases appears to exist in respect of the volumes of waste arising from the
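
    The cost roll-up quoted in this record reduces to simple arithmetic, sketched below as a check. The hourly rate is not stated in the abstract; it is merely the rate implied by dividing the quoted MSEK 52 labour cost by the 83,000 hours.

```python
# Check of the cost arithmetic quoted above, using only the record's figures.
labour_hours = 83_000
labour_cost_msek = 52.0      # 83,000 hours at Swedish average team labour rates
investments_msek = 13.0      # investments and consumables

# Implied average rate (an inference, not a figure stated in the report):
implied_rate_sek_per_hour = labour_cost_msek * 1e6 / labour_hours

total_msek = labour_cost_msek + investments_msek
```

    The total reproduces the MSEK 65 figure in the text, with an implied labour rate of roughly 630 SEK per hour.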

  18. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    Science.gov (United States)

    Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen

    2016-01-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  19. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    Science.gov (United States)

    Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.

    2016-12-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
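
    The data reorganization both records above describe, sharding a long time series so that nodes can work on chunks in parallel, can be sketched in a few lines. The chunking scheme and function names below are illustrative assumptions, not NASA's implementation; real systems would shard across cloud filesystems or databases such as Cassandra, Rasdaman or SciDB.

```python
# Illustrative map/reduce over a sharded time series.

def shard_time_series(series, n_shards):
    """Split a sequence of time steps into n_shards contiguous chunks."""
    k, r = divmod(len(series), n_shards)
    shards, start = [], 0
    for i in range(n_shards):
        end = start + k + (1 if i < r else 0)   # spread the remainder
        shards.append(series[start:end])
        start = end
    return shards

def parallel_mean(series, n_shards):
    """Mean via per-shard partial sums (map) combined at the end (reduce)."""
    partials = [(sum(s), len(s)) for s in shard_time_series(series, n_shards)]
    total, count = map(sum, zip(*partials))
    return total / count
```

    Each shard's partial sum could be computed on a separate node; only the small (sum, count) pairs need to travel, which is what makes long-time-series analyses tractable once the data is reorganized.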

  20. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we ...

  1. Aagesta-BR3 Decommissioning Cost. Comparison and Benchmarking Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Varley, Geoff [NAC International, Henley on Thames (United Kingdom)

    2002-11-01

    equipment. The BR3 work packages described in this report add up to something like 83,000 labour hours plus about MSEK 13 of investments and consumables costs. At Swedish average team labour rates 83,000 hours would equate to about MSEK 52. Adding the investment cost of MSEK 13 gives a total of about MSEK 65. This of course is quite close to the Aagesta figure but it would be wrong to draw immediate, firm conclusions based on these data. Such a comparison should take into account, inter alia: The number and relative sizes of the equipment decontaminated and dismantled at Aagesta and BR3. The assumed productivity in the Aagesta estimate compared to the actual BR3 figures. The physical scale of the Aagesta reactor is somewhat larger than the BR3 reactor, so all other things being equal, one might expect the Aagesta decommissioning cost estimate to be higher than for BR3. Aagesta has better access overall, which should help to constrain costs. The productivity ratio for workers at BR3 on average was high - generally 80 per cent or more, so this is unlikely to be exceeded at Aagesta and might not be equalled, which would tend to push the Aagesta cost up relative to the BR3 situation. There is an additional question of the possible extra work performed at BR3 due to the R and D nature of the project. The BR3 data analysed has tried to strip away any such 'extra' work but nevertheless there may be some residual effect on the final numbers. Analysis and comparison of individual work packages has raised several conclusions, as follows: The constructed cost for Aagesta using BR3 benchmark data is encouragingly close to the Aagesta estimate value but it is not clear that the way of deriving the Aagesta estimate for decontamination was entirely rigorous. The reliability of the Aagesta estimate on these grounds therefore might reasonably be questioned. A significant discrepancy between the BR3 and Aagesta cases appears to exist in respect of the volumes of waste

  2. PMLB: a large benchmark suite for machine learning evaluation and comparison.

    Science.gov (United States)

    Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H

    2017-01-01

    The selection, development, or comparison of machine learning methods in data mining can be a difficult task, depending on the target problem and the goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
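
    The meta-feature comparison this abstract describes amounts to computing simple summary statistics per dataset and comparing them across the suite. The sketch below computes a few such meta-features on tiny synthetic datasets; it does not reproduce PMLB's actual meta-feature set or API, and the toy data is invented for illustration.

```python
# Illustrative dataset meta-features of the kind compared across a benchmark
# suite: size, dimensionality, and class balance. Toy data only, not PMLB.

def meta_features(X, y):
    """Return a few simple meta-features for a labelled dataset."""
    n, d = len(X), len(X[0])
    counts = {}
    for label in y:
        counts[label] = counts.get(label, 0) + 1
    majority = max(counts.values()) / n      # class-imbalance indicator
    return {"n_samples": n, "n_features": d, "majority_class_fraction": majority}

toy_suite = {
    "dataset_a": ([[0, 1], [1, 0], [1, 1], [0, 0]], [0, 0, 0, 1]),
    "dataset_b": ([[1], [2], [3], [4]], [0, 1, 0, 1]),
}
for name, (X, y) in toy_suite.items():
    print(name, meta_features(X, y))
```

    Plotting or clustering datasets in this meta-feature space is what reveals the diversity gaps the authors report: if most benchmarks fall in one small region, algorithms are only being tested on one kind of problem.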

  3. PORTFOLIO COMPOSITION WITH MINIMUM VARIANCE: COMPARISON WITH MARKET BENCHMARKS

    Directory of Open Access Journals (Sweden)

    Daniel Menezes Cavalcante

    2016-07-01

    Portfolio optimization strategies are advocated as being able to allow the composition of stock portfolios that provide returns above market benchmarks. This study aims to determine whether, in fact, portfolios based on the minimum variance strategy, optimized by Modern Portfolio Theory, are able to achieve earnings above market benchmarks in Brazil. Time series of 36 securities traded on the BM&FBOVESPA were analyzed over a long period (1999-2012), with sample windows of 12, 36, 60 and 120 monthly observations. The results indicated that the minimum variance portfolio's performance is superior to the market benchmarks (CDI and IBOVESPA) in terms of return and risk-adjusted return, especially over medium- and long-term investment horizons.
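
    For two assets, the minimum-variance weights of Modern Portfolio Theory have a well-known closed form, sketched below. The variances and covariance used are illustrative placeholders, not estimates from the BM&FBOVESPA data in the study.

```python
# Two-asset minimum-variance portfolio (Modern Portfolio Theory):
#   w1 = (var2 - cov12) / (var1 + var2 - 2*cov12),  w2 = 1 - w1.

def min_variance_two_assets(var1, var2, cov12):
    denom = var1 + var2 - 2.0 * cov12
    w1 = (var2 - cov12) / denom
    return w1, 1.0 - w1

def portfolio_variance(w1, w2, var1, var2, cov12):
    return w1 * w1 * var1 + w2 * w2 * var2 + 2.0 * w1 * w2 * cov12

# Illustrative numbers: annualized variances 0.04 and 0.09, covariance 0.01.
w1, w2 = min_variance_two_assets(0.04, 0.09, 0.01)
mv = portfolio_variance(w1, w2, 0.04, 0.09, 0.01)
eq = portfolio_variance(0.5, 0.5, 0.04, 0.09, 0.01)   # naive 50/50 benchmark
```

    By construction the optimized portfolio's variance (mv) is no larger than that of the naive equal-weight mix (eq); the study's question is whether this risk reduction also translates into higher risk-adjusted returns than the CDI and IBOVESPA benchmarks out of sample.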

  4. Monte Carlo code criticality benchmark comparisons for waste packaging

    International Nuclear Information System (INIS)

    Alesso, H.P.; Annese, C.E.; Buck, R.M.; Pearson, J.S.; Lloyd, W.R.

    1992-07-01

    COG is a new point-wise Monte Carlo code being developed and tested at Lawrence Livermore National Laboratory (LLNL). It solves the Boltzmann equation for the transport of neutrons and photons. The objective of this paper is to report on COG results for criticality benchmark experiments both on a Cray mainframe and on an HP 9000 workstation. COG has recently been ported to workstations to improve its accessibility to a wider community of users. COG has some similarities to a number of other computer codes used in the shielding and criticality community. The recently introduced high-performance reduced instruction set (RISC) UNIX workstations provide computational power that approaches that of mainframes at a fraction of the cost. A version of COG is currently being developed for the Hewlett Packard 9000/730 computer with a UNIX operating system. Subsequent porting operations will move COG to SUN, DEC, and IBM workstations. In addition, a CAD system for preparation of the geometry input for COG is being developed. In July 1977, Babcock & Wilcox Co. (B&W) was awarded a contract to conduct a series of critical experiments that simulated close-packed storage of LWR-type fuel. These experiments provided data for benchmarking and validating calculational methods used in predicting the k-effective of nuclear fuel storage in close-packed, neutron-poisoned arrays. Low-enriched UO2 fuel pins in water-moderated lattices in fuel storage represent a challenging criticality calculation for Monte Carlo codes, particularly when the fuel pins extend out of the water. COG and KENO calculational results for these criticality benchmark experiments are presented.

  5. Preliminary uncertainty analysis of OECD/UAM benchmark for the TMI-1 reactor

    International Nuclear Information System (INIS)

    Cardoso, Fabiano S.; Faria, Rochkhudson B.; Silva, Lucas M.C.; Pereira, Claubia; Fortini, Angela

    2015-01-01

    Nowadays the demand from nuclear research centers for safety, regulation and best-estimate predictions provided with confidence bounds has been increasing. Studies have pointed out that present uncertainties in nuclear data should be significantly reduced to obtain the full benefit from advanced modeling and simulation initiatives. The major outcome of the NEA/OECD (UAM) workshop, held in Italy in 2006, was the preparation of a benchmark work program with steps (exercises) needed to define the uncertainty and modeling tasks. In that direction, this work was performed within the framework of UAM Exercise 1 (I-1), 'Cell Physics', to validate the study and to estimate the accuracy of the model. The objectives of this study were to make a preliminary analysis of criticality values of the TMI-1 PWR and of the biases between the multiplication factors obtained from two different nuclear codes. The range of the bias was obtained using the deterministic codes NEWT (New ESC-based Weighting Transport code), a two-dimensional transport module that uses AMPX-formatted cross sections processed by other SCALE modules, and WIMSD5 (Winfrith Improved Multi-Group Scheme). The WIMSD5 system consists of a simplified geometric representation of heterogeneous space zones that are coupled with each other and with the boundaries, while the properties of each space element are obtained from the Carlson DSN method or the collision probability method. (author)

  6. A comparison and benchmark of two electron cloud packages

    Energy Technology Data Exchange (ETDEWEB)

    Lebrun, Paul L.G.; Amundson, James F; Spentzouris, Panagiotis G; Veitzer, Seth A

    2012-01-01

    We present results from precision simulations of the electron cloud (EC) problem in the Fermilab Main Injector using two distinct codes: (i) POSINST, an F90 2D+ code, and (ii) VORPAL, a 2D/3D electrostatic and electromagnetic code used for self-consistent simulations of plasma and particle beam problems. A specific benchmark was designed to exercise the strengths of both codes that are relevant to the EC problem in the Main Injector. As the differences between results obtained from these two codes were bigger than the anticipated model uncertainties, a set of changes to the POSINST code was implemented; these changes are documented in this note. The new version of POSINST gives EC densities that agree with those predicted by VORPAL to within approximately 20% in the beam region. The root cause of the remaining differences is most likely differences in the electrostatic Poisson solvers. From a software engineering perspective, these two codes are very different, and we comment on the pros and cons of both approaches. The design(s) for a new EC package are briefly discussed.

  7. A cross-benchmark comparison of 87 learning to rank methods

    NARCIS (Netherlands)

    Tax, N.; Bockting, S.; Hiemstra, D.

    2015-01-01

    Learning to rank is an increasingly important scientific field that comprises the use of machine learning for the ranking task. New learning to rank methods are generally evaluated on benchmark test collections. However, comparison of learning to rank methods based on evaluation results is hindered

  8. Preliminary Results for the OECD/NEA Time Dependent Benchmark using Rattlesnake, Rattlesnake-IQS and TDKENO

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, Mark D. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mausolff, Zander [Univ. of Florida, Gainesville, FL (United States); Weems, Zach [Univ. of Florida, Gainesville, FL (United States); Popp, Dustin [Univ. of Florida, Gainesville, FL (United States); Smith, Kristin [Univ. of Florida, Gainesville, FL (United States); Shriver, Forrest [Univ. of Florida, Gainesville, FL (United States); Goluoglu, Sedat [Univ. of Florida, Gainesville, FL (United States); Prince, Zachary [Texas A & M Univ., College Station, TX (United States); Ragusa, Jean [Texas A & M Univ., College Station, TX (United States)

    2016-08-01

    One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data have shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable in a preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuels performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validating the transient solution of Rattlesnake against other time-dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considers both two-dimensional (2D) and 3D configurations for a total of 26 different transients, all negative reactivity insertions, typically returning to the critical state after some time.

  9. Preliminary Results for the OECD/NEA Time Dependent Benchmark using Rattlesnake, Rattlesnake-IQS and TDKENO

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Mausolff, Zander; Weems, Zach; Popp, Dustin; Smith, Kristin; Shriver, Forrest; Goluoglu, Sedat; Prince, Zachary; Ragusa, Jean

    2016-01-01

    One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data have shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable in a preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuels performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validating the transient solution of Rattlesnake against other time-dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considers both two-dimensional (2D) and 3D configurations for a total of 26 different transients, all negative reactivity insertions, typically returning to the critical state after some time.

  10. A GFR benchmark comparison of transient analysis codes based on the ETDR concept

    International Nuclear Information System (INIS)

    Bubelis, E.; Coddington, P.; Castelliti, D.; Dor, I.; Fouillet, C.; Geus, E. de; Marshall, T.D.; Van Rooijen, W.; Schikorr, M.; Stainsby, R.

    2007-01-01

    A GFR (Gas-cooled Fast Reactor) transient benchmark study was performed to investigate the ability of different code systems to calculate the transition in core heat removal from main-circuit forced flow to natural circulation cooling using the Decay Heat Removal (DHR) system. The benchmark is based on a main blower failure in the Experimental Technology Demonstration Reactor (ETDR) with reactor scram. The codes taking part in the benchmark are RELAP5, TRAC/AAA, CATHARE, SIM-ADS, MANTA and SPECTRA. For comparison purposes the benchmark was divided into several stages: the initial steady-state solution, the main blower flow run-down, the opening of the DHR loop and the transition to natural circulation, and finally the 'quasi' steady heat removal from the core by the DHR system. The results submitted by the participants showed that all the codes gave consistent results for all four stages of the benchmark. In the steady state the calculations revealed some differences in the clad and fuel temperatures, the core and main loop pressure drops, and the total helium mass inventory. Some disagreements were also observed in the helium and water flow rates in the DHR loop during the final natural circulation stage. Good agreement was observed for the total main blower flow rate and the helium temperature rise in the core, as well as for the helium inlet temperature into the core. In order to understand the reason for the differences in the initial 'blind' calculations, a second round of calculations was performed using a more precise set of boundary conditions.

  11. Comparison of 250 MHz R10K Origin 2000 and 400 MHz Origin 2000 Using NAS Parallel Benchmarks

    Science.gov (United States)

    Turney, Raymond D.; Thigpen, William W. (Technical Monitor)

    2001-01-01

    This report describes results of benchmark tests on Steger, a 250 MHz Origin 2000 system with R10K processors, currently installed at the NASA Ames National Advanced Supercomputing (NAS) facility. For comparison purposes, the tests were also run on Lomax, a 400 MHz Origin 2000 with R12K processors. The BT, LU, and SP application benchmarks in the NAS Parallel Benchmark Suite and the kernel benchmark FT were chosen to measure system performance. Having been written to measure performance on Computational Fluid Dynamics applications, these benchmarks are assumed appropriate to represent the NAS workload. Since the NAS facility runs both message-passing (MPI) and shared-memory, compiler-directive type codes, both MPI and OpenMP versions of the benchmarks were used. The MPI versions used were the latest official release of the NAS Parallel Benchmarks, version 2.3. The OpenMP versions used were PBN3b2, a beta version in the process of being released. NPB 2.3 and PBN3b2 are technically different benchmarks, and NPB results are not directly comparable to PBN results.

  12. Performance comparison of OpenCL and CUDA by benchmarking an optimized perspective backprojection

    Energy Technology Data Exchange (ETDEWEB)

    Swall, Stefan; Ritschl, Ludwig; Knaup, Michael; Kachelriess, Marc [Erlangen-Nuernberg Univ., Erlangen (Germany). Inst. of Medical Physics (IMP)

    2011-07-01

    The increase in performance of Graphics Processing Units (GPUs) and the development of dedicated software tools within the last decade make it possible to transfer performance-demanding computations from the Central Processing Unit (CPU) to the GPU and to speed up certain tasks by utilizing the massively parallel architecture of these devices. The Compute Unified Device Architecture (CUDA) developed by NVIDIA provides an easy and hence effective way to develop applications that target NVIDIA GPUs, and it has become one of the cardinal software tools for this purpose. Recently the Open Computing Language (OpenCL) became available, which is neither vendor-specific nor limited to GPUs. As the benefits of CUDA-based image reconstruction are well known, we aim to compare the performance achievable with CUDA against that of OpenCL by benchmarking the time required to perform a simple but computationally demanding task: the perspective backprojection. (orig.)

  13. Comparison of the results of the fifth dynamic AER benchmark-a benchmark for coupled thermohydraulic system/three-dimensional hexagonal kinetic core models

    International Nuclear Information System (INIS)

    Kliem, S.

    1998-01-01

    The fifth dynamic benchmark was defined at the seventh AER Symposium, held in Hoernitz, Germany, in 1997. It is the first benchmark for coupled thermohydraulic system/three-dimensional hexagonal neutron kinetic core models. In this benchmark the interaction between the components of a WWER-440 NPP and the reactor core has been investigated. The initiating event is a symmetrical break of the main steam header at the end of the first fuel cycle under hot shutdown conditions, with one control rod group stuck. This break causes an overcooling of the primary circuit. During this overcooling the scram reactivity is compensated and the scrammed reactor becomes recritical. The calculation was continued until the highly borated water from the high pressure injection system terminated the power excursion. Each participant used their own best-estimate nuclear cross-section data; only the initial subcriticality at the beginning of the transient was given. Solutions were received from the Kurchatov Institute, Russia, with the code BIPR8/ATHLET; VTT Energy, Finland, with HEXTRAN/SMABRE; NRI Rez, Czech Republic, with DYN3/ATHLET; KFKI Budapest, Hungary, with KIKO3D/ATHLET; and from FZR, Germany, with the code DYN3D/ATHLET. In this paper the results are compared. Besides the comparison of global results, the behaviour of several thermohydraulic and neutron kinetic parameters is presented to discuss the revealed differences between the solutions. (Authors)

  14. A computer code package for Monte Carlo photon-electron transport simulation: Comparisons with experimental benchmarks

    International Nuclear Information System (INIS)

    Popescu, Lucretiu M.

    2000-01-01

    A computer code package (PTSIM) for particle transport Monte Carlo simulation was developed using object-oriented design and programming techniques, yielding a flexible system for simulating coupled photon-electron transport and facilitating the development of efficient simulation applications. For photons, Compton and photoelectric effects, pair production and Rayleigh interactions are simulated; for electrons, a class II condensed history scheme was adopted, in which catastrophic interactions (Moeller electron-electron interaction, bremsstrahlung, etc.) are treated in detail, while all other interactions with reduced individual effect on the electron history are grouped together using the continuous slowing down approximation and energy straggling theories. Electron angular straggling is simulated using Moliere theory or a mixed model in which scatters at large angles are treated as distinct events. Comparisons with experimental benchmarks for electron transmission, bremsstrahlung emission energy and angular spectra, and dose calculations are presented.

  15. WLUP benchmarks

    International Nuclear Information System (INIS)

    Leszczynski, Francisco

    2002-01-01

    The IAEA-WIMS Library Update Project (WLUP) is in its final stage; the final library will be released in 2002. It is the result of research and development by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. The organization, naming conventions, contents and documentation of the WLUP benchmarks are presented, together with an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE for the analysis and plotting of results is described, with some examples. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)

  16. Preliminary topical report on comparison reactor disassembly calculations

    International Nuclear Information System (INIS)

    McLaughlin, T.P.

    1975-11-01

    Preliminary results of comparison disassembly calculations for a representative LMFBR model (2100-l voided core) and arbitrary accident conditions are described. The analytical methods employed were the computer programs FX2-POOL, PAD, and VENUS-II. The calculated fission energy depositions are in good agreement, as are measures of the destructive potential of the excursions, kinetic energy, and work. However, in some cases the resulting fuel temperatures are substantially divergent. Differences in the fission energy deposition appear to be attributable to residual inconsistencies in specifying the comparison cases. In contrast, the temperature discrepancies probably stem from basic differences in the energy partition models inherent in the codes. Although explanations of the discrepancies are being pursued, the preliminary results indicate that all three computational methods provide a consistent, global characterization of the contrived disassembly accident.

  17. Comparison of Processor Performance of SPECint2006 Benchmarks of some Intel Xeon Processors

    Directory of Open Access Journals (Sweden)

    Abdul Kareem PARCHUR

    2012-08-01

    High performance is a critical requirement for all microprocessor manufacturers. The present paper describes a comparison of the performance of two main Intel Xeon processor series (Type A: Intel Xeon X5260, X5460, E5450 and L5320; Type B: Intel Xeon X5140, 5130, 5120 and E5310). The microarchitecture of these processors is implemented on the basis of a new family of processors from Intel starting with the Pentium 4. These processors can provide a performance boost for many key application areas in the modern generation. The scaling of performance across the two series has been analyzed using the performance numbers of 12 CPU2006 integer benchmarks, numbers that exhibit significant differences in performance. The results and analysis can be used by performance engineers, scientists and developers to better understand performance scaling in modern-generation processors.
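SPEC results of the kind analyzed above are conventionally aggregated as the geometric mean of per-benchmark speed ratios (reference time divided by measured time). A minimal sketch of that aggregation; the ratio values are hypothetical, not the paper's measurements:

```python
import math

def spec_geomean(ratios):
    """SPEC-style aggregate score: geometric mean of per-benchmark
    (reference time / measured time) ratios."""
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# hypothetical per-benchmark ratios for 12 integer benchmarks on one processor
ratios = [14.2, 15.1, 13.8, 16.0, 12.9, 15.5, 14.7, 13.3, 15.9, 14.1, 13.6, 15.2]
score = spec_geomean(ratios)
```

The geometric mean is used rather than the arithmetic mean so that the aggregate is insensitive to the choice of reference machine: doubling the speed on one benchmark always changes the score by the same factor.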

  18. A comparison of recent results from HONDO III with the JSME nuclear shipping cask benchmark calculations

    International Nuclear Information System (INIS)

    Key, S.W.

    1985-01-01

    The results of two calculations related to the impact response of spent nuclear fuel shipping casks are compared to the benchmark results reported in a recent study by the Japan Society of Mechanical Engineers Subcommittee on Structural Analysis of Nuclear Shipping Casks. Two idealized impacts are considered. The first calculation utilizes a right circular cylinder of lead subjected to a 9.0 m free fall onto a rigid target, while the second calculation utilizes a stainless steel clad cylinder of lead subjected to the same impact conditions. For the first problem, four calculations from graphical results presented in the original study have been singled out for comparison with HONDO III. The results from DYNA3D, STEALTH, PISCES, and ABAQUS are reproduced. In the second problem, the results from four separate computer programs in the original study, ABAQUS, ANSYS, MARC, and PISCES, are used and compared with HONDO III. The current version of HONDO III contains a fully automated implementation of the explicit-explicit partitioning procedure for the central difference method time integration which results in a reduction of computational effort by a factor in excess of 5. The results reported here further support the conclusion of the original study that the explicit time integration schemes with automated time incrementation are effective and efficient techniques for computing the transient dynamic response of nuclear fuel shipping casks subject to impact loading. (orig.)

  19. Intrinsic Radiation Source Generation with the ISC Package: Data Comparisons and Benchmarking

    International Nuclear Information System (INIS)

    Solomon, Clell J. Jr.

    2012-01-01

    be obtained from the user guide [Solomon, 2012]. The remainder of this report presents a discussion of the databases available to LIBISC and MISC, a discussion of the models employed by LIBISC, a comparison of the thick-target bremsstrahlung model employed, a benchmark comparison to plutonium and depleted-uranium spheres, and a comparison of the available particle-emission databases.

  20. Comparison of the AMDAHL 470V/6 and the IBM 370/195 using benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Snider, D.R.; Midlock, J.L.; Hinds, A.R.; Engert, D.E.

    1976-03-01

    Six groups of jobs were run on the IBM 370/195 at the Applied Mathematics Division (AMD) of Argonne National Laboratory using the current production versions of OS/MVT 21.7 and ASP 3.1. The same jobs were then run on an AMDAHL 470V/6 at the AMDAHL manufacturing facilities in Sunnyvale, California, using the identical operating systems. Performances of the two machines are compared. Differences in the configurations were minimized. The memory size on each machine was the same, all software which had an impact on run times was the same, and the I/O configurations were as similar as possible. This allowed the comparison to be based on the relative performance of the two CPU's. As part of the studies preliminary to the acquisition of the IBM 195 in 1972, two of the groups of jobs had been run on a CDC 7600 by CDC personnel in Arden Hills, Minnesota, on an IBM 360/195 by IBM personnel in Poughkeepsie, New York, and on the AMD 360/50/75 production system in June, 1971. 6 figures, 9 tables.

  1. Benchmark of Space Charge Simulations and Comparison with Experimental Results for High Intensity, Low Energy Accelerators

    CERN Document Server

    Cousineau, Sarah M

    2005-01-01

    Space charge effects are a major contributor to beam halo and emittance growth leading to beam loss in high intensity, low energy accelerators. As future accelerators strive towards unprecedented levels of beam intensity and beam loss control, a more comprehensive understanding of space charge effects is required. A wealth of simulation tools has been developed for modeling beams in linacs and rings, and with the growing availability of high-speed computing systems, computationally expensive problems that were inconceivable a decade ago are now being handled with relative ease. This has opened the field to realistic simulations of space charge effects, including detailed benchmarks against experimental data. A great deal of effort is being focused in this direction, and several recent benchmark studies have produced remarkably successful results. This paper reviews the achievements in space charge benchmarking over the last few years and discusses the challenges that remain.

  2. Algorithm comparison and benchmarking using a parallel spectral transform shallow water model

    Energy Technology Data Exchange (ETDEWEB)

    Worley, P.H. [Oak Ridge National Lab., TN (United States); Foster, I.T.; Toonen, B. [Argonne National Lab., IL (United States)

    1995-04-01

    In recent years, a number of computer vendors have produced supercomputers based on a massively parallel processing (MPP) architecture. These computers have been shown to be competitive in performance with conventional vector supercomputers for some applications. As spectral weather and climate models are heavy users of vector supercomputers, it is interesting to determine how these models perform on MPPs, and which MPPs are best suited to the execution of spectral models. The benchmarking of MPPs is complicated by the fact that different algorithms may be more efficient on different architectures. Hence, a comprehensive benchmarking effort must answer two related questions: which algorithm is most efficient on each computer, and how do the most efficient algorithms compare across computers? In general, these are difficult questions to answer because of the high cost associated with implementing and evaluating a range of different parallel algorithms on each MPP platform.

  3. Comparison of different LMFBR primary containment codes applied to a Benchmark problem

    International Nuclear Information System (INIS)

    Benuzzi, A.

    1986-01-01

    The Cont Benchmark calculation exercise is a project sponsored by the Containment Loading and Response Group, a subgroup of the Safety Working Group of the Fast Reactor Coordinating Committee - CEC. A full-size, typical pool-type LMFBR undergoing a postulated Core Disruptive Accident (CDA) has been defined by Belgonucleaire-Brussels under a study contract financed by the CEC and has been submitted to seven containment code calculations. The results of these calculations are presented and discussed in this paper.

  4. Calculational benchmark comparisons for a low sodium void worth actinide burner core design

    International Nuclear Information System (INIS)

    Hill, R.N.; Kawashima, M.; Arie, K.; Suzuki, M.

    1992-01-01

    Recently, a number of low void worth core designs with non-conventional core geometries have been proposed. Since these designs lack a good experimental and computational database, benchmark calculations are useful for identifying possible biases in predictions of performance characteristics. In this paper, a simplified benchmark model of a metal-fueled, low void worth actinide burner design is detailed, and two independent neutronic performance evaluations are compared. Calculated performance characteristics are evaluated for three spatially uniform compositions (fresh uranium/plutonium, batch-averaged uranium/transuranic, and batch-averaged uranium/transuranic with fission products) and a regional depleted distribution obtained from a benchmark depletion calculation. For each core composition, the flooded and voided multiplication factors, power peaking factor, sodium void worth (and its components), flooded Doppler coefficient and control rod worth predictions are compared. In addition, the burnup swing, average discharge burnup, peak linear power, and fresh fuel enrichment are calculated for the depletion case. In general, remarkably good agreement is observed between the evaluations. The most significant difference in predicted performance characteristics is a 0.3-0.5% Δk/(kk') bias in the sodium void worth. Significant differences in the transmutation rate of higher actinides are also observed; however, these differences do not cause discrepancies in the performance predictions.
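The sodium void worth quoted above is a static reactivity difference between two core states, conventionally expressed as the change in k divided by the product of the two multiplication factors. A minimal sketch of that conversion, with hypothetical k values rather than the benchmark's results:

```python
def void_worth_percent(k_reference, k_voided):
    """Reactivity change between two static states in the
    Delta-k/(k*k') convention, expressed in percent."""
    return 100.0 * (k_voided - k_reference) / (k_voided * k_reference)

# hypothetical flooded and voided multiplication factors
worth = void_worth_percent(1.000, 1.004)  # ~0.4% Dk/(kk'), positive void worth
```

A bias of 0.3-0.5% between two evaluations in this unit therefore corresponds to a difference of roughly 0.003-0.005 in the predicted change of k for cores near critical.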

  5. R2/R0-WTR decommissioning cost. Comparison and benchmarking analysis

    International Nuclear Information System (INIS)

    Varley, Geoff; Rusch, Chris

    2001-10-01

    SKI charged NAC International with the task of determining whether or not the decommissioning cost estimates of R2/R0 (hereafter simply referred to as R2) and Aagesta research reactors are reasonable. The associated work was performed in two phases. The objective in Phase I was to make global comparisons of the R2 and Aagesta decommissioning estimates with the estimates/actual costs for the decommissioning of similar research reactors in other countries. This report presents the results of the Phase II investigations. Phase II focused on selected discrete work packages within the decommissioning program of the WTR reactor. To the extent possible a comparison of those tasks with estimates for the R2 reactor has been made, as a basis for providing an opinion on the reasonableness of the R2 estimate. The specific WTR packages include: reactor vessel and internals dismantling; biological shield dismantling; primary coolant piping dismantling; electrical equipment removal; waste packaging; transportation and disposal of radioactive concrete and reactor components; project management, licensing and engineering; and removal of ancillary facilities. The specific tasks were characterised and analysed in terms of fundamental parameters including: task definition; labour hours expended; labour cost; labour productivity; length of work week; working efficiency; working environment and impact on job execution; external costs (contract labour, materials and equipment); total cost; waste volumes; and waste packaging and transport costs. Based on such detailed raw data, normalised unit resources have been derived for selected parts of the decommissioning program, as a first step towards developing benchmarking data for D and D activities at research reactors. Several general conclusions emerged from the WTR decommissioning project. 
Site characterisation can confirm or negate major assumptions, quantify waste volumes, delineate obstacles to completing work, provide an understanding

  6. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; and provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  7. Comparison of three-dimensional ocean general circulation models on a benchmark problem

    International Nuclear Information System (INIS)

    Chartier, M.

    1990-12-01

    A French and an American ocean general circulation model for deep-sea disposal of radioactive wastes are compared on a benchmark test problem. Both models are three-dimensional. They solve the hydrostatic primitive equations of the ocean with two different finite difference techniques. Results show that the dynamics simulated by the two models are consistent. Several methods for starting a model from a known state are tested in the French model: the diagnostic method, the prognostic method, the acceleration of convergence and the robust-diagnostic method.

  8. Radcalc for Windows benchmark study: A comparison of software results with Rocky Flats hydrogen gas generation data

    International Nuclear Information System (INIS)

    MCFADDEN, J.G.

    1999-01-01

    Radcalc for Windows Version 2.01 is a user-friendly software program developed by Waste Management Federal Services, Inc., Northwest Operations for the U.S. Department of Energy (McFadden et al. 1998). It is used for transportation and packaging applications in the shipment of radioactive waste materials. Among its applications are the classification of waste per the U.S. Department of Transportation regulations, the calculation of decay heat and daughter products, and the calculation of the radiolytic production of hydrogen gas. The Radcalc program has been extensively tested and validated (Green et al. 1995, McFadden et al. 1998) by comparison of each Radcalc algorithm to hand calculations. An opportunity to benchmark Radcalc hydrogen gas generation calculations against experimental data arose when the Rocky Flats Environmental Technology Site (RFETS) Residue Stabilization Program collected hydrogen gas generation data to determine compliance with requirements for shipment of waste in the TRUPACT-II (Schierloh 1998). The residue/waste drums tested at RFETS contain contaminated, solid, inorganic materials in polyethylene bags. The contamination is predominantly due to plutonium and americium isotopes. The information provided by Schierloh (1998) of RFETS includes decay heat, hydrogen gas generation rates, calculated G_eff values, and waste material type, making the experimental data ideal for benchmarking Radcalc. The following sections discuss the RFETS data and the Radcalc cases modeled with the data. Results are tabulated and also provided graphically.
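The radiolytic hydrogen rate that such a benchmark checks follows directly from the definition of the G value (molecules of H2 produced per 100 eV of absorbed energy). The sketch below is a minimal unit conversion illustrating that relationship, not Radcalc's actual algorithm:

```python
EV_PER_J = 1.0 / 1.602176634e-19   # eV of energy deposition per joule
AVOGADRO = 6.02214076e23           # molecules per mole

def h2_rate_mol_per_s(g_eff, decay_heat_w):
    """Radiolytic H2 generation rate in mol/s.
    g_eff: effective G value, molecules of H2 per 100 eV absorbed.
    decay_heat_w: energy deposition rate in the waste matrix, in watts (J/s)."""
    molecules_per_s = (g_eff / 100.0) * decay_heat_w * EV_PER_J
    return molecules_per_s / AVOGADRO
```

For example, a drum with a hypothetical G_eff of 1.0 and 1 W of absorbed decay heat would generate on the order of 1e-7 mol of H2 per second.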

  9. Methods to stimulate national and sub-national benchmarking through international health system performance comparisons: a Canadian approach.

    Science.gov (United States)

    Veillard, Jeremy; Moses McKeag, Alexandra; Tipper, Brenda; Krylova, Olga; Reason, Ben

    2013-09-01

    This paper presents, discusses and evaluates methods used by the Canadian Institute for Health Information to present international health system performance comparisons in ways that facilitate their understanding by the public and health system policy-makers and can stimulate performance benchmarking. We used statistical techniques to normalize the results and present them on a standardized scale that facilitates interpretation. We compared results to the OECD average and to benchmarks. We also applied various data quality rules to ensure the validity of results. In order to evaluate the impact of the public release of these results, we used quantitative and qualitative methods and documented other types of impact. We were able to present results for performance indicators and dimensions at national and sub-national levels; develop performance profiles for each Canadian province; and show pan-Canadian performance patterns for specific performance indicators. The results attracted significant media attention at the national level and reactions from various stakeholders. Other impacts such as requests for additional analysis and improvement in data timeliness were observed. The methods used seemed attractive to various audiences in the Canadian context and achieved the objectives originally defined. These methods could be refined and applied in different contexts. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
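
The normalization described above - presenting indicator results on a standardized scale relative to a peer-group average such as the OECD mean - can be sketched as a z-score transformation. The centre-at-50, 10-points-per-standard-deviation scale below is an illustrative assumption, not CIHI's actual method.

```python
from statistics import mean, stdev

def standardize(peer_values, country_value):
    """Place one country's indicator result on a standardized scale
    centred at 50 (the peer-group average), with 10 points per sample
    standard deviation. Assumes higher raw values are better."""
    mu, sigma = mean(peer_values), stdev(peer_values)
    return 50 + 10 * (country_value - mu) / sigma

# Hypothetical indicator results for a peer group of five countries
oecd_values = [71.0, 74.5, 78.0, 80.5, 83.0]
score = standardize(oecd_values, 80.5)  # above the peer average -> score > 50
```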

  10. Preliminary evaluation of factors associated with premature trial closure and feasibility of accrual benchmarks in phase III oncology trials.

    Science.gov (United States)

    Schroen, Anneke T; Petroni, Gina R; Wang, Hongkun; Gray, Robert; Wang, Xiaofei F; Cronin, Walter; Sargent, Daniel J; Benedetti, Jacqueline; Wickerham, Donald L; Djulbegovic, Benjamin; Slingluff, Craig L

    2010-08-01

    A major challenge for randomized phase III oncology trials is the frequently low rate of patient enrollment, resulting in high rates of premature closure due to insufficient accrual. We conducted a pilot study to determine the extent of trial closure due to poor accrual, the feasibility of identifying trial factors associated with sufficient accrual, the impact of redesign strategies on trial accrual, and accrual benchmarks designating high failure risk in the clinical trials cooperative group (CTCG) setting. A subset of phase III trials opened by five CTCGs between August 1991 and March 2004 was evaluated. Design elements, experimental agents, redesign strategies, and pretrial accrual assessment supporting accrual predictions were abstracted from CTCG documents. Percent actual/predicted accrual rate averaged per month was calculated. Trials were categorized as having sufficient or insufficient accrual based on the reason for trial termination. Analyses included univariate and bivariate summaries to identify potential trial factors associated with accrual sufficiency. Among 40 trials from one CTCG, 21 (52.5%) trials closed due to insufficient accrual. In 82 trials from five CTCGs, therapeutic trials accrued sufficiently more often than nontherapeutic trials (59% vs 27%, p = 0.05). Trials including pretrial accrual assessment more often achieved sufficient accrual than those without (67% vs 47%, p = 0.08). Fewer exclusion criteria, shorter consent forms, other CTCG participation, and trial design simplicity were not associated with achieving sufficient accrual. Trials accruing at a rate much lower than predicted were consistently closed due to insufficient accrual. This trial subset under-represents certain experimental modalities. The data sources do not allow accounting for all factors potentially related to accrual success. Trial closure due to insufficient accrual is common. Certain trial design factors appear associated with attaining sufficient accrual.
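
The key metric above - percent of actual versus predicted accrual rate, averaged per month - can be sketched as follows. The patient counts and the predicted rate are hypothetical illustrations, not data from the study.

```python
def percent_predicted_accrual(actual_by_month, predicted_rate):
    """Average monthly accrual as a percentage of the predicted
    monthly accrual rate."""
    monthly_pct = [100.0 * a / predicted_rate for a in actual_by_month]
    return sum(monthly_pct) / len(monthly_pct)

# Hypothetical trial: predicted 10 patients/month, observed counts below
pct = percent_predicted_accrual([4, 6, 5, 7], predicted_rate=10.0)
print(f"{pct:.0f}% of predicted accrual")  # prints 55% of predicted accrual
```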

  11. Comparison of typical inelastic analysis predictions with benchmark problem experimental results

    International Nuclear Information System (INIS)

    Clinard, J.A.; Corum, J.M.; Sartory, W.K.

    1975-01-01

    The results of exemplary inelastic analyses are presented for a series of experimental benchmark problems. Consistent analytical procedures and constitutive relations were used in each of the analyses, and published material behavior data were used in all cases. Two finite-element inelastic computer programs were employed. These programs implement the analysis procedures and constitutive equations for Type 304 stainless steel that are currently used in many analyses of elevated-temperature nuclear reactor system components. The analysis procedures and constitutive relations are briefly discussed, and representative analytical results are presented and compared to the test data. The results that are presented demonstrate the feasibility of performing inelastic analyses, and they are indicative of the general level of agreement that the analyst might expect when using conventional inelastic analysis procedures. (U.S.)

  12. Proposal of an innovative benchmark for comparison of the performance of contactless digitizers

    Science.gov (United States)

    Iuliano, Luca; Minetola, Paolo; Salmi, Alessandro

    2010-10-01

    Thanks to the improving performance of 3D optical scanners in terms of accuracy and repeatability, reverse engineering applications have extended from CAD model design or reconstruction to quality control. Today, contactless digitizing devices constitute a good alternative to coordinate measuring machines (CMMs) for the inspection of certain parts. The German guideline VDI/VDE 2634 is the only reference for evaluating whether 3D optical measuring systems comply with declared or required performance specifications. Nevertheless, it is difficult to compare the performance of different scanners with reference to this guideline. An adequate novel benchmark is therefore proposed in this paper: focusing on the inspection of production tools (moulds), the innovative test piece was designed using common geometries and free-form surfaces. The reference part is intended to be employed for evaluating the performance of several contactless digitizing devices in computer-aided inspection, considering dimensional and geometrical tolerances as well as other quantitative and qualitative criteria.

  13. Proposal of an innovative benchmark for comparison of the performance of contactless digitizers

    International Nuclear Information System (INIS)

    Iuliano, Luca; Minetola, Paolo; Salmi, Alessandro

    2010-01-01

    Thanks to the improving performance of 3D optical scanners in terms of accuracy and repeatability, reverse engineering applications have extended from CAD model design or reconstruction to quality control. Today, contactless digitizing devices constitute a good alternative to coordinate measuring machines (CMMs) for the inspection of certain parts. The German guideline VDI/VDE 2634 is the only reference for evaluating whether 3D optical measuring systems comply with declared or required performance specifications. Nevertheless, it is difficult to compare the performance of different scanners with reference to this guideline. An adequate novel benchmark is therefore proposed in this paper: focusing on the inspection of production tools (moulds), the innovative test piece was designed using common geometries and free-form surfaces. The reference part is intended to be employed for evaluating the performance of several contactless digitizing devices in computer-aided inspection, considering dimensional and geometrical tolerances as well as other quantitative and qualitative criteria.

  14. Comparison of typical inelastic analysis predictions with benchmark problem experimental results

    International Nuclear Information System (INIS)

    Clinard, J.A.; Corum, J.M.; Sartory, W.K.

    1975-01-01

    The results of exemplary inelastic analyses for experimental benchmark problems on reactor components are presented. Consistent analytical procedures and constitutive relations were used in each of the analyses, and the material behavior data presented in the Appendix were used in all cases. Two finite-element inelastic computer programs were employed. These programs implement the analysis procedures and constitutive equations for type 304 stainless steel that are currently used in many analyses of elevated-temperature nuclear reactor system components. The analysis procedures and constitutive relations are briefly discussed, and representative analytical results are presented and compared to the test data. The results that are presented demonstrate the feasibility of performing inelastic analyses for the types of problems discussed, and they are indicative of the general level of agreement that the analyst might expect when using conventional inelastic analysis procedures. (U.S.)

  15. The InterFrost benchmark of Thermo-Hydraulic codes for cold regions hydrology - first inter-comparison results

    Science.gov (United States)

    Grenier, Christophe; Roux, Nicolas; Anbergen, Hauke; Collier, Nathaniel; Costard, Francois; Ferry, Michel; Frampton, Andrew; Frederick, Jennifer; Holmen, Johan; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Orgogozo, Laurent; Rivière, Agnès; Rühaak, Wolfram; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik

    2015-04-01

    The impacts of climate change in boreal regions have received considerable attention recently due to the warming trends experienced in recent decades, which are expected to intensify in the future. Large portions of these regions, corresponding to permafrost areas, are covered by water bodies (lakes, rivers) that interact with the surrounding permafrost. For example, the thermal state of the surrounding soil influences the energy and water budget of the surface water bodies. Also, these water bodies generate taliks (unfrozen zones below) that disturb the thermal regime of the permafrost and may play a key role in the context of climate change. Recent field studies and modeling exercises indicate that a fully coupled 2D or 3D Thermo-Hydraulic (TH) approach is required to understand and model the past and future evolution of landscapes, rivers, lakes and associated groundwater systems in a changing climate. However, there is presently a paucity of 3D numerical studies of permafrost thaw and associated hydrological changes, which can be partly attributed to the difficulty of verifying multi-dimensional results produced by numerical models. Numerical approaches can only be validated against analytical solutions for a purely thermal 1D equation with phase change (e.g. Neumann, Lunardini). When it comes to the coupled TH system (coupling two highly non-linear equations), the only possible approach is to compare the results from different codes on provided test cases and/or to validate against controlled experiments. Such inter-code comparisons can drive discussions on how to improve code performance. A benchmark exercise was initiated in 2014 with a kick-off meeting in Paris in November. Participants from the USA, Canada, Germany, Sweden and France convened, representing altogether 13 simulation codes. The benchmark exercises consist of several test cases inspired by the existing literature (e.g. McKenzie et al., 2007) as well as new ones.
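
The 1D analytical reference mentioned above (the Neumann solution to the phase-change problem) can be sketched in its simplest one-phase form: the thaw front advances as s(t) = 2*lambda*sqrt(alpha*t), where lambda solves a transcendental equation in the Stefan number. This is a textbook reduction offered for illustration, not one of the InterFrost test cases.

```python
import math

def stefan_lambda(ste, lo=1e-9, hi=5.0, tol=1e-12):
    """Solve lambda * exp(lambda^2) * erf(lambda) = Ste / sqrt(pi)
    by bisection (one-phase Stefan problem: melting driven by a
    fixed wall temperature, Ste = c * dT / L)."""
    target = ste / math.sqrt(math.pi)
    f = lambda x: x * math.exp(x * x) * math.erf(x) - target
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def front_position(t, alpha, ste):
    """Phase-change front position s(t) = 2 * lambda * sqrt(alpha * t)."""
    return 2.0 * stefan_lambda(ste) * math.sqrt(alpha * t)

lam = stefan_lambda(0.1)  # Ste = 0.1 -> lambda close to 0.220
```

The front position grows with the square root of time, which gives a convenient closed-form curve for verifying a numerical phase-change solver.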

  16. Additive Manufacturing and Casting Technology Comparison: Mechanical Properties, Productivity and Cost Benchmark

    Science.gov (United States)

    Vevers, A.; Kromanis, A.; Gerins, E.; Ozolins, J.

    2018-04-01

    Casting is one of the oldest production technologies in the world, but in recent years metal additive manufacturing, also known as metal 3D printing, has been evolving rapidly. Both technologies can produce parts with internal holes, and at first glance surface roughness is similar for both, which means that parts have to be machined wherever a precise fit is necessary. Benchmark tests have been made to find out whether parts produced by metal additive manufacturing can replace parts produced by casting. Most of the comparative tests have been made with the GJS-400-15 grade, one of the most popular cast iron grades. To compare mechanical properties, samples were produced using additive manufacturing and tested for tensile strength, hardness, surface roughness and microstructure, and the results were compared with samples produced by casting. In addition, both technologies were compared in terms of production time and production costs to see whether additive manufacturing is competitive with casting. The original paper was written in Latvian as part of a Master's Thesis within the production technology study programme at Riga Technical University.

  17. Simulation of guided-wave ultrasound propagation in composite laminates: Benchmark comparisons of numerical codes and experiment.

    Science.gov (United States)

    Leckey, Cara A C; Wheeler, Kevin R; Hafiychuk, Vasyl N; Hafiychuk, Halyna; Timuçin, Doğan A

    2018-03-01

    Ultrasonic wave methods constitute the leading physical mechanism for nondestructive evaluation (NDE) and structural health monitoring (SHM) of solid composite materials, such as carbon fiber reinforced polymer (CFRP) laminates. Computational models of ultrasonic wave excitation, propagation, and scattering in CFRP composites can be extremely valuable in designing practicable NDE and SHM hardware, software, and methodologies that accomplish the desired accuracy, reliability, efficiency, and coverage. The development and application of ultrasonic simulation approaches for composite materials is an active area of research in the field of NDE. This paper presents comparisons of guided wave simulations for CFRP composites implemented using four different simulation codes: the commercial finite element modeling (FEM) packages ABAQUS, ANSYS, and COMSOL, and a custom code executing the Elastodynamic Finite Integration Technique (EFIT). Benchmark comparisons are made between the simulation tools and both experimental laser Doppler vibrometry data and theoretical dispersion curves. A pristine case and a delamination-type case (Teflon insert in the experimental specimen) are studied. A summary is given of the accuracy of simulation results and the respective computational performance of the four different simulation tools. Published by Elsevier B.V.

  18. Comparison of Processor Performance of SPECint2006 Benchmarks of some Intel Xeon Processors

    OpenAIRE

    Abdul Kareem PARCHUR; Ram Asaray SINGH

    2012-01-01

    High performance is a critical requirement for all microprocessor manufacturers. The present paper compares the performance of two main Intel Xeon processor series (Type A: Intel Xeon X5260, X5460, E5450 and L5320; Type B: Intel Xeon X5140, 5130, 5120 and E5310). The microarchitecture of these processors is based on a new family of processors from Intel starting with the Pentium 4 processor. These processors can provide a performance boost for many ke...

  19. On the importance of adjusting for distorting factors in benchmarking analysis, as illustrated by a cost comparison of the different forms of implementation of the EU Packaging Directive.

    Science.gov (United States)

    Baum, Heinz-Georg; Schuch, Dieter

    2017-12-01

    Benchmarking is a proven and widely used business tool for identifying best practice. To produce robust results, the objects of comparison used in benchmarking analysis need to be structurally comparable and distorting factors need to be eliminated. We focus on a specific example - a benchmark study commissioned by the European Commission's Directorate-General for Environment on the implementation of Extended Producer Responsibility (EPR) for packaging at the national level - to discuss potential distorting factors and take them into account in the calculation. The cost of compliance per inhabitant and year, which is used as the key cost efficiency indicator in the study, is adjusted to take account of seven factors. The results clearly show that differences in performance may play a role, but the (legal) implementation of EPR - which is highly heterogeneous across countries - is the single most important cost determinant and must be taken into account to avoid misinterpretation and false conclusions.

  20. Calculations of the IAEA-CRP-6 Benchmark Cases by Using the ABAQUS FE Model for a Comparison with the COPA Results

    International Nuclear Information System (INIS)

    Cho, Moon-Sung; Kim, Y. M.; Lee, Y. W.; Jeong, K. C.; Kim, Y. K.; Oh, S. C.

    2006-01-01

    The fundamental design for a gas-cooled reactor relies on an understanding of the behavior of coated particle fuel. KAERI, which has been carrying out the Korean VHTR (Very High Temperature modular gas cooled Reactor) Project since 2004, is developing a fuel performance analysis code for a VHTR named COPA (COated Particle fuel Analysis). COPA predicts temperatures, stresses, fission gas release and failure probabilities of coated particle fuel in normal operating conditions. Validation of COPA during its development is realized partly by participating in the benchmark section of the international CRP-6 program led by the IAEA, which provides comprehensive benchmark problems and analysis results obtained from the CRP-6 member countries. Apart from the validation effort through the CRP-6, a validation of COPA was attempted by comparing its benchmark results with the visco-elastic solutions obtained from ABAQUS code calculations for the same CRP-6 TRISO coated particle benchmark problems involving creep, swelling, and pressure. The study presents the calculation results for IAEA-CRP-6 benchmark cases 5 through 7 obtained with the ABAQUS FE model for comparison with the COPA results.

  1. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    Full Text Available The term benchmarking is encountered in the implementation of total quality management (TQM), because benchmarking is a tool for finding ideas and learning from other libraries. Benchmarking is a process of systematic and continuous measurement: measuring and comparing an organization's business processes in order to obtain information that can help the organization improve its performance.

  2. Benchmarking the Sandbox: Quantitative Comparisons of Numerical and Analogue Models of Brittle Wedge Dynamics (Invited)

    Science.gov (United States)

    Buiter, S.; Schreurs, G.; Geomod2008 Team

    2010-12-01

    When numerical and analogue models are used to investigate the evolution of deformation processes in crust and lithosphere, they face specific challenges related to, among others, large contrasts in material properties, the heterogeneous character of continental lithosphere, the presence of a free surface, the occurrence of large deformations including viscous flow and offset on shear zones, and the observation that several deformation mechanisms may be active simultaneously. These pose specific demands on numerical software and laboratory models. By combining the two techniques, we can utilize the strengths of each individual method and test the model-independence of our results. We can perhaps even consider our findings to be more robust if we find the same or similar results irrespective of the modeling method that was used. To assess the role of modeling method and to quantify the variability among models with identical setups, we have performed a direct comparison of results of 11 numerical codes and 15 analogue experiments. We present three experiments that describe shortening of brittle wedges and that resemble setups frequently used especially by analogue modelers. Our first experiment translates a non-accreting wedge with a stable surface slope. In agreement with critical wedge theory, all models maintain their surface slope and do not show internal deformation. This experiment serves as a reference that allows for testing against analytical solutions for taper angle, root-mean-square velocity and gravitational rate of work. The next two experiments investigate an unstable wedge, which deforms by inward translation of a mobile wall. The models accommodate shortening by formation of forward and backward shear zones. We compare surface slope, rate of dissipation of energy, root-mean-square velocity, and the location, dip angle and spacing of shear zones. All models show similar cross-sectional evolutions that demonstrate reproducibility to first order.
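
One of the scalar comparison metrics listed above, the root-mean-square velocity of the model domain, can be sketched as follows; the nodal velocity field here is an arbitrary illustration, not data from the GeoMod2008 experiments.

```python
import math

def rms_velocity(velocities):
    """Root-mean-square speed over all model nodes, where each
    velocity is a (vx, vy) tuple in consistent units."""
    squared_speeds = [vx * vx + vy * vy for vx, vy in velocities]
    return math.sqrt(sum(squared_speeds) / len(squared_speeds))

# Hypothetical nodal velocities (cm/h) from a shortening-wedge model
v = [(1.0, 0.0), (0.6, 0.8), (0.0, 0.5)]
print(f"v_rms = {rms_velocity(v):.3f} cm/h")  # prints v_rms = 0.866 cm/h
```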

  3. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency....

  4. RUNE benchmarks

    DEFF Research Database (Denmark)

    Peña, Alfredo

    This report contains the description of a number of benchmarks with the purpose of evaluating flow models for near-shore wind resource estimation. The benchmarks are designed based on the comprehensive database of observations that the RUNE coastal experiment established from onshore lidar...

  5. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms; efficiency and comprehensive monotonicity characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...... in order to obtain a unique selection...

  6. 2010 Criticality Accident Alarm System Benchmark Experiments At The CEA Valduc SILENE Facility

    International Nuclear Information System (INIS)

    Miller, Thomas Martin; Dunn, Michael E.; Wagner, John C.; McMahan, Kimberly L.; Authier, Nicolas; Jacquet, Xavier; Rousseau, Guillaume; Wolff, Herve; Piot, Jerome; Savanier, Laurence; Baclet, Nathalie; Lee, Yi-kang; Masse, Veronique; Trama, Jean-Christophe; Gagnier, Emmanuel; Naury, Sylvie; Lenain, Richard; Hunter, Richard; Kim, Soon; Dulik, George Michael; Reynolds, Kevin H.

    2011-01-01

    Several experiments were performed at the CEA Valduc SILENE reactor facility, which are intended to be published as evaluated benchmark experiments in the ICSBEP Handbook. These evaluated benchmarks will be useful for the verification and validation of radiation transport codes and evaluated nuclear data, particularly those used in the analysis of criticality accident alarm systems (CAASs). During these experiments, SILENE was operated in pulsed mode in order to be representative of a criticality accident, which is rare among shielding benchmarks. Measurements of the neutron flux were made with neutron activation foils, and measurements of photon doses were made with TLDs. Also unique to these experiments was the presence of several detectors used in actual CAASs, which allowed their behavior to be observed during an actual critical pulse. This paper presents the preliminary measurement data currently available from these experiments. Also presented are comparisons of preliminary computational results from SCALE and TRIPOLI-4 with the preliminary measurement data.

  7. What is the best practice for benchmark regulation of electricity distribution? Comparison of DEA, SFA and StoNED methods

    International Nuclear Information System (INIS)

    Kuosmanen, Timo; Saastamoinen, Antti; Sipiläinen, Timo

    2013-01-01

    Electricity distribution is a natural local monopoly. In many countries, the regulators of this sector apply frontier methods such as data envelopment analysis (DEA) or stochastic frontier analysis (SFA) to estimate the efficient cost of operation. In Finland, a new StoNED method was adopted in 2012. This paper compares DEA, SFA and StoNED in the context of regulating electricity distribution. Using data from Finland, we compare the impacts of methodological choices on cost efficiency estimates and acceptable cost. While the efficiency estimates are highly correlated, the cost targets reveal major differences. In addition, we examine the performance of the methods by Monte Carlo simulations. We calibrate the data generation process (DGP) to closely match the empirical data and the model specification of the regulator. We find that the StoNED estimator yields a root mean squared error (RMSE) of 4% with a sample size of 100. Precision improves as the sample size increases. The DEA estimator yields an RMSE of approximately 10%, but performance deteriorates as the sample size increases. The SFA estimator has an RMSE of 144%. The poor performance of SFA is due to the wrong functional form and multicollinearity. - Highlights: • We compare DEA, SFA and StoNED methods in the context of regulation of electricity distribution. • Both empirical comparisons and Monte Carlo simulations are presented. • Choice of benchmarking method has a significant economic impact on the regulatory outcomes. • StoNED yields the most precise results in the Monte Carlo simulations. • Five lessons concerning heterogeneity, noise, frontier, simulations, and implementation
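
The precision measure quoted above - RMSE of the cost-efficiency estimates against the efficiencies fixed in the Monte Carlo data-generating process - can be sketched as below; the estimate vectors are arbitrary illustrations, not the paper's simulation output.

```python
import math

def rmse(estimates, true_values):
    """Root mean squared error of efficiency estimates against the
    true efficiencies fixed in the data-generating process."""
    errors = [(e - t) ** 2 for e, t in zip(estimates, true_values)]
    return math.sqrt(sum(errors) / len(errors))

true_eff = [0.90, 0.80, 0.95, 0.85]   # efficiencies set by the DGP
estimated = [0.88, 0.83, 0.94, 0.86]  # hypothetical frontier estimates
print(f"RMSE = {rmse(estimated, true_eff):.3f}")  # prints RMSE = 0.019
```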

  8. Comparison of Standard Light Water Reactor Cross-Section Libraries using the United States Nuclear Regulatory Commission Boiling Water Reactor Benchmark Problem

    Directory of Open Access Journals (Sweden)

    Kulesza Joel A.

    2016-01-01

    Full Text Available This paper describes a comparison of contemporary and historical light water reactor shielding and pressure vessel dosimetry cross-section libraries for a boiling water reactor calculational benchmark problem. The calculational benchmark problem was developed at Brookhaven National Laboratory at the request of the U.S. Nuclear Regulatory Commission. The benchmark problem was originally evaluated by Brookhaven National Laboratory using the Oak Ridge National Laboratory discrete ordinates code DORT and the BUGLE-93 cross-section library. In this paper, the Westinghouse RAPTOR-M3G three-dimensional discrete ordinates code was used. A variety of cross-section libraries were used with RAPTOR-M3G, including the BUGLE-93, BUGLE-96, and BUGLE-B7 cross-section libraries developed at Oak Ridge National Laboratory and ALPAN-VII.0 developed at Westinghouse. In comparing the fast reaction rates calculated in the pressure vessel capsule using the four aforementioned cross-section libraries, for six dosimetry reaction rates, a maximum relative difference of 8% was observed. As such, it is concluded that the results calculated by RAPTOR-M3G are consistent with the benchmark and, further, that the different vintage BUGLE cross-section libraries investigated are largely self-consistent.
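
The library-to-library comparison above reduces to a maximum relative difference over the dosimetry reaction rates; a sketch of that reduction is below, with hypothetical reaction-rate values standing in for the calculated results.

```python
def max_relative_difference(rates_a, rates_b):
    """Largest |a - b| / b over paired reaction rates, as a percent.
    rates_b is taken as the reference library."""
    return max(abs(a - b) / b * 100.0 for a, b in zip(rates_a, rates_b))

# Hypothetical fast reaction rates (reactions/s/atom) from two libraries
lib1 = [1.02e-12, 3.40e-13, 7.85e-14]
lib2 = [1.00e-12, 3.52e-13, 7.80e-14]
diff = max_relative_difference(lib1, lib2)  # worst-case disagreement, ~3.4%
```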

  9. Preliminary Process Theory does not validate the Comparison Question Test: A comment on Palmatier and Rovner

    NARCIS (Netherlands)

    Ben-Shakar, G.; Gamer, M.; Iacono, W.; Meijer, E.; Verschuere, B.

    2015-01-01

    Palmatier and Rovner (2015) attempt to establish the construct validity of the Comparison Question Test (CQT) by citing extensive research ranging from modern neuroscience to memory and psychophysiology. In this comment we argue that merely citing studies on the preliminary process theory (PPT) of

  10. Summary report on the international comparison of NEACRP burnup benchmark calculations for high conversion light water reactor lattices

    International Nuclear Information System (INIS)

    Akie, Hiroshi; Ishiguro, Yukio; Takano, Hideki

    1988-10-01

    The results of the NEACRP HCLWR cell burnup benchmark calculations are summarized in this report. Fifteen organizations from eight countries participated in this benchmark and submitted twenty solutions. Large differences are still observed among the calculated values of void reactivities and conversion ratios. These differences are mainly caused by discrepancies in the reaction rates of U-238, Pu-239 and fission products. The physics problems related to these results are briefly investigated in the report. In the specialists' meeting on these benchmark calculations held in April 1988, it was recommended that continuous-energy Monte Carlo calculations be performed in order to obtain reference solutions for design codes. The conclusions from the specialists' meeting are also presented. (author)

  11. [Do you mean benchmarking?].

    Science.gov (United States)

    Bonnet, F; Solignac, S; Marty, J

    2008-03-01

    The purpose of benchmarking is to establish improvement processes by comparing activities to quality standards. The proposed methodology is illustrated by benchmark business cases performed in healthcare facilities, on items such as nosocomial infections or the organization of surgical facilities. Moreover, the authors have built a specific graphic tool, enhanced with balanced-scorecard figures and mappings, so that the comparison between different anesthesia and intensive care services willing to start an improvement program is easy and relevant. This ready-made application is all the more accurate when detailed tariffs of activities are implemented.

  12. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The appli......Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators....... The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  13. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The appli......Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators....... The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  14. CEC thermal-hydraulic benchmark exercise on Fiploc verification experiment F2 in Battelle model containment. Experimental phases 2, 3 and 4. Results of comparisons

    International Nuclear Information System (INIS)

    Fischer, K.; Schall, M.; Wolf, L.

    1993-01-01

The present final report comprises the major results of Phase II of the CEC thermal-hydraulic benchmark exercise on Fiploc verification experiment F2 in the Battelle model containment, experimental phases 2, 3 and 4, which was organized and sponsored by the Commission of the European Communities for the purpose of furthering the understanding and analysis of long-term thermal-hydraulic phenomena inside containments during and after severe core accidents. This benchmark exercise received high European attention, with eight organizations from six countries participating with eight computer codes during phase 2. Altogether, 18 sets of computer code results were supplied by the participants and constitute the basis for the comparisons with the experimental data contained in this publication. This reflects both the high technical interest in, as well as the complexity of, this CEC exercise. Major comparison results between computations and data are reported for all important quantities relevant to containment analyses during long-term transients. These comparisons comprise pressure, steam and air content, velocities and their directions, heat transfer coefficients and saturation ratios. Agreements and disagreements are discussed for each participating code/institution, conclusions drawn and recommendations provided. The phase 2 CEC benchmark exercise provided an up-to-date state-of-the-art review of the thermal-hydraulic capabilities of present computer codes for containment analyses. This exercise has shown that all of the participating codes can simulate the important global features of the experiment correctly, such as temperature stratification, pressure and leakage, heat transfer to structures, relative humidity, and collection of sump water. Several weaknesses of individual codes were identified, which may help to promote their development. As a general conclusion it may be said that while there is still a wide area of necessary extensions and improvements, the ...

  15. Preliminary Comparison of the Response of LHC Tertiary Collimators to Proton and Ion Beam Impacts

    CERN Document Server

    Cauchi, M; Bertarelli, A; Carra, F; Cerutti, F; Lari, L; Mollicone, P; Sammut, N

    2013-01-01

The CERN Large Hadron Collider is designed to bring into collision protons as well as heavy ions. Accidents involving impacts on collimators can happen for both species. The interaction of lead ions with matter differs from that of protons, making this scenario an interesting new case to study, as it can result in different damage aspects on the collimator. This paper presents a preliminary comparison of the response of collimators to proton and ion beam impacts.

  16. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

Benchmarking is used across National Health Service (NHS) services via various benchmarking programs. Clinical photography services do not have such a program in place and have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlighted valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  17. Solution of the 'MIDICORE' WWER-1000 core periphery power distribution benchmark by KARATE and MCNP

    International Nuclear Information System (INIS)

    Temesvari, E.; Hegyi, G.; Hordosy, G.; Maraczy, C.

    2011-01-01

The 'MIDICORE' WWER-1000 core periphery power distribution benchmark was proposed by Mr. Mikolas at the twentieth Symposium of AER in Finland in 2010. This MIDICORE benchmark is a two-dimensional calculation benchmark based on the WWER-1000 reactor core cold-state geometry, taking into account the explicit geometry of the radial reflector. The main task of the benchmark is to test the pin-by-pin power distribution in selected fuel assemblies at the periphery of the WWER-1000 core. In this paper we present our results (k_eff, integral fission power) calculated by MCNP and the KARATE code system at KFKI-AEKI, and the comparison to the preliminary reference Monte Carlo calculation results made by NRI, Rez. (Authors)

  18. Benchmark Comparison of Dual- and Quad-Core Processor Linux Clusters with Two Global Climate Modeling Workloads

    Science.gov (United States)

    McGalliard, James

    2008-01-01

This viewgraph presentation details the science and systems environments that the NASA High End Computing program serves. Included is a discussion of the workload involved in the processing for Global Climate Modeling. The Goddard Earth Observing System Model, Version 5 (GEOS-5) is a system of models integrated using the Earth System Modeling Framework (ESMF). The GEOS-5 system was used for the benchmark tests, and the results of the tests are shown and discussed. Tests were also run for the Cubed Sphere system; results for these tests are also shown.

  19. Benchmarking Reactor Systems Studies by Comparison of EU and Japanese System Code Results for Different DEMO Concepts

    Energy Technology Data Exchange (ETDEWEB)

    Kemp, R.; Ward, D.J., E-mail: richard.kemp@ccfe.ac.uk [EURATOM/CCFE Association, Culham Centre for Fusion Energy, Abingdon (United Kingdom); Nakamura, M.; Tobita, K. [Japan Atomic Energy Agency, Rokkasho (Japan); Federici, G. [EFDA Garching, Max Plank Institut fur Plasmaphysik, Garching (Germany)

    2012-09-15

    Full text: Recent systems studies work within the Broader Approach framework has focussed on benchmarking the EU systems code PROCESS against the Japanese code TPC for conceptual DEMO designs. This paper describes benchmarking work for a conservative, pulsed DEMO and an advanced, steady-state, high-bootstrap fraction DEMO. The resulting former machine is an R{sub 0} = 10 m, a = 2.5 m, {beta}{sub N} < 2.0 device with no enhancement in energy confinement over IPB98. The latter machine is smaller (R{sub 0} = 8 m, a = 2.7 m), with {beta}{sub N} = 3.0, enhanced confinement, and high bootstrap fraction f{sub BS} = 0.8. These options were chosen to test the codes across a wide range of parameter space. While generally in good agreement, some of the code outputs differ. In particular, differences have been identified in the impurity radiation models and flux swing calculations. The global effects of these differences are described and approaches to identifying the best models, including future experiments, are discussed. Results of varying some of the assumptions underlying the modelling are also presented, demonstrating the sensitivity of the solutions to technological limitations and providing guidance for where further research could be focussed. (author)

  20. Comparison of the updated solutions of the 6th dynamic AER Benchmark - main steam line break in a NPP with WWER-440

    International Nuclear Information System (INIS)

    Kliem, S.

    2003-01-01

The 6th dynamic AER Benchmark is used for the systematic validation of coupled 3D neutron kinetic/thermal hydraulic system codes. It was defined at the 10th AER Symposium. In this benchmark, a hypothetical double-ended break of one main steam line at full power in a WWER-440 plant is investigated. The main thermal hydraulic features are the consideration of incomplete coolant mixing in the lower and upper plenum of the reactor pressure vessel and an asymmetric operation of the feed water system. For the tuning of the different nuclear cross-section data used by the participants, an isothermal re-criticality temperature was defined. The paper gives an overview of the behaviour of the main thermal hydraulic and neutron kinetic parameters in the provided solutions. The differences between the updated solutions and the previous ones are described. Improvements in the modelling of the transient led to better agreement for a part of the results, while for another part the deviations increased. The sensitivity of the core power behaviour to the secondary side modelling is discussed in detail. (Authors)

  1. A comparison of two efficient nonlinear heat conduction methodologies using a two-dimensional time-dependent benchmark problem

    International Nuclear Information System (INIS)

    Wilson, G.L.; Rydin, R.A.; Orivuori, S.

    1988-01-01

Two highly efficient nonlinear time-dependent heat conduction methodologies, the nonlinear time-dependent nodal integral technique (NTDNT) and IVOHEAT, are compared using one- and two-dimensional time-dependent benchmark problems. The NTDNT is completely based on newly developed time-dependent nodal integral methods, whereas IVOHEAT is based on finite elements in space and Crank-Nicolson finite differences in time. IVOHEAT contains the geometric flexibility of the finite element approach, whereas the nodal integral method is constrained at present to Cartesian geometry. For test problems where both methods are equally applicable, the nodal integral method is approximately six times more efficient per dimension than IVOHEAT when a comparable overall accuracy is chosen. This translates to a factor of 200 for a three-dimensional problem having relatively homogeneous regions, and to a smaller advantage as the degree of heterogeneity increases.

  2. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in the literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  3. Preliminary results of the seventh three-dimensional AER dynamic benchmark problem calculation. Solution with DYN3D and RELAP5-3D codes

    International Nuclear Information System (INIS)

    Bencik, M.; Hadek, J.

    2011-01-01

The paper gives a brief survey of the seventh three-dimensional AER dynamic benchmark calculation results obtained with the codes DYN3D and RELAP5-3D at the Nuclear Research Institute Rez. This benchmark was defined at the twentieth AER Symposium in Hanasaari (Finland). It is focused on the investigation of transient behaviour in a WWER-440 nuclear power plant. Its initiating event is the opening of the main isolation valve and re-connection of the loop with its main circulation pump in operation. The WWER-440 plant is at the end of the first fuel cycle and in hot full power conditions. Stationary and burnup calculations were performed with the code DYN3D. The transient calculation was made with the system code RELAP5-3D. The two-group homogenized cross-section library HELGD05, created by the HELIOS code, was used for the generation of reactor core neutronic parameters. A detailed six-loop model of NPP Dukovany was adopted for the purposes of the seventh AER dynamic benchmark. The RELAP5-3D full core neutronic model was coupled with 49 core thermal-hydraulic channels and 8 reflector channels connected with the three-dimensional model of the reactor vessel. A detailed nodalization of the reactor downcomer, lower and upper plenum was used. Mixing in the lower and upper plenum was simulated. The first part of the paper contains a brief characterization of the RELAP5-3D system code and a short description of the NPP input deck and reactor core model. The second part shows the time dependencies of important global and local parameters. (Authors)

  4. Comparison of the PHISICS/RELAP5-3D Ring and Block Model Results for Phase I of the OECD MHTGR-350 Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom

    2014-04-01

The INL PHISICS code system consists of three modules providing improved core simulation capability: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been finalized, and as part of the code verification and validation program the exercises defined for Phase I of the OECD/NEA MHTGR-350 MW Benchmark were completed. This paper provides an overview of the MHTGR Benchmark and presents selected results of the three steady-state exercises 1-3 defined for Phase I. For Exercise 1, a stand-alone steady-state neutronics solution for an End of Equilibrium Cycle Modular High Temperature Reactor (MHTGR) was calculated with INSTANT, using the provided geometry, material descriptions, and detailed cross-section libraries. Exercise 2 required the modeling of a stand-alone thermal fluids solution. The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 combined the first two exercises in a coupled neutronics and thermal fluids solution, and the coupled code suite PHISICS/RELAP5-3D was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of the traditional RELAP5-3D “ring” model approach with a much more detailed model that includes kinetics feedback at the individual block level and thermal feedback on a triangular sub-mesh. The higher fidelity of the block model is illustrated with comparison results for the temperature, power density and flux distributions, and the typical under-predictions produced by the ring model approach are highlighted.

  5. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  6. A preliminary comparison of hydrodynamic approaches for flood inundation modeling of urban areas in Jakarta Ciliwung river basin

    Science.gov (United States)

    Rojali, Aditia; Budiaji, Abdul Somat; Pribadi, Yudhistira Satya; Fatria, Dita; Hadi, Tri Wahyu

    2017-07-01

This paper addresses numerical modeling approaches for flood inundation in urban areas. A decisive strategy for choosing between 1D, 2D or even hybrid 1D-2D models is important for optimizing flood inundation analyses. Finding a cost-effective yet robust and accurate model has been our priority and motivation in the absence of available high-performance computing facilities. The application of 1D, 1D/2D and full 2D modeling approaches to a river flood study in the Jakarta Ciliwung river basin, and a comparison of the approaches benchmarked for the inundation study, are presented. This study demonstrates the successful use of 1D/2D and 2D systems to model the Jakarta Ciliwung river basin in terms of both inundation results and computational cost. The findings provide an interesting comparison between the modeling approaches HEC-RAS 1D, 1D-2D and 2D, and ANUGA when benchmarked against the Manggarai water level measurement.

  7. Cognitive effects of two nutraceuticals Ginseng and Bacopa benchmarked against modafinil: a review and comparison of effect sizes.

    Science.gov (United States)

    Neale, Chris; Camfield, David; Reay, Jonathon; Stough, Con; Scholey, Andrew

    2013-03-01

    Over recent years there has been increasing research into both pharmaceutical and nutraceutical cognition enhancers. Here we aimed to calculate the effect sizes of positive cognitive effect of the pharmaceutical modafinil in order to benchmark the effect of two widely used nutraceuticals Ginseng and Bacopa (which have consistent acute and chronic cognitive effects, respectively). A search strategy was implemented to capture clinical studies into the neurocognitive effects of modafinil, Ginseng and Bacopa. Studies undertaken on healthy human subjects using a double-blind, placebo-controlled design were included. For each study where appropriate data were included, effect sizes (Cohen's d) were calculated for measures showing significant positive and negative effects of treatment over placebo. The highest effect sizes for cognitive outcomes were 0.77 for modafinil (visuospatial memory accuracy), 0.86 for Ginseng (simple reaction time) and 0.95 for Bacopa (delayed word recall). These data confirm that neurocognitive enhancement from well characterized nutraceuticals can produce cognition enhancing effects of similar magnitude to those from pharmaceutical interventions. Future research should compare these effects directly in clinical trials. © 2012 The Authors. British Journal of Clinical Pharmacology © 2012 The British Pharmacological Society.
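
The effect sizes quoted in this record (0.77, 0.86, 0.95) are Cohen's d values. For two independent samples, a common pooled-standard-deviation form of d is sketched below; this is the textbook formula, and the review's exact computation for within-subject or crossover designs may differ:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d for two independent samples using the pooled SD:
    d = (mean1 - mean2) / s_pooled, with s_pooled built from the
    sample variances weighted by degrees of freedom."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)   # sample variance
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    s_pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / s_pooled
```

By the usual convention, d ≈ 0.2 is a small effect, 0.5 medium and 0.8 large, which puts the reported nutraceutical effects in the medium-to-large range.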

  8. Cognitive effects of two nutraceuticals Ginseng and Bacopa benchmarked against modafinil: a review and comparison of effect sizes

    Science.gov (United States)

    Neale, Chris; Camfield, David; Reay, Jonathon; Stough, Con; Scholey, Andrew

    2013-01-01

    Over recent years there has been increasing research into both pharmaceutical and nutraceutical cognition enhancers. Here we aimed to calculate the effect sizes of positive cognitive effect of the pharmaceutical modafinil in order to benchmark the effect of two widely used nutraceuticals Ginseng and Bacopa (which have consistent acute and chronic cognitive effects, respectively). A search strategy was implemented to capture clinical studies into the neurocognitive effects of modafinil, Ginseng and Bacopa. Studies undertaken on healthy human subjects using a double‐blind, placebo‐controlled design were included. For each study where appropriate data were included, effect sizes (Cohen's d) were calculated for measures showing significant positive and negative effects of treatment over placebo. The highest effect sizes for cognitive outcomes were 0.77 for modafinil (visuospatial memory accuracy), 0.86 for Ginseng (simple reaction time) and 0.95 for Bacopa (delayed word recall). These data confirm that neurocognitive enhancement from well characterized nutraceuticals can produce cognition enhancing effects of similar magnitude to those from pharmaceutical interventions. Future research should compare these effects directly in clinical trials. PMID:23043278

  9. A comparison of global optimization algorithms with standard benchmark functions and real-world applications using Energy Plus

    Energy Technology Data Exchange (ETDEWEB)

    Kamph, Jerome Henri; Robinson, Darren; Wetter, Michael

    2009-09-01

    There is an increasing interest in the use of computer algorithms to identify combinations of parameters which optimise the energy performance of buildings. For such problems, the objective function can be multi-modal and needs to be approximated numerically using building energy simulation programs. As these programs contain iterative solution algorithms, they introduce discontinuities in the numerical approximation to the objective function. Metaheuristics often work well for such problems, but their convergence to a global optimum cannot be established formally. Moreover, different algorithms tend to be suited to particular classes of optimization problems. To shed light on this issue we compared the performance of two metaheuristics, the hybrid CMA-ES/HDE and the hybrid PSO/HJ, in minimizing standard benchmark functions and real-world building energy optimization problems of varying complexity. From this we find that the CMA-ES/HDE performs well on more complex objective functions, but that the PSO/HJ more consistently identifies the global minimum for simpler objective functions. Both identified similar values in the objective functions arising from energy simulations, but with different combinations of model parameters. This may suggest that the objective function is multi-modal. The algorithms also correctly identified some non-intuitive parameter combinations that were caused by a simplified control sequence of the building energy system that does not represent actual practice, further reinforcing their utility.
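
The standard-benchmark-function side of this methodology can be sketched with SciPy's differential evolution standing in for the hybrid CMA-ES/HDE and PSO/HJ metaheuristics of the paper (neither hybrid ships with SciPy); the Rosenbrock function is one classic benchmark:

```python
import numpy as np
from scipy.optimize import differential_evolution

def rosenbrock(x):
    """Classic optimization benchmark; global minimum of 0 at (1, ..., 1)."""
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

# Population-based, derivative-free metaheuristic run on 2-D Rosenbrock.
result = differential_evolution(rosenbrock, bounds=[(-5.0, 5.0)] * 2,
                                seed=42, tol=1e-10, maxiter=2000)
```

Derivative-free, population-based methods like this are attractive for building-energy objectives precisely because, as the abstract notes, the simulation-based objective function is approximated numerically and can be discontinuous.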

  10. Comparison benchmark between tokamak simulation code and TokSys for Chinese Fusion Engineering Test Reactor vertical displacement control design

    International Nuclear Information System (INIS)

    Qiu Qing-Lai; Xiao Bing-Jia; Guo Yong; Liu Lei; Wang Yue-Hang

    2017-01-01

Vertical displacement events (VDEs) are a big challenge both to existing tokamak devices and to those being designed. As a Chinese next-step tokamak, the Chinese Fusion Engineering Test Reactor (CFETR) has to pay attention to VDE studies with full-fledged numerical codes during its conceptual design. The tokamak simulation code (TSC) is a free-boundary, time-dependent, axisymmetric tokamak simulation code developed at PPPL, which advances the MHD equations describing the evolution of the plasma in a rectangular domain. The electromagnetic interactions between the surrounding conductor circuits and the plasma are solved self-consistently. The TokSys code is a generic modeling and simulation environment developed at GA. Its RZIP model treats the plasma as a fixed spatial distribution of currents which couple with the surrounding conductors through circuit equations. Both codes have been individually used for VDE studies on many tokamak devices, such as JT-60U, EAST, NSTX, DIII-D, and ITER. Considering the model differences, benchmark work is needed to establish whether they reproduce each other's results correctly. In this paper, the TSC and TokSys codes are used simultaneously for analyzing the CFETR vertical instability passive and active control designs. It is shown that, given the same inputs, the results from these two codes agree with each other. (paper)

  11. Controlling for race/ethnicity: a comparison of California commercial health plans CAHPS scores to NCBD benchmarks

    Directory of Open Access Journals (Sweden)

    Lopez Rebeca A

    2010-01-01

Background: Because California has higher managed care penetration and the race/ethnicity of Californians differs from the rest of the United States, we tested the hypothesis that California's lower health plan Consumer Assessment of Healthcare Providers and Systems (CAHPS®) survey results are attributable to the state's racial/ethnic composition. Methods: California CAHPS survey responses for commercial health plans were compared to national responses for five selected measures: three global ratings (doctor, health plan and health care) and two composite scores regarding doctor communication and staff courtesy, respect, and helpfulness. We used the 2005 National CAHPS 3.0 Benchmarking Database to assess patient experiences of care. Multiple stepwise logistic regression was used to test whether patient experience ratings based on CAHPS responses in California commercial health plans differed from all other states combined. Results: CAHPS patient experience responses in California were not significantly different from the rest of the nation after adjusting for age, general health rating, individual health plan, education, time in health plan, race/ethnicity, and gender. Both California and national patient experience scores varied by race/ethnicity. In both California and the rest of the nation, Blacks tended to be more satisfied, while Asians were less satisfied. Conclusions: California commercial health plan enrollees rate their experiences of care similarly to enrollees in the rest of the nation when seven different variables including race/ethnicity are considered. These findings support accounting for more than just age, gender and general health rating before comparing health plans from one state to another. Reporting on racial/ethnic disparities in member experiences of care could raise awareness and increase accountability for reducing these disparities.

  12. Comparison of the Predictive Performance and Interpretability of Random Forest and Linear Models on Benchmark Data Sets.

    Science.gov (United States)

    Marchese Robinson, Richard L; Palczewska, Anna; Palczewski, Jan; Kidley, Nathan

    2017-08-28

    The ability to interpret the predictions made by quantitative structure-activity relationships (QSARs) offers a number of advantages. While QSARs built using nonlinear modeling approaches, such as the popular Random Forest algorithm, might sometimes be more predictive than those built using linear modeling approaches, their predictions have been perceived as difficult to interpret. However, a growing number of approaches have been proposed for interpreting nonlinear QSAR models in general and Random Forest in particular. In the current work, we compare the performance of Random Forest to those of two widely used linear modeling approaches: linear Support Vector Machines (SVMs) (or Support Vector Regression (SVR)) and partial least-squares (PLS). We compare their performance in terms of their predictivity as well as the chemical interpretability of the predictions using novel scoring schemes for assessing heat map images of substructural contributions. We critically assess different approaches for interpreting Random Forest models as well as for obtaining predictions from the forest. We assess the models on a large number of widely employed public-domain benchmark data sets corresponding to regression and binary classification problems of relevance to hit identification and toxicology. We conclude that Random Forest typically yields comparable or possibly better predictive performance than the linear modeling approaches and that its predictions may also be interpreted in a chemically and biologically meaningful way. In contrast to earlier work looking at interpretation of nonlinear QSAR models, we directly compare two methodologically distinct approaches for interpreting Random Forest models. The approaches for interpreting Random Forest assessed in our article were implemented using open-source programs that we have made available to the community. These programs are the rfFC package ( https://r-forge.r-project.org/R/?group_id=1725 ) for the R statistical

  13. Benchmark of PENELOPE code for low-energy photon transport: dose comparisons with MCNP4 and EGS4

    International Nuclear Information System (INIS)

    Ye, Sung-Joon; Brezovich, Ivan A; Pareek, Prem; Naqvi, Shahid A

    2004-01-01

The expanding clinical use of low-energy photon-emitting 125I and 103Pd seeds in recent years has led to renewed interest in their dosimetric properties. Numerous papers pointed out that higher accuracy could be obtained in Monte Carlo simulations by utilizing newer libraries for the low-energy photon cross-sections, such as XCOM and EPDL97. The recently developed PENELOPE 2001 Monte Carlo code is user friendly and incorporates photon cross-section data from the EPDL97. The code has been verified for clinical dosimetry of high-energy electron and photon beams, but has not yet been tested at low energies. In the present work, we have benchmarked the PENELOPE code for 10-150 keV photons. We computed radial dose distributions from 0 to 10 cm in water at photon energies of 10-150 keV using both PENELOPE and MCNP4C with either DLC-146 or DLC-200 cross-section libraries, assuming a point source located at the centre of a 30 cm diameter and 20 cm length cylinder. Throughout the energy range of simulated photons (except for 10 keV), PENELOPE agreed within statistical uncertainties (at worst ±5%) with MCNP/DLC-146 in the entire region of 1-10 cm and with published EGS4 data up to 5 cm. The dose at 1 cm (or dose rate constant) of PENELOPE agreed with MCNP/DLC-146 and EGS4 data within approximately ±2% in the range of 20-150 keV, while MCNP/DLC-200 produced values up to 9% lower in the range of 20-100 keV than PENELOPE or the other codes. However, the differences among the four datasets became negligible above 100 keV
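
For orientation, the leading-order shape of such radial dose curves from an isotropic point source combines inverse-square geometry with exponential attenuation. This is a toy sketch only; real seed dosimetry, as computed by PENELOPE, MCNP or EGS4, also includes scatter build-up and spectrum effects, and both `mu_cm` and the normalization here are illustrative:

```python
import math

def primary_dose_rate(r_cm, mu_cm, c=1.0):
    """Toy primary-photon dose falloff from an isotropic point source
    in a uniform medium: inverse-square spreading times exponential
    attenuation, D(r) = c * exp(-mu * r) / r**2.

    r_cm: distance from the source in cm; mu_cm: linear attenuation
    coefficient in 1/cm; c: arbitrary normalization constant.
    """
    return c * math.exp(-mu_cm * r_cm) / r_cm ** 2
```

The steep falloff over the first few centimetres is why the benchmark above focuses on the 1-10 cm region and quotes the dose at 1 cm as the reference point.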

  14. Benchmark of PENELOPE code for low-energy photon transport: dose comparisons with MCNP4 and EGS4.

    Science.gov (United States)

    Ye, Sung-Joon; Brezovich, Ivan A; Pareek, Prem; Naqvi, Shahid A

    2004-02-07

    The expanding clinical use of low-energy photon emitting 125I and 103Pd seeds in recent years has led to renewed interest in their dosimetric properties. Numerous papers pointed out that higher accuracy could be obtained in Monte Carlo simulations by utilizing newer libraries for the low-energy photon cross-sections, such as XCOM and EPDL97. The recently developed PENELOPE 2001 Monte Carlo code is user friendly and incorporates photon cross-section data from the EPDL97. The code has been verified for clinical dosimetry of high-energy electron and photon beams, but has not yet been tested at low energies. In the present work, we have benchmarked the PENELOPE code for 10-150 keV photons. We computed radial dose distributions from 0 to 10 cm in water at photon energies of 10-150 keV using both PENELOPE and MCNP4C with either DLC-146 or DLC-200 cross-section libraries, assuming a point source located at the centre of a 30 cm diameter and 20 cm length cylinder. Throughout the energy range of simulated photons (except for 10 keV), PENELOPE agreed within statistical uncertainties (at worst +/- 5%) with MCNP/DLC-146 in the entire region of 1-10 cm and with published EGS4 data up to 5 cm. The dose at 1 cm (or dose rate constant) of PENELOPE agreed with MCNP/DLC-146 and EGS4 data within approximately +/- 2% in the range of 20-150 keV, while MCNP/DLC-200 produced values up to 9% lower in the range of 20-100 keV than PENELOPE or the other codes. However, the differences among the four datasets became negligible above 100 keV.

  15. A benchmarking study

    Directory of Open Access Journals (Sweden)

    H. Groessing

    2015-02-01

    Full Text Available A benchmark study for permeability measurement is presented. In the past, studies by other research groups focusing on the reproducibility of 1D-permeability measurements showed high standard deviations of the obtained permeability values (25%), even though a defined test rig with required specifications was used. Within this study, the reproducibility of capacitive in-plane permeability testing system measurements was benchmarked by comparing results from two research sites using this technology. The reproducibility was compared using a glass fibre woven textile and a carbon fibre non-crimp fabric (NCF). These two material types were taken into consideration due to the different electrical properties of glass and carbon with respect to the dielectric capacitive sensors of the permeability measurement systems. In order to determine the unsaturated permeability characteristics as a function of fibre volume content, the measurements were executed at three different fibre volume contents, with five repetitions each. It was found that the stability and reproducibility of the presented in-plane permeability measurement system is very good in the case of the glass fibre woven textiles. This is true for the comparison of the repetition measurements as well as for the comparison between the two different permeameters. These positive results were confirmed by a comparison with permeability values of the same textile obtained with an older-generation permeameter applying the same measurement technology. It was also shown that a correct determination of the grammage and the material density is crucial for a correct correlation of measured permeability values and fibre volume contents.

  16. Analysis of a molten salt reactor benchmark

    International Nuclear Information System (INIS)

    Ghosh, Biplab; Bajpai, Anil; Degweker, S.B.

    2013-01-01

    This paper discusses results of our studies of an IAEA molten salt reactor (MSR) benchmark. The benchmark, proposed by Japan, involves burnup calculations of a single lattice cell of a MSR for burning plutonium and other minor actinides. We have analyzed this cell with in-house developed burnup codes BURNTRAN and McBURN. This paper also presents a comparison of the results of our codes and those obtained by the proposers of the benchmark. (author)

  17. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    The objective of this study was to identify usage of foodservice performance measures, important activities in foodservice benchmarking, and benchmarking attitudes, beliefs, and practices by foodservice directors...

  18. SciDB versus Spark: A Preliminary Comparison Based on an Earth Science Use Case

    Science.gov (United States)

    Clune, T.; Kuo, K. S.; Doan, K.; Oloso, A.

    2015-12-01

    We compare two Big Data technologies, SciDB and Spark, for performance, usability, and extensibility, when applied to a representative Earth science use case. SciDB is a new-generation parallel distributed database management system (DBMS) based on the array data model that is capable of handling multidimensional arrays efficiently but requires lengthy data ingest prior to analysis, whereas Spark is a fast and general engine for large-scale data processing that can immediately process raw data files and thereby avoid the ingest process. Once data have been ingested, SciDB is very efficient in database operations such as subsetting. Spark, on the other hand, provides greater flexibility by supporting a wide variety of high-level tools, including DBMS's. For the performance aspect of this preliminary comparison, we configure Spark to operate directly on text or binary data files and thereby limit the need for additional tools. Arguably, a more appropriate comparison would involve exploring other configurations of Spark which exploit supported high-level tools, but that is beyond our current resources. To make the comparison as "fair" as possible, we export the arrays produced by SciDB into text files (or convert them to binary files) for intake by Spark and thereby avoid any additional file processing penalties. The Earth science use case selected for this comparison is the identification and tracking of snowstorms in the NASA Modern Era Retrospective-analysis for Research and Applications (MERRA) reanalysis data. The identification portion of the use case is to flag all grid cells of the MERRA high-resolution hourly data that satisfy our criteria for a snowstorm, whereas the tracking portion connects flagged cells adjacent in time and space to form a snowstorm episode. We will report the results of our comparisons at this presentation.
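
    The identification and tracking steps described above can be sketched as flagging cells and then joining those adjacent in time and space. The random field and the 0.8 threshold below are invented stand-ins for the MERRA data and the snowstorm criteria:

```python
import numpy as np
from collections import deque

# Toy (time, y, x) field standing in for MERRA hourly data; the 0.8 threshold
# is an invented stand-in for the snowstorm criteria.
rng = np.random.default_rng(1)
field = rng.random((4, 5, 5))
flags = field > 0.8                    # identification: flag qualifying cells

def label_episodes(mask):
    """Join flagged cells adjacent in time or space (6-connectivity) into episodes."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                   # cell already belongs to an episode
        count += 1
        labels[seed] = count
        queue = deque([seed])
        while queue:                   # breadth-first flood fill
            t, y, x = queue.popleft()
            for dt, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                n = (t + dt, y + dy, x + dx)
                if all(0 <= n[i] < mask.shape[i] for i in range(3)) \
                        and mask[n] and not labels[n]:
                    labels[n] = count
                    queue.append(n)
    return labels, count

labels, n_episodes = label_episodes(flags)
```

    Each label then corresponds to one space-time connected "snowstorm episode" in this toy setting; production implementations would typically use an optimized connected-components routine.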

  19. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with ''typical'' and ''best-practice'' benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, were developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the
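
    The core benchmarking operation described above reduces to ranking one building's energy use intensity against a peer group. A minimal sketch with invented EUI figures (not Cal-Arch or CBECS data):

```python
# Invented peer-group energy use intensities (kWh/m^2/yr) and one target building
peer_euis = [120.0, 95.0, 150.0, 110.0, 130.0, 170.0, 105.0, 140.0]
my_eui = 115.0

# Fraction of peers using at least as much energy (lower EUI is better)
beats = sum(1 for e in peer_euis if e >= my_eui) / len(peer_euis)
```

    A tool like Cal-Arch presents this kind of ranking as a position within a distribution of buildings of similar type and climate region, which is what makes it a cheap screen for tune-up and retrofit candidates.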

  20. Benchmarking of RESRAD-OFFSITE : transition from RESRAD (onsite) to RESRAD-OFFSITE and comparison of the RESRAD-OFFSITE predictions with peercodes

    International Nuclear Information System (INIS)

    Yu, C.; Gnanapragasam, E.; Cheng, J.-J.; Biwer, B.

    2006-01-01

    The main purpose of this report is to document the benchmarking results and verification of the RESRAD-OFFSITE code as part of the quality assurance requirements of the RESRAD development program. This documentation will enable the U.S. Department of Energy (DOE) and its contractors, and the U.S. Nuclear Regulatory Commission (NRC) and its licensees and other stakeholders to use the quality-assured version of the code to perform dose analysis in a risk-informed and technically defensible manner to demonstrate compliance with the NRC's License Termination Rule, Title 10, Part 20, Subpart E, of the Code of Federal Regulations (10 CFR Part 20, Subpart E); DOE's 10 CFR Part 834, Order 5400.5, ''Radiation Protection of the Public and the Environment''; and other Federal and State regulatory requirements as appropriate. The other purpose of this report is to document the differences and similarities between the RESRAD (onsite) and RESRAD-OFFSITE codes so that users (dose analysts and risk assessors) can make a smooth transition from use of the RESRAD (onsite) code to use of the RESRAD-OFFSITE code for performing both onsite and offsite dose analyses. The evolution of the RESRAD-OFFSITE code from the RESRAD (onsite) code is described in Chapter 1 to help the dose analyst and risk assessor make a smooth conceptual transition from the use of one code to that of the other. Chapter 2 provides a comparison of the predictions of RESRAD (onsite) and RESRAD-OFFSITE for an onsite exposure scenario. Chapter 3 documents the results of benchmarking RESRAD-OFFSITE's atmospheric transport and dispersion submodel against the U.S. Environmental Protection Agency's (EPA's) CAP88-PC (Clean Air Act Assessment Package-1988) and ISCLT3 (Industrial Source Complex-Long Term) models. Chapter 4 documents the comparison results of the predictions of the RESRAD-OFFSITE code and its submodels with the predictions of peer models. This report was prepared by Argonne National Laboratory's (Argonne

  1. A preliminary comparison of mineral deposits in faults near Yucca Mountain, Nevada, with possible analogs

    International Nuclear Information System (INIS)

    Vaniman, D.T.; Bish, D.L.; Chipera, S.

    1988-05-01

    Several faults near Yucca Mountain, Nevada, contain abundant calcite and opal-CT, with lesser amounts of opal-A and sepiolite or smectite. These secondary minerals are being studied to determine the directions, amounts, and timing of transport involved in their formation. Such information is important for evaluating the future performance of a potential high-level nuclear waste repository beneath Yucca Mountain. This report is a preliminary assessment of how those minerals were formed. Possible analog deposits from known hydrothermal veins, warm springs, cold springs or seeps, soils, and aeolian sands were studied by petrographic and x-ray diffraction methods for comparison with the minerals deposited in the faults; there are major mineralogic differences in all of these environments except in the aeolian sands and in some cold seeps. Preliminary conclusions are that the deposits in the faults and in the sand ramps are closely related, and that the process of deposition did not require upward transport from depth. 35 refs., 25 figs.

  2. Benchmarks for GADRAS performance validation

    International Nuclear Information System (INIS)

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L. Jr.

    2009-01-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  3. Benchmarking in Czech Higher Education

    Directory of Open Access Journals (Sweden)

    Plaček Michal

    2015-12-01

    Full Text Available The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expression of the need for it and the importance of benchmarking as a very suitable performance-management tool in less developed countries are the impetus for the second part of our article. Based on an analysis of the current situation and existing needs in the Czech Republic, as well as on a comparison with international experience, recommendations for public policy are made, which lie in the design of a model of collaborative benchmarking for Czech economics and management higher-education programs. Because the fully complex model cannot be implemented immediately – which is also confirmed by structured interviews with academics who have practical experience with benchmarking – the final model is designed as a multi-stage model. This approach helps eliminate major barriers to the implementation of benchmarking.

  4. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.; Tyhurst, Janis

    2015-01-01

    This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes comparison of these websites against a list of criterion and presents a list of services that are most commonly deployed by the selected websites. In addition to that, the investigators proposed a list of services that could be provided via the KAUST library website.

  5. Evaluation and comparison of benchmark QSAR models to predict a relevant REACH endpoint: The bioconcentration factor (BCF)

    Energy Technology Data Exchange (ETDEWEB)

    Gissi, Andrea [Laboratory of Environmental Chemistry and Toxicology, IRCCS – Istituto di Ricerche Farmacologiche Mario Negri, Via La Masa 19, 20156 Milano (Italy); Dipartimento di Farmacia – Scienze del Farmaco, Università degli Studi di Bari “Aldo Moro”, Via E. Orabona 4, 70125 Bari (Italy); Lombardo, Anna; Roncaglioni, Alessandra [Laboratory of Environmental Chemistry and Toxicology, IRCCS – Istituto di Ricerche Farmacologiche Mario Negri, Via La Masa 19, 20156 Milano (Italy); Gadaleta, Domenico [Laboratory of Environmental Chemistry and Toxicology, IRCCS – Istituto di Ricerche Farmacologiche Mario Negri, Via La Masa 19, 20156 Milano (Italy); Dipartimento di Farmacia – Scienze del Farmaco, Università degli Studi di Bari “Aldo Moro”, Via E. Orabona 4, 70125 Bari (Italy); Mangiatordi, Giuseppe Felice; Nicolotti, Orazio [Dipartimento di Farmacia – Scienze del Farmaco, Università degli Studi di Bari “Aldo Moro”, Via E. Orabona 4, 70125 Bari (Italy); Benfenati, Emilio, E-mail: emilio.benfenati@marionegri.it [Laboratory of Environmental Chemistry and Toxicology, IRCCS – Istituto di Ricerche Farmacologiche Mario Negri, Via La Masa 19, 20156 Milano (Italy)

    2015-02-15

    regression (R²=0.85) and sensitivity (average > 0.70) for new compounds in the AD but not present in the training set. However, no single optimal model exists and, thus, a case-by-case assessment would be wise. Yet, integrating the wealth of information from multiple models remains the winning approach. - Highlights: • REACH encourages the use of in silico methods in the assessment of chemicals safety. • The performances of nine BCF models were evaluated on a benchmark database of 851 chemicals. • We compared the models on the basis of both regression and classification performance. • Statistics on chemicals out of the training set and/or within the applicability domain were compiled. • The results show that QSAR models are useful as weight-of-evidence in support of other methods.
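
    The sensitivity figure quoted above is the true-positive rate of the binary (bioaccumulative vs. not) classification. A minimal sketch with invented labels and predictions, not data from the benchmark database:

```python
# Invented BCF classification outcomes: 1 = bioaccumulative, 0 = not
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
sensitivity = tp / (tp + fn)  # fraction of bioaccumulative compounds caught
```

    For regulatory screening, sensitivity matters more than overall accuracy, since a missed bioaccumulative compound (a false negative) is the costly error.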

  6. Evaluation and comparison of benchmark QSAR models to predict a relevant REACH endpoint: The bioconcentration factor (BCF)

    International Nuclear Information System (INIS)

    Gissi, Andrea; Lombardo, Anna; Roncaglioni, Alessandra; Gadaleta, Domenico; Mangiatordi, Giuseppe Felice; Nicolotti, Orazio; Benfenati, Emilio

    2015-01-01

    sensitivity (average > 0.70) for new compounds in the AD but not present in the training set. However, no single optimal model exists and, thus, a case-by-case assessment would be wise. Yet, integrating the wealth of information from multiple models remains the winning approach. - Highlights: • REACH encourages the use of in silico methods in the assessment of chemicals safety. • The performances of nine BCF models were evaluated on a benchmark database of 851 chemicals. • We compared the models on the basis of both regression and classification performance. • Statistics on chemicals out of the training set and/or within the applicability domain were compiled. • The results show that QSAR models are useful as weight-of-evidence in support of other methods.

  7. Comparison of neutron diffusion theory codes in two and three space dimensions using a sodium cooled fast reactor benchmark

    International Nuclear Information System (INIS)

    Butland, A.T.D.; Putney, J.; Sweet, D.W.

    1980-04-01

    This report describes work performed to compare two UK neutron diffusion theory codes, TIGAR and SNAP, with published results for eight other codes available abroad. Both mesh-edge and mesh-centred finite difference diffusion theory codes as well as one axial synthesis code are included in the comparison, and a range of iteration procedures are used by them. Comparison is made of calculations for a model of the sodium cooled fast reactor SNR-300 in both triangular and rectangular geometry and for a range of spatial meshes, enabling extrapolations to infinite mesh to be made. Calculated values of the effective multiplication constant, keff, for all the codes agree very well when extrapolated to infinite mesh, indicating that the calculations contain no significant mesh-independent errors arising from the finite difference approximation. The variation of keff with mesh area is found to be linear for the small meshes considered here, with the gradients for the mesh-centred and mesh-edge codes being of opposite sign. The results obtained using the mesh-centred codes TIGAR, SNAP and CITATION agree closely with one another for all the meshes considered; the mesh-edge codes agree less closely. (author)
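
    The infinite-mesh extrapolation mentioned above exploits the reported linear variation of keff with mesh area: fit keff against the square of the mesh spacing and take the intercept at zero. A sketch with invented keff values, not results from the SNR-300 benchmark:

```python
import numpy as np

# Invented keff results on successively finer meshes (spacing h, arbitrary units);
# the finite-difference error scales with mesh area h^2 for small h.
h = np.array([4.0, 2.0, 1.0])
keff = np.array([1.0120, 1.0105, 1.0101])

# Straight-line fit of keff vs h^2; the intercept is the infinite-mesh estimate
slope, intercept = np.polyfit(h**2, keff, 1)
keff_infinite_mesh = float(intercept)
```

    Because mesh-centred and mesh-edge codes have slopes of opposite sign, their extrapolated intercepts agreeing is a stronger consistency check than agreement at any single finite mesh.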

  8. A Preliminary Investigation on Comparison and Transformation of Sentinel-2 MSI and Landsat 8 OLI

    Science.gov (United States)

    Chen, F.; Lou, S.; Fan, Q.; Li, J.; Wang, C.; Claverie, M.

    2018-05-01

    Timely and accurate earth observation with short revisit interval is usually necessary, especially for emergency response. Currently, several new generation sensors provided with similar channel characteristics have been operated onboard different satellite platforms, including Sentinel-2 and Landsat 8. Joint use of the observations by different sensors offers an opportunity to meet the demands for emergency requirements. For example, through the combination of Landsat and Sentinel-2 data, the land can be observed every 2-3 days at medium spatial resolution. However, differences are expected in radiometric values (e.g., channel reflectance) of the corresponding channels between two sensors. Spectral response function (SRF) is taken as an important aspect of sensor settings. Accordingly, between-sensor differences due to SRF variation need to be quantified and compensated. The comparison of SRFs shows differences (more or less) in channel settings between the Sentinel-2 Multi-Spectral Instrument (MSI) and the Landsat 8 Operational Land Imager (OLI). The effect of the difference in SRF on corresponding values between MSI and OLI was investigated, mainly in terms of channel reflectance and several derived spectral indices. Spectra samples from the ASTER Spectral Library Version 2.0 and Hyperion data archives were used in obtaining channel reflectance simulations of MSI and OLI. Preliminary results show that MSI and OLI are well comparable in several channels with small relative discrepancy (< 5 %). However, the validity of a transformation model is not ensured when the target belongs to another spectra collection. If an improper transformation model is selected, the between-sensor discrepancy may even increase considerably. In conclusion, improving between-sensor consistency through linear transformation based on model(s) generated from other spectra collections remains a challenge.
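
    The channel-reflectance simulation described above amounts to weighting a sample spectrum by each sensor's spectral response function (SRF). The boxcar SRFs and toy spectrum below are invented for illustration; the real MSI and OLI SRFs are published, measured curves:

```python
import numpy as np

# Band-average reflectance: r = sum(SRF * rho) / sum(SRF) over wavelength
wavelengths = np.arange(400.0, 1001.0, 1.0)        # nm
spectrum = 0.2 + 0.3 * (wavelengths > 860)         # toy red-edge spectrum

def band_reflectance(center_nm, width_nm):
    # Boxcar SRF as a crude stand-in for a measured spectral response curve
    srf = ((wavelengths >= center_nm - width_nm / 2) &
           (wavelengths <= center_nm + width_nm / 2)).astype(float)
    return float((srf * spectrum).sum() / srf.sum())

# Hypothetical NIR channels of different widths centred at the same wavelength
r_narrow = band_reflectance(865, 20)   # e.g. a narrow NIR channel (855-875 nm)
r_wide = band_reflectance(865, 30)     # a wider NIR channel
rel_diff_pct = 100 * (r_narrow - r_wide) / r_wide
```

    Even with identical band centres, differing widths sample the red edge differently, producing exactly the kind of between-sensor discrepancy the paper quantifies and then tries to compensate with transformation models.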

  9. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted by comparing different rotor geometries and solutions, keeping the comparison as fair as possible. The objective of the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize the blade solicitation and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts; this shape is considered as a fixed parameter in the benchmarking…

  10. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision…

  11. Benchmarking and Performance Measurement.

    Science.gov (United States)

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  12. Preliminary comparison of the system of AERMOD and ISCST3 models

    International Nuclear Information System (INIS)

    Turtos Carbonell, Leonor; Curbelo Garea, Lariza; Diaz Rivero, Norberto

    2006-01-01

    On October 21st, 2005, the U.S. Environmental Protection Agency (EPA) established AERMOD as the regulatory model to be used for the dispersion of pollutants at local scale, in substitution of the ISCST3 model used up to that moment. Whenever a new dispersion model appears, it is necessary for the scientific community to make a comparison in order to discover the differences between the results obtained with the new model and the previous one. Considering the above, this work makes a preliminary comparison between the maximum concentrations calculated by each model (ISCST3 and AERMOD) for a specific case study consisting of eleven batteries of generation sets distributed throughout Havana City, which will operate in base-load mode and use a fuel oil with 4% sulphur. The modelling domain is 50 x 37 km with 1 x 1 km cells, for a total of 1850 calculation points (receptors) located in all of Havana City and the bordering municipalities of Havana province. At each of these receptors, the dispersion of SO2 and NOx was modelled.

  13. A preliminary comparison between TOVS and GOME level 2 ozone data

    Science.gov (United States)

    Rathman, William; Monks, Paul S.; Llewellyn-Jones, David; Burrows, John P.

    1997-09-01

    A preliminary comparison between total column ozone concentration values derived from the TIROS Operational Vertical Sounder (TOVS) and the Global Ozone Monitoring Experiment (GOME) has been carried out. Two comparisons of ozone datasets have been made: a) TOVS ozone analysis maps vs. GOME level 2 data; b) TOVS data located at Northern Hemisphere Ground Ozone Stations (NHGOS) vs. GOME data. Both analyses consistently showed an offset in the value of the total column ozone between the datasets [for analysis a), 35 Dobson Units (DU); for analysis b), 10 DU], despite a good correlation between the spatial and temporal features of the datasets. A noticeably poor correlation in the latitudinal bands 10°/20° North and 10°/20° South was observed; the reasons for this are discussed. The smallest region that was statistically representative of the ozone value correlation dataset of TOVS data at NHGOS and GOME level-2 data was determined to be a region enclosed by an effective radius of 0.75 arc-degrees (83.5 km).
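
    The offset-despite-correlation finding above can be reproduced on synthetic data by computing the mean difference and the correlation between coincident column amounts. The numbers below are invented, not TOVS or GOME retrievals:

```python
import numpy as np

# Invented coincident total-column ozone values (Dobson Units)
rng = np.random.default_rng(2)
gome = 280 + 40 * rng.random(50)               # "GOME" columns
tovs = gome + 10 + rng.normal(0, 2, 50)        # "TOVS" with a ~10 DU offset

offset_du = float(np.mean(tovs - gome))        # systematic offset estimate
corr = float(np.corrcoef(tovs, gome)[0, 1])    # spatial/temporal agreement
```

    A high correlation together with a nonzero mean difference is the signature of a calibration-like bias between retrievals rather than random disagreement.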

  14. Benchmarking of hospital information systems – a comparative analysis of German-speaking benchmarking clusters

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system’s and information management’s costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme observes both general conditions and examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their cooperation forms. The benchmarking clusters also deal with different benchmarking subjects. Assessing the costs and quality of application systems, physical data processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  15. Benchmarking in the Netherlands

    International Nuclear Information System (INIS)

    1999-01-01

    In two articles an overview is given of the activities in the Dutch industry and energy sector with respect to benchmarking. In benchmarking operational processes of different competitive businesses are compared to improve your own performance. Benchmark covenants for energy efficiency between the Dutch government and industrial sectors contribute to a growth of the number of benchmark surveys in the energy intensive industry in the Netherlands. However, some doubt the effectiveness of the benchmark studies

  16. Atomic Energy Research benchmark activity

    International Nuclear Information System (INIS)

    Makai, M.

    1998-01-01

    The test problems utilized in the validation and verification process of computer programs in Atomic Energy Research are collected into one set. This is the first step towards issuing a volume in which tests for VVER are collected, along with reference solutions and a number of solutions. The benchmarks do not include the ZR-6 experiments because they have been published, along with a number of comparisons, in the final reports of TIC. The present collection focuses on operational and mathematical benchmarks which cover almost the entire range of reactor calculation. (Author)

  17. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    output includes a plot of the MAAP calculation and the plant data. For the large integral experiments, a major part, but not all, of the MAAP code is needed. These use an experiment-specific benchmark routine that includes all of the information and boundary conditions for performing the calculation, as well as the information on which parts of MAAP are unnecessary and can be 'bypassed'. Lastly, the separate effects tests only require a few MAAP routines. These are exercised through their own specific benchmark routine that includes the experiment-specific information and boundary conditions. This benchmark routine calls the appropriate MAAP routines from the source code, performs the calculations, including integration where necessary, and provides the comparison between the MAAP calculation and the experimental observations. (author)

  18. Numisheet2005 Benchmark Analysis on Forming of an Automotive Underbody Cross Member: Benchmark 2

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao Jian

    2005-01-01

    This report presents an international cooperation benchmark effort focusing on simulations of a sheet metal stamping process. A forming process of an automotive underbody cross member using steel and aluminum blanks is used as a benchmark. Simulation predictions from each submission are analyzed via comparison with the experimental results. A brief summary of various models submitted for this benchmark study is discussed. Prediction accuracy of each parameter of interest is discussed through the evaluation of cumulative errors from each submission

  19. Core Benchmarks Descriptions

    International Nuclear Information System (INIS)

    Pavlovichev, A.M.

    2001-01-01

    Current regulations require that the design of new fuel cycles for nuclear power installations include a calculational justification performed by certified computer codes. It guarantees that obtained calculational results will be within the limits of declared uncertainties that are indicated in a certificate issued by Gosatomnadzor of the Russian Federation (GAN) for the corresponding computer code. A formal justification of declared uncertainties is the comparison of calculational results obtained by a commercial code with the results of experiments or of calculational tests that are calculated, with a defined uncertainty, by certified precision codes of the MCU type or similar. The actual level of international cooperation provides an enlarging of the bank of experimental and calculational benchmarks acceptable for certification of commercial codes that are being used for the design of fuel loadings with MOX fuel. In particular, work on forming the list of calculational benchmarks for certification of the code TVS-M as applied to MOX fuel assembly calculations is practically finished. Results of these activities are presented.

  20. A PRELIMINARY INVESTIGATION ON COMPARISON AND TRANSFORMATION OF SENTINEL-2 MSI AND LANDSAT 8 OLI

    Directory of Open Access Journals (Sweden)

    F. Chen

    2018-05-01

    Full Text Available Timely and accurate earth observation with a short revisit interval is often necessary, especially for emergency response. Currently, several new-generation sensors with similar channel characteristics operate onboard different satellite platforms, including Sentinel-2 and Landsat 8. Joint use of the observations from different sensors offers an opportunity to meet emergency requirements. For example, through the combination of Landsat and Sentinel-2 data, the land can be observed every 2–3 days at medium spatial resolution. However, differences are expected in radiometric values (e.g., channel reflectance) of the corresponding channels between the two sensors. The spectral response function (SRF) is an important aspect of sensor settings. Accordingly, between-sensor differences due to SRF variation need to be quantified and compensated. The comparison of SRFs shows differences (more or less) in channel settings between the Sentinel-2 Multi-Spectral Instrument (MSI) and the Landsat 8 Operational Land Imager (OLI). The effect of the difference in SRF on corresponding values between MSI and OLI was investigated, mainly in terms of channel reflectance and several derived spectral indices. Spectra samples from the ASTER Spectral Library Version 2.0 and Hyperion data archives were used to simulate channel reflectance for MSI and OLI. Preliminary results show that MSI and OLI are well comparable in several channels, with small relative discrepancy (< 5 %), including the Coastal Aerosol channel, a NIR (855–875 nm) channel, the SWIR channels, and the Cirrus channel. Meanwhile, for the channels covering Blue, Green, Red, and NIR (785–900 nm), the between-sensor differences are significant. Compared with the difference in reflectance of each individual channel, the difference in derived spectral index is more
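The channel-reflectance simulation described above reduces to an SRF-weighted average of a surface spectrum. A minimal sketch follows; the wavelength grids and box-shaped response curves are illustrative assumptions, not the actual MSI/OLI SRFs.

```python
import numpy as np

def band_reflectance(wl, refl, srf_wl, srf):
    """Channel reflectance simulated as the SRF-weighted average
    of a surface reflectance spectrum."""
    # Interpolate the spectral response onto the spectrum's wavelength grid
    w = np.interp(wl, srf_wl, srf, left=0.0, right=0.0)
    return float(np.sum(w * refl) / np.sum(w))

# Illustrative box-shaped responses for an MSI-like NIR (785-900 nm) and an
# OLI-like NIR (855-875 nm) channel; a spectrally flat 30% reflector must
# yield identical channel reflectances for both sensors.
wl = np.linspace(400.0, 1000.0, 601)                      # nm
flat = np.full_like(wl, 0.30)
msi_nir = band_reflectance(wl, flat, np.array([780.0, 785.0, 900.0, 905.0]),
                           np.array([0.0, 1.0, 1.0, 0.0]))
oli_nir = band_reflectance(wl, flat, np.array([850.0, 855.0, 875.0, 880.0]),
                           np.array([0.0, 1.0, 1.0, 0.0]))
rel_diff = abs(msi_nir - oli_nir) / oli_nir               # between-sensor discrepancy
```

For real (non-flat) spectra the two channels disagree, and a transformation fitted over a spectral library can compensate the between-sensor bias.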

  1. Concrete benchmark experiment: ex-vessel LWR surveillance dosimetry

    International Nuclear Information System (INIS)

    Ait Abderrahim, H.; D'Hondt, P.; Oeyen, J.; Risch, P.; Bioux, P.

    1993-09-01

    The analysis of DOEL-1 in-vessel and ex-vessel neutron dosimetry, using the DOT 3.5 Sn code coupled with the VITAMIN-C cross-section library, showed the same C/E values for different detectors at the surveillance capsule and ex-vessel cavity positions. These results seem to be in contradiction with those obtained in several benchmark experiments (PCA, PSF, VENUS...) using the same computational tools. Indeed, a strong decreasing radial trend of the C/E was observed, partly explained by the overestimation of iron inelastic scattering. The flat trend seen in DOEL-1 could be explained by compensating errors in the calculation, such as the backscattering due to the concrete walls outside the cavity. The 'Concrete Benchmark' experiment has been designed to judge the ability of these calculation methods to treat backscattering. This paper describes the 'Concrete Benchmark' experiment, the measured and computed neutron dosimetry results, and their comparison. This preliminary analysis seems to indicate an overestimation of the backscattering effect in the calculations. (authors). 5 figs., 1 tab., 7 refs
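The C/E (calculated-to-experimental) comparison central to such dosimetry analyses is a simple per-detector ratio; a sketch with hypothetical reaction rates, for illustration only:

```python
# Hypothetical detector reaction rates (reactions per atom per second);
# the detector names and values are invented for illustration
calculated = {"Np-237(n,f)": 3.10e-14, "Ni-58(n,p)": 8.00e-16}
measured   = {"Np-237(n,f)": 3.00e-14, "Ni-58(n,p)": 9.10e-16}

# C/E per detector; a flat radial trend of C/E may reflect either a good
# model or, as suspected for DOEL-1, compensating errors
ce = {det: calculated[det] / measured[det] for det in calculated}
```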

  2. Free-field ground motions for the nonproliferation experiment: Preliminary comparisons with nearby nuclear events

    International Nuclear Information System (INIS)

    Olsen, K.H.; Peratt, A.L.

    1994-01-01

    Since 1987, we have installed fixed arrays of tri-axial accelerometers in the free-field near the shot horizons for low-yield (≤ 20 kt) nuclear events in the N-tunnel complex beneath Rainier Mesa. For the Nonproliferation Experiment (NPE) we augmented the array to achieve 23 free-field stations. Goals are: (a) to examine the robustness and stability of various free-field source function estimates -- e.g., reduced displacement potentials (RDP) and spectra; (b) to compare close-in with regional estimates to test whether detailed close-in free-field and/or surface ground motion data can improve the predictability of regional-teleseismic source functions; (c) to provide experimental data for checking two-dimensional numerical simulations. We report preliminary comparisons between experimental free-field data for NPE (1993) and three nearby nuclear events (MISTY ECHO, 1988; MINERAL QUARRY, 1990; HUNTERS TROPHY, 1992). All four working points are within 1 km of each other in the same wet tuff bed, thus reducing concerns about possible large differences in material properties between widely separated shots. Initial comparison of acceleration and velocity seismograms for the four events reveals: (1) There is a large departure from the spherical symmetry commonly assumed in analytic treatments of source theory; both vertical and tangential components are surprisingly large. (2) All shots show similar first-peak particle-velocity amplitude decay rates, suggesting significant attenuation even in the supposedly purely elastic region. (3) Sharp (>20 Hz) arrivals are not observed at tunnel level from near-surface pP reflections or spall-closure sources -- but broadened peaks are seen that suggest more diffuse reflected energy from the surface and from the Paleozoic limestone basement below tunnel level

  3. High Energy Physics (HEP) benchmark program

    International Nuclear Information System (INIS)

    Yasu, Yoshiji; Ichii, Shingo; Yashiro, Shigeo; Hirayama, Hideo; Kokufuda, Akihiro; Suzuki, Eishin.

    1993-01-01

    High Energy Physics (HEP) benchmark programs are indispensable tools for selecting a suitable computer for an HEP application system. Industry-standard benchmark programs cannot be used for this kind of particular selection. The CERN and SSC benchmark suites are well-known HEP benchmark programs for this purpose. The CERN suite includes event reconstruction and event generator programs, while the SSC suite includes event generators. In this paper, we found that the results from these two suites are not consistent, and that the result from the industry benchmark does not agree with either of them. In addition, we describe a comparison of benchmark results using the EGS4 Monte Carlo simulation program with those from the two HEP benchmark suites, and found that the EGS4 result is not consistent with either of them. The industry-standard SPECmark values on various computer systems are not consistent with the EGS4 results either. Because of these inconsistencies, we point out the necessity of standardizing HEP benchmark suites. Also, an EGS4 benchmark suite should be developed for users of applications such as medical science, nuclear power plants, nuclear physics and high energy physics. (author)

  4. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  5. Benchmarking for Higher Education.

    Science.gov (United States)

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  6. Benchmarking and validation activities within JEFF project

    Directory of Open Access Journals (Sweden)

    Cabellos O.

    2017-01-01

    Full Text Available The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.

  7. Benchmarking and validation activities within JEFF project

    Science.gov (United States)

    Cabellos, O.; Alvarez-Velarde, F.; Angelone, M.; Diez, C. J.; Dyrda, J.; Fiorito, L.; Fischer, U.; Fleming, M.; Haeck, W.; Hill, I.; Ichou, R.; Kim, D. H.; Klix, A.; Kodeli, I.; Leconte, P.; Michel-Sendis, F.; Nunnenmann, E.; Pecchia, M.; Peneliau, Y.; Plompen, A.; Rochman, D.; Romojaro, P.; Stankovskiy, A.; Sublet, J. Ch.; Tamagno, P.; Marck, S. van der

    2017-09-01

    The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus, requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.

  8. Results of a benchmark study for the seismic analysis and testing of WWER type NPPs: Overview and general comparison for Paks NPP

    International Nuclear Information System (INIS)

    Guerpinar, A.; Zola, M.

    2001-01-01

    Within the framework of the IAEA coordinated 'Benchmark Study for the seismic analysis and testing of WWER-type NPPs', in-situ dynamic structural testing activities have been performed at the Paks Nuclear Power Plant in Hungary. The specific objective of the investigation was to obtain experimental data on the actual dynamic structural behaviour of the plant's major constructions and equipment under normal operating conditions, for enabling a valid seismic safety review to be made. This paper reports on the comparison of the results obtained from the experimental activities performed by ISMES with those coming from analytical studies performed for the Coordinated Research Programme (CRP) by Siemens (Germany), EQE (Bulgaria), Central Laboratory (Bulgaria), M. David Consulting (Czech Republic), and IVO (Finland). This paper gives a synthetic description of the conducted experiments and presents some results, regarding in particular the free-field excitations produced during the earthquake-simulation experiments and an experiment on the global effects of dynamic soil-structure interaction at the base of the reactor containment structure. The specific objective of the experimental investigation was to obtain valid data on the dynamic behaviour of the plant's major constructions, under normal operating conditions, to support the analytical assessment of their actual seismic safety. The full-scale dynamic structural testing activities were performed in December 1994 at the Paks (H) Nuclear Power Plant. The Paks NPP site was subjected to low-level earthquake-like ground shaking, through appropriately devised underground explosions, and the dynamic response of the important structures of the plant's 1st reactor unit was appropriately measured and digitally recorded, with the whole nuclear power plant under normal operating conditions. In-situ free-field response was measured concurrently and, moreover, site-specific geophysical and seismological data were simultaneously

  9. SP2Bench: A SPARQL Performance Benchmark

    Science.gov (United States)

    Schmidt, Michael; Hornung, Thomas; Meier, Michael; Pinkel, Christoph; Lausen, Georg

    A meaningful analysis and comparison of both existing storage schemes for RDF data and evaluation approaches for SPARQL queries necessitates a comprehensive and universal benchmark platform. We present SP2Bench, a publicly available, language-specific performance benchmark for the SPARQL query language. SP2Bench is settled in the DBLP scenario and comprises a data generator for creating arbitrarily large DBLP-like documents and a set of carefully designed benchmark queries. The generated documents mirror vital key characteristics and social-world distributions encountered in the original DBLP data set, while the queries implement meaningful requests on top of this data, covering a variety of SPARQL operator constellations and RDF access patterns. In this chapter, we discuss requirements and desiderata for SPARQL benchmarks and present the SP2Bench framework, including its data generator, benchmark queries and performance metrics.

  10. Benchmarking of refinery emissions performance : Executive summary

    International Nuclear Information System (INIS)

    2003-07-01

    This study was undertaken to collect emissions performance data for Canadian and comparable American refineries. The objective was to examine parameters that affect refinery air emissions performance and develop methods or correlations to normalize emissions performance. Another objective was to correlate and compare the performance of Canadian refineries to comparable American refineries. For the purpose of this study, benchmarking involved the determination of levels of emission performance that are being achieved for generic groups of facilities. A total of 20 facilities were included in the benchmarking analysis, and 74 American refinery emission correlations were developed. The recommended benchmarks, and the application of those correlations for comparison between Canadian and American refinery performance, were discussed. The benchmarks were: sulfur oxides, nitrogen oxides, carbon monoxide, particulate, volatile organic compounds, ammonia and benzene. For each refinery in Canada, benchmark emissions were developed. Several factors can explain differences in Canadian and American refinery emission performance. 4 tabs., 7 figs

  11. Benchmarking local healthcare-associated infections: Available benchmarks and interpretation challenges

    Directory of Open Access Journals (Sweden)

    Aiman El-Saed

    2013-10-01

    Full Text Available Summary: Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infections (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when considering such a benchmark. The GCC center for infection control took some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons. Keywords: Benchmarking, Comparison, Surveillance, Healthcare-associated infections
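Indirect standardization, one of the risk-adjustment methods listed above, compares observed infections with the number expected from benchmark stratum rates. A minimal sketch; the stratum rates and counts below are invented for illustration, not real surveillance values.

```python
# Benchmark infection rates per 1,000 device-days, by stratum (invented values)
benchmark_rates = {"ICU": 2.1, "general ward": 0.9}
local_device_days = {"ICU": 4000, "general ward": 10000}
observed_infections = 15

# Expected infections if the local facility performed at the benchmark rates
expected = sum(benchmark_rates[s] * local_device_days[s] / 1000.0
               for s in benchmark_rates)
# Standardized infection ratio: < 1 means fewer HAIs than the benchmark predicts
sir = observed_infections / expected
```

The same structure underlies crude-versus-adjusted comparisons: benchmarking the raw infection count alone would ignore the facility's case mix, which the stratum weighting accounts for.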

  12. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  13. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    Full Text Available In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution's competitive position and learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled developing a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in a higher education setting. The study was performed on the basis of published reports from benchmarking projects, scientific literature and the author's experience from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  14. Preliminary enviromagnetic comparison of the moss, lichen, and filter fabric bags to air pollution monitoring

    Directory of Open Access Journals (Sweden)

    Hanna Salo

    2014-08-01

    Full Text Available Air quality and anthropogenic air pollutants are usually investigated by passive biomonitoring, which utilizes native species. Active biomonitoring, instead, refers to the use of transplants or bags in areas lacking native species. In Finland, the standardized moss bag technique SFS 5794 is commonly applied in active monitoring, but there is still need for simpler and labor-saving sample material, even on an international scale. This article focuses on a preliminary comparison of the usability and collection efficiency of bags made of the moss Sphagnum papillosum, the lichen Hypogymnia physodes, and filter fabric (Filtrete™) in active biomonitoring of air pollutants around an industrial site in Harjavalta, SW Finland. The samples are analyzed with magnetic methods (i.e., magnetic susceptibility, isothermal remanent magnetization, hysteresis loops and hysteresis parameters) highly suitable as a first-step tool for pollution studies. The results show that the highest magnetic susceptibility of each sample material is measured close to the industrial site. Furthermore, moss bags accumulate more magnetic material than lichen bags which, on the contrary, perform better at further distances. Filter fabric bags are tested only at 1 km sites, indicating a good accumulation capability near the source. Pseudo-single-domain (PSD) magnetite is identified as the main magnetic mineral in all sample materials, and good correlations are found between different bag types. To conclude, all three materials effectively accumulate air pollutants and are suitable for air quality studies. The results of this article provide a base for later studies, which are needed in order to fully determine a new, efficient, and easy sample material for active monitoring.

  15. MOx Depletion Calculation Benchmark

    International Nuclear Information System (INIS)

    San Felice, Laurence; Eschbach, Romain; Dewi Syarifah, Ratna; Maryam, Seif-Eddine; Hesketh, Kevin

    2016-01-01

    Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Scientific Issues of Reactor Systems (WPRS) has been established to study the reactor physics, fuel performance, radiation transport and shielding, and the uncertainties associated with modelling of these phenomena in present and future nuclear power systems. The WPRS has different expert groups to cover a wide range of scientific issues in these fields. The Expert Group on Reactor Physics and Advanced Nuclear Systems (EGRPANS) was created in 2011 to perform specific tasks associated with reactor physics aspects of present and future nuclear power systems. EGRPANS provides expert advice to the WPRS and the nuclear community on the development needs (data and methods, validation experiments, scenario studies) for different reactor systems, and also provides specific technical information regarding: core reactivity characteristics, including fuel depletion effects; core power/flux distributions; core dynamics and reactivity control. In 2013 EGRPANS published a report that investigated fuel depletion effects in a Pressurised Water Reactor (PWR). This was entitled 'International Comparison of a Depletion Calculation Benchmark on Fuel Cycle Issues', NEA/NSC/DOC(2013), and documented a benchmark exercise for UO2 fuel rods. This report documents a complementary benchmark exercise that focused on PuO2/UO2 Mixed Oxide (MOX) fuel rods. The results are especially relevant to the back-end of the fuel cycle, including irradiated fuel transport, reprocessing, interim storage and waste repository. Saint-Laurent B1 (SLB1) was the first French reactor to use MOX assemblies. SLB1 is a 900 MWe PWR with 30% MOX fuel loading. The standard MOX assemblies used in the Saint-Laurent B1 reactor include three zones with different plutonium enrichments: high Pu content (5.64%) in the center zone, medium Pu content (4.42%) in the intermediate zone and low Pu content (2.91%) in the peripheral zone

  16. Spectrum integrated (n,He) cross section comparison and least squares analysis for 6Li and 10B in benchmark fields

    International Nuclear Information System (INIS)

    Schenter, R.E.; Oliver, B.M.; Farrar, H. IV

    1987-01-01

    Spectrum integrated cross sections for 6Li and 10B from five benchmark fast reactor neutron fields are compared with calculated values obtained using the ENDF/B-V Cross Section Files. The benchmark fields include the Coupled Fast Reactivity Measurements Facility (CFRMF) at the Idaho National Engineering Laboratory, the 10% Enriched U-235 Critical Assembly (BIG-10) at Los Alamos National Laboratory, the Sigma Sigma and Fission Cavity fields of the BR-1 reactor at CEN/SCK, and the Intermediate-Energy Standard Neutron Field (ISNF) at the National Bureau of Standards. Results from least squares analyses using the FERRET computer code to obtain adjusted cross section values and their uncertainties are presented. Input to these calculations includes the above five benchmark data sets. These analyses indicate a need for revision in the ENDF/B-V files for the 10B cross section for energies above 50 keV
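The least-squares adjustment performed by codes like FERRET can be sketched, in toy dimensions, as a one-step generalized least-squares update of prior cross sections against integral measurements. The numbers below are illustrative placeholders, not evaluated nuclear data.

```python
import numpy as np

def gls_adjust(prior, prior_cov, G, meas, meas_cov):
    """Generalized least-squares adjustment: update prior cross sections
    (with covariance) against integral measurements meas ~ G @ sigma."""
    S = G @ prior_cov @ G.T + meas_cov            # covariance of the residual
    K = prior_cov @ G.T @ np.linalg.inv(S)        # gain matrix
    adjusted = prior + K @ (meas - G @ prior)     # adjusted cross sections
    adjusted_cov = prior_cov - K @ G @ prior_cov  # reduced uncertainty
    return adjusted, adjusted_cov

# Toy case: one cross section, one spectrum-integrated measurement
prior = np.array([1.0])           # prior value (arbitrary units)
prior_cov = np.array([[0.04]])    # 20% prior standard deviation
G = np.array([[1.0]])             # sensitivity of the measurement to sigma
meas = np.array([1.2])            # benchmark-field measurement
meas_cov = np.array([[0.04]])     # measurement variance
adj, adj_cov = gls_adjust(prior, prior_cov, G, meas, meas_cov)
```

With equal prior and measurement variances the discrepancy is split evenly, so the adjusted value lands at 1.1 and the posterior variance halves to 0.02; with many benchmark fields, G gains one row per spectrum-integrated measurement.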

  17. Spectrum integrated (n,He) cross section comparisons and least squares analyses for 6Li and 10B in benchmark fields

    International Nuclear Information System (INIS)

    Schenter, R.E.; Oliver, B.M.; Farrar, H. IV.

    1986-06-01

    Spectrum integrated cross sections for 6Li and 10B from five benchmark fast reactor neutron fields are compared with calculated values obtained using the ENDF/B-V Cross Section Files. The benchmark fields include the Coupled Fast Reactivity Measurements Facility (CFRMF) at the Idaho National Engineering Laboratory, the 10% Enriched U-235 Critical Assembly (BIG-10) at Los Alamos National Laboratory, the Sigma-Sigma and Fission Cavity fields of the BR-1 reactor at CEN/SCK, and the Intermediate Energy Standard Neutron Field (ISNF) at the National Bureau of Standards. Results from least squares analyses using the FERRET computer code to obtain adjusted cross section values and their uncertainties are presented. Input to these calculations includes the above five benchmark data sets. These analyses indicate a need for revision in the ENDF/B-V files for the 10B and 6Li cross sections for energies above 50 keV

  18. A Comparison of Evidence-Based Estimates and Empirical Benchmarks of the Appropriate Rate of Use of Radiation Therapy in Ontario

    International Nuclear Information System (INIS)

    Mackillop, William J.; Kong, Weidong; Brundage, Michael; Hanna, Timothy P.; Zhang-Salomons, Jina; McLaughlin, Pierre-Yves; Tyldesley, Scott

    2015-01-01

    Purpose: Estimates of the appropriate rate of use of radiation therapy (RT) are required for planning and monitoring access to RT. Our objective was to compare estimates of the appropriate rate of use of RT derived from mathematical models with the rate observed in a population of patients with optimal access to RT. Methods and Materials: The rate of use of RT within 1 year of diagnosis (RT1Y) was measured in the 134,541 cases diagnosed in Ontario between November 2009 and October 2011. The lifetime rate of use of RT (RTlifetime) was estimated by the multicohort utilization table method. Poisson regression was used to evaluate potential barriers to access to RT and to identify a benchmark subpopulation with unimpeded access to RT. Rates of use of RT were measured in the benchmark subpopulation and compared with published evidence-based estimates of the appropriate rates. Results: The benchmark rate for RT1Y, observed under conditions of optimal access, was 33.6% (95% confidence interval [CI], 33.0%-34.1%), and the benchmark for RTlifetime was 41.5% (95% CI, 41.2%-42.0%). Benchmarks for RTlifetime for 4 of 5 selected sites and for all cancers combined were significantly lower than the corresponding evidence-based estimates. Australian and Canadian evidence-based estimates of RTlifetime for 5 selected sites differed widely. RTlifetime in the overall population of Ontario was just 7.9% short of the benchmark but 20.9% short of the Australian evidence-based estimate of the appropriate rate. Conclusions: Evidence-based estimates of the appropriate lifetime rate of use of RT may overestimate the need for RT in Ontario
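An empirical utilization benchmark of this kind is, at its core, a proportion with a confidence interval. A minimal sketch using a normal approximation and hypothetical counts; the paper itself uses Poisson regression and a multicohort utilization table, which this does not reproduce.

```python
from math import sqrt

def rate_with_ci(events, n, z=1.96):
    """Observed rate of use with a normal-approximation 95% CI."""
    p = events / n
    se = sqrt(p * (1.0 - p) / n)  # standard error of a proportion
    return p, (p - z * se, p + z * se)

# Hypothetical benchmark subpopulation: 33,600 of 100,000 cases
# treated with RT within 1 year of diagnosis
rt_1y, (lo, hi) = rate_with_ci(33600, 100000)
```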

  19. Assessment of the monitoring and evaluation system for integrated community case management (ICCM) in Ethiopia: a comparison against global benchmark indicators.

    Science.gov (United States)

    Mamo, Dereje; Hazel, Elizabeth; Lemma, Israel; Guenther, Tanya; Bekele, Abeba; Demeke, Berhanu

    2014-10-01

    Program managers require feasible, timely, reliable, and valid measures of iCCM implementation to identify problems and assess progress. The global iCCM Task Force developed benchmark indicators to guide implementers to develop or improve monitoring and evaluation (M&E) systems. We assessed Ethiopia's iCCM M&E system by determining the availability and feasibility of the iCCM benchmark indicators. We conducted a desk review of iCCM policy documents, monitoring tools, survey reports, and other relevant documents, and key informant interviews with government and implementing partners involved in iCCM scale-up and M&E. Currently, Ethiopia collects data to inform most (70% [33/47]) iCCM benchmark indicators, and modest extra effort could boost this to 83% (39/47). Eight (17%) are not available given the current system. Most benchmark indicators that track coordination and policy, human resources, service delivery and referral, supervision, and quality assurance are available through the routine monitoring systems or periodic surveys. Indicators for supply chain management are less available due to limited consumption data and a weak link with treatment data. Little information is available on iCCM costs. Benchmark indicators can detail the status of iCCM implementation; however, some indicators may not fit country priorities, and others may be difficult to collect. The government of Ethiopia and partners should review and prioritize the benchmark indicators to determine which should be included in the routine M&E system, especially since iCCM data are being reviewed for addition to the HMIS. Moreover, the Health Extension Workers' reporting burden can be minimized by an integrated reporting approach.

  20. Benchmark neutron porosity log calculations

    International Nuclear Information System (INIS)

    Little, R.C.; Michael, M.; Verghese, K.; Gardner, R.P.

    1989-01-01

    Calculations have been made for a benchmark neutron porosity log problem with the general purpose Monte Carlo code MCNP and the specific purpose Monte Carlo code McDNL. For accuracy and timing comparison purposes the CRAY XMP and MicroVax II computers have been used with these codes. The CRAY has been used for an analog version of the MCNP code while the MicroVax II has been used for the optimized variance reduction versions of both codes. Results indicate that the two codes give the same results within calculated standard deviations. Comparisons are given and discussed for accuracy (precision) and computation times for the two codes
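"The same results within calculated standard deviations" is the standard acceptance test for a code-to-code Monte Carlo comparison: the difference between two tallies should fall within a few combined standard deviations. A minimal sketch; the tally values below are made up for illustration.

```python
from math import sqrt

def consistent(x1, s1, x2, s2, k=2.0):
    """True if two Monte Carlo estimates agree within k combined
    standard deviations (uncorrelated uncertainties assumed)."""
    return abs(x1 - x2) <= k * sqrt(s1 * s1 + s2 * s2)

# Made-up porosity-tally results from two codes
agree = consistent(0.215, 0.002, 0.218, 0.002)     # within 2 sigma
disagree = consistent(0.215, 0.001, 0.225, 0.001)  # 10 sigma apart
```

Timing comparisons, by contrast, have no statistical test of this kind; they are reported directly per machine and per variance-reduction scheme, as in the text above.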

  1. PRELIMINARY RESULTS OF THE COMPARISON OF SATELLITE IMAGERS USING TUZ GÖLÜ AS A REFERENCE STANDARD

    Directory of Open Access Journals (Sweden)

    H. Özen

    2012-07-01

    Full Text Available Earth surfaces, such as deserts, salt lakes, and playas, have been widely used in the vicarious radiometric calibration of optical earth observation satellites. In 2009, the Infrared and Visible Optical Sensors (IVOS) sub-group of the Committee on Earth Observation Satellites (CEOS) Working Group on Calibration and Validation (WGCV) designated eight LANDNET reference sites to focus international efforts, facilitate traceability and enable the establishment of measurement "best practices." With support from the European Space Agency (ESA), one of the LANDNET sites, the Tuz Gölü salt lake located in central Turkey, was selected to host a cross-comparison of measurement instrumentation and methodologies conducted by 11 different ground teams from across the globe. This paper provides an overview of the preliminary results of the cross-comparison of the ground-based spectral measurements made during the CEOS Land Comparison, 13-27 August 2010, with the simultaneous satellite image data acquisitions of the same site.

  2. Comparison of results from the MCNP criticality validation suite using ENDF/B-VI and preliminary ENDF/B-VII nuclear data

    Energy Technology Data Exchange (ETDEWEB)

    Mosteller, R. D. (Russell D.)

    2004-01-01

    The MCNP Criticality Validation Suite is a collection of 31 benchmarks taken from the International Handbook of Evaluated Criticality Safety Benchmark Experiments. MCNP5 calculations clearly demonstrate that, overall, nuclear data for a preliminary version of ENDF/B-VII produce better agreement with the benchmarks in the suite than do corresponding data from ENDF/B-VI. Additional calculations identify areas where improvements in the data still are needed. Based on results for the MCNP Criticality Validation Suite, the Pre-ENDF/B-VII nuclear data produce substantially better overall results than do their ENDF/B-VI counterparts. The calculated values of keff for bare metal spheres and for an IEU cylinder reflected by normal uranium are in much better agreement with the benchmark values. In addition, the values of keff for the bare metal spheres are much more consistent with those for corresponding metal spheres reflected by normal uranium or water. In addition, a long-standing controversy about the need for an ad hoc adjustment to the 238U resonance integral for thermal systems may finally be resolved. On the other hand, improvements still are needed in a number of areas. Those areas include intermediate-energy cross sections for 235U, angular distributions for elastic scattering in deuterium, and fast cross sections for 237Np.

  3. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.

    1991-01-01

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems.

  4. Preliminary results from comparisons of redundant tiltmeters at three sites in central california

    Science.gov (United States)

    Mortensen, C.E.; Johnston, M.J.S.

    1979-01-01

    The U.S. Geological Survey has been operating a network of shallow-borehole tiltmeters in central California since June 1973. At six sites redundant instruments have been installed as a check on data coherency. These include the Sage Ranch, Tres Pinos, New Idria, Aromas, Bear Valley and San Juan Bautista tiltmeter sites. Preliminary results from the comparison of redundant data from the Aromas, Bear Valley and San Juan Bautista sites for periods of eight, three and seven months respectively, suggest that short period tilt signals in the range 5 min < T < 3-5 h and ranging in amplitude from 5 × 10^-8 to 10^-6 rad, but not including step offsets, show excellent agreement on closely spaced instruments. Agreement is not as good in this period range for instruments at San Juan Bautista with a separation of 200 m. Signals of interest observed in this period range include coseismic tilts, teleseisms and tilts associated with creep events. Tilt signals in the period range 3-5 h < T < 2-5 weeks are not always coherent at all three of the redundant tilt sites studied. Tilt signals in this period range have amplitudes up to 5 × 10^-6 rad and wavelengths down to at least the instrument separation at the closely spaced sites (~several meters). Regarding longer-term coherency, the instruments at San Juan Bautista with 200-m spacing agree within 0.5 µrad for the N-S component and 0.7 µrad for the E-W component for a period of two months. The closely spaced redundant instruments at Aromas agree within 2 µrad for the N-S component and 1 µrad for the E-W component for the eight-month period of operation. Data from the three sites have been checked for effects of temperature, atmospheric pressure and rainfall. The latter appears to be critically site dependent. The worst case tilts for 1 inch of rainfall can be more than 1 µrad with a duration of a few days to a week. Typical rain-induced tilts are less than 0.3 µrad for 1 inch of rain. The two instruments at the Sage Ranch

  5. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption [Dutch] Greenpeace Nederland heeft CE Delft gevraagd een duurzaamheidsmeetlat voor biobrandstoffen voor transport te ontwerpen en hierop de verschillende biobrandstoffen te scoren. Voor een vergelijk zijn ook elektrisch rijden, rijden op waterstof en rijden op benzine of diesel opgenomen. Door onderzoek en voortschrijdend inzicht blijkt steeds vaker dat transportbrandstoffen op basis van biomassa soms net zoveel of zelfs meer broeikasgassen veroorzaken dan fossiele brandstoffen als benzine en diesel. CE Delft heeft voor Greenpeace Nederland op een rijtje gezet wat de huidige inzichten zijn over de duurzaamheid van fossiele brandstoffen, biobrandstoffen en elektrisch rijden. Daarbij is gekeken naar de effecten van de brandstoffen op drie duurzaamheidscriteria, waarbij broeikasgasemissies het zwaarst wegen: (1) Broeikasgasemissies; (2) Landgebruik; en (3) Nutriëntengebruik.

  6. Benchmark calculations of power distribution within fuel assemblies. Phase 2: comparison of data reduction and power reconstruction methods in production codes

    International Nuclear Information System (INIS)

    2000-01-01

    Systems loaded with plutonium in the form of mixed-oxide (MOX) fuel show somewhat different neutronic characteristics compared with those using conventional uranium fuels. In order to maintain adequate safety standards, it is essential to accurately predict the characteristics of MOX-fuelled systems and to further validate both the nuclear data and the computation methods used. A computation benchmark on power distribution within fuel assemblies, comparing different techniques used in production codes for fine flux prediction in systems partially loaded with MOX fuel, was carried out at an international level. It first addressed the numerical schemes for pin power reconstruction, then investigated the global performance, including cross-section data reduction methods. This report provides the detailed results of this second phase of the benchmark. The analysis of the results revealed that basic data still need to be improved, primarily for higher plutonium isotopes and minor actinides. (author)
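
    Pin power reconstruction, the first topic of this benchmark, is conventionally done by modulating the smooth intra-assembly flux shape from the nodal solution with heterogeneous form factors from the single-assembly lattice calculation. The following one-dimensional sketch illustrates only that idea; the function name, data, and normalization choice are invented for illustration and are not any production code's actual implementation.

```python
def reconstruct_pin_powers(homogeneous_flux, form_factors):
    """Sketch of pin power reconstruction: multiply the smooth homogeneous
    flux at each pin position by that pin's heterogeneous form factor, then
    renormalize so the assembly-average power is preserved."""
    raw = [h * f for h, f in zip(homogeneous_flux, form_factors)]
    # Rescale so mean(pin power) equals mean(homogeneous flux).
    scale = (sum(homogeneous_flux) / len(homogeneous_flux)) / (sum(raw) / len(raw))
    return [r * scale for r in raw]

# A flat homogeneous flux over four pins with illustrative form factors:
pins = reconstruct_pin_powers([1.0, 1.0, 1.0, 1.0], [0.95, 1.05, 1.10, 0.90])
```

    With a flat homogeneous flux the reconstructed pin powers simply follow the form factors, while the assembly-average power is unchanged.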

  7. Benchmarking af kommunernes sagsbehandling

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007, the National Social Appeals Board (Ankestyrelsen) is to carry out benchmarking of the quality of the municipalities' casework. The purpose of the benchmarking is to develop the design of the practice reviews with a view to better follow-up and to improve the municipalities' casework. This working paper discusses methods for benchmarking...

  8. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...
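
    The nonparametric (DEA) models mentioned in this record estimate an efficiency frontier from observed inputs and outputs of the benchmarked units. A full DEA score requires solving a linear program per unit; as a simplified stand-in, the closely related Free Disposal Hull (FDH) estimator can be computed directly without an LP solver. This is an illustrative sketch, not the system described in the record; the function name and data are invented.

```python
def fdh_input_efficiency(units, o):
    """Input-oriented Free Disposal Hull efficiency of unit o.
    units: list of (inputs, outputs) pairs, each a list of nonnegative floats.
    Among all units that produce at least o's outputs, find the one that
    allows the largest proportional shrinkage of o's inputs."""
    xo, yo = units[o]
    scores = []
    for xj, yj in units:
        if all(a >= b for a, b in zip(yj, yo)):  # j dominates o in outputs
            # Radial input scaling needed for o to match j's input usage.
            scores.append(max(a / b for a, b in zip(xj, xo)))
    return min(scores)  # <= 1.0; 1.0 means o lies on the FDH frontier

# Three units with one input and one output (illustrative data):
units = [([2.0], [1.0]), ([4.0], [1.0]), ([3.0], [2.0])]
```

    Here unit 1 uses twice the input of unit 0 for the same output, so its FDH efficiency is 0.5, while units 0 and 2 are on the frontier.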

  9. The Drill Down Benchmark

    NARCIS (Netherlands)

    P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel

    1998-01-01

    Data Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark - defined here - provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications. It

  10. Benchmarking Tool Kit.

    Science.gov (United States)

    Canadian Health Libraries Association.

    Nine Canadian health libraries participated in a pilot test of the Benchmarking Tool Kit between January and April, 1998. Although the Tool Kit was designed specifically for health libraries, the content and approach are useful to other types of libraries as well. Used to its full potential, benchmarking can provide a common measuring stick to…

  11. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views...... are put to the test. The first is a reformist benchmarking cycle where organisations defer to experts to create a benchmark that conforms with the broader system of politico-economic norms. The second is a revolutionary benchmarking cycle driven by expert-activists that seek to contest strong vested...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  12. EGS4 benchmark program

    International Nuclear Information System (INIS)

    Yasu, Y.; Hirayama, H.; Namito, Y.; Yashiro, S.

    1995-01-01

    This paper proposes the EGS4 Benchmark Suite, which consists of three programs called UCSAMPL4, UCSAMPL4I and XYZDOS. It also evaluates optimization methods of recent RISC/UNIX systems, such as IBM, HP, DEC, Hitachi and Fujitsu, for the benchmark suite. When particular compiler options and math libraries were included in the evaluation process, the systems performed significantly better. The observed performance of some of the RISC/UNIX systems was beyond that of some so-called mainframes from IBM, Hitachi or Fujitsu. The computer performance of the EGS4 Code System on an HP9000/735 (99MHz) was defined to be one EGS4 Unit. The EGS4 Benchmark Suite was also run on various PCs, such as Pentiums, i486 and DEC Alpha machines. The performance of recent fast PCs reaches that of recent RISC/UNIX systems. The benchmark programs have also been evaluated for correlation with industry benchmark programs, namely SPECmark. (author)

  13. Second benchmark problem for WIPP structural computations

    International Nuclear Information System (INIS)

    Krieg, R.D.; Morgan, H.S.; Hunter, T.O.

    1980-12-01

    This report describes the second benchmark problem for comparison of the structural codes used in the WIPP project. The first benchmark problem consisted of heated and unheated drifts at a depth of 790 m, whereas this problem considers a shallower level (650 m) more typical of the repository horizon. But more important, the first problem considered a homogeneous salt configuration, whereas this problem considers a configuration with 27 distinct geologic layers, including 10 clay layers - 4 of which are to be modeled as possible slip planes. The inclusion of layering introduces complications in structural and thermal calculations that were not present in the first benchmark problem. These additional complications will be handled differently by the various codes used to compute drift closure rates. This second benchmark problem will assess these codes by evaluating the treatment of these complications

  14. Reactor fuel depletion benchmark of TINDER

    International Nuclear Information System (INIS)

    Martin, W.J.; Oliveira, C.R.E. de; Hecht, A.A.

    2014-01-01

    Highlights: • A reactor burnup benchmark of TINDER, coupling MCNP6 to CINDER2008, was performed. • TINDER is a poor candidate for fuel depletion calculations using its current libraries. • Data library modification is necessary if fuel depletion is desired from TINDER. - Abstract: Accurate burnup calculations are key to proper nuclear reactor design, fuel cycle modeling, and disposal estimations. The TINDER code, originally designed for activation analyses, has been modified to handle full burnup calculations, including the widely used predictor–corrector feature. In order to properly characterize the performance of TINDER for this application, a benchmark calculation was performed. Although the results followed the trends of past benchmarked codes for a UO2 PWR fuel sample from the Takahama-3 reactor, there were obvious deficiencies in the final result, likely in the nuclear data library that was used. Isotopic comparisons versus experiment and past code benchmarks are given, as well as hypothesized areas of deficiency and future work
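
    The predictor-corrector feature mentioned in this record can be illustrated for a single nuclide whose removal rate depends on composition. This is a schematic sketch of the general predictor-corrector idea under an assumed exponential depletion model; the function names and rate model are invented for illustration and are not TINDER's actual MCNP6/CINDER2008 coupling.

```python
import math

def predictor_corrector_step(n0, rate_bos, rate_of, dt):
    """One predictor-corrector depletion step for a single nuclide with
    N(t) = N0 * exp(-rate * t), where the removal rate (absorption cross
    section times flux) may depend on the composition.
    Predictor: deplete over dt using the beginning-of-step rate.
    Corrector: re-evaluate the rate at the predicted end-of-step
    composition and redo the step with the averaged rate."""
    n_pred = n0 * math.exp(-rate_bos * dt)      # predictor substep
    rate_eos = rate_of(n_pred)                  # rate at predicted state
    avg_rate = 0.5 * (rate_bos + rate_eos)
    return n0 * math.exp(-avg_rate * dt)        # corrector substep

# With a composition-independent rate the corrector changes nothing:
n_end = predictor_corrector_step(1.0, 0.1, lambda n: 0.1, dt=1.0)
```

    When the rate genuinely varies with composition, the averaged rate lets the step capture part of that feedback without shrinking the time step.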

  15. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  16. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  17. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  18. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  19. Benchmarking and the laboratory

    Science.gov (United States)

    Galloway, M; Nadin, L

    2001-01-01

    This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. Key Words: benchmarking • pathology PMID:11477112

  20. Integral benchmarks with reference to thorium fuel cycle

    International Nuclear Information System (INIS)

    Ganesan, S.

    2003-01-01

    This is a power point presentation about the Indian participation in the CRP 'Evaluated Data for the Thorium-Uranium fuel cycle'. The plans and scope of the Indian participation are to provide selected integral experimental benchmarks for nuclear data validation, including Indian Thorium burn up benchmarks, post-irradiation examination studies, comparison of basic evaluated data files and analysis of selected benchmarks for Th-U fuel cycle

  1. Preliminary report on SG126 Task 3: 129I interlaboratory comparison

    International Nuclear Information System (INIS)

    Roberts, M.L.; Caffee, M.W.; Proctor, I.D.

    1996-01-01

    An interlaboratory comparison exercise for {sup 129}I has been organized and conducted. A total of seven laboratories participated in the exercise to either a full or limited extent. In the comparison, a suite of 11 samples was used. This suite of standards contained both synthetic 'standard type' materials (i.e., AgI) and environmental materials. The isotopic {sup 129}I/{sup 127}I ratio of the samples varied from 10{sup -8} to 10{sup -14}. Results of the comparison are presented.

  2. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.

    1980-02-01

    Shielding benchmark problems prepared by Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by Shielding Laboratory in Japan Atomic Energy Research Institute. Fourteen shielding benchmark problems are presented newly in addition to twenty-one problems proposed already, for evaluating the calculational algorithm and accuracy of computer codes based on discrete ordinates method and Monte Carlo method and for evaluating the nuclear data used in codes. The present benchmark problems are principally for investigating the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  3. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, based toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
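
    The first-tier screening step described in this record reduces to a direct comparison of measured media concentrations against the NOAEL-based benchmarks, retaining any exceedance as a contaminant of potential concern (COPC). A minimal sketch of that screen; the chemical names and values below are made up for illustration and are not from the report.

```python
def screen_copcs(measured, benchmarks):
    """First-tier ecological screen: a chemical is retained as a contaminant
    of potential concern (COPC) when its measured concentration exceeds its
    NOAEL-based toxicological benchmark for the medium; chemicals at or below
    the benchmark may be excluded from further consideration."""
    return sorted(chem for chem, conc in measured.items()
                  if chem in benchmarks and conc > benchmarks[chem])

# Hypothetical concentrations vs. hypothetical benchmarks (same units):
copcs = screen_copcs({"cadmium": 2.4, "lead": 0.8, "zinc": 10.0},
                     {"cadmium": 1.0, "lead": 5.0, "zinc": 50.0})
# copcs -> ["cadmium"]
```

    Only the chemical whose concentration exceeds its benchmark survives the screen and would be carried into the second-tier, weight-of-evidence assessment.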

  4. Comparisons of the MCNP criticality benchmark suite with ENDF/B-VI.8, JENDL-3.3, and JEFF-3.0

    International Nuclear Information System (INIS)

    Kim, Do Heon; Gil, Choong-Sup; Kim, Jung-Do; Chang, Jonghwa

    2003-01-01

    A comparative study has been performed with the latest evaluated nuclear data libraries ENDF/B-VI.8, JENDL-3.3, and JEFF-3.0. The study has been conducted through the benchmark calculations for 91 criticality problems with the libraries processed for MCNP4C. The calculation results have been compared with those of the ENDF60 library. The self-shielding effects of the unresolved-resonance (UR) probability tables have also been estimated for each library. The χ2 differences between the MCNP results and experimental data were calculated for the libraries. (author)
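
    In its usual form, the χ2 figure of merit for such comparisons is the sum of squared deviations between calculated and experimental k_eff values, weighted by the benchmark uncertainties. The abstract does not give the authors' exact weighting, so the following is a sketch under that standard assumption, with invented data.

```python
def chi_squared(calc, expt, sigma):
    """chi2 = sum over benchmarks of ((C_i - E_i) / sigma_i)^2, comparing
    calculated k_eff values (C) against experimental benchmark values (E),
    weighted by the experimental (or combined) uncertainties (sigma)."""
    return sum(((c - e) / s) ** 2 for c, e, s in zip(calc, expt, sigma))

# Two hypothetical criticality benchmarks, each off by one standard deviation:
chi2 = chi_squared(calc=[1.0010, 0.9980],
                   expt=[1.0000, 1.0000],
                   sigma=[0.0010, 0.0020])
```

    A smaller χ2 for one library than another, over the same benchmark set, indicates better overall agreement with experiment.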

  5. Electron Bernstein wave simulations and comparison to preliminary NSTX emission data

    International Nuclear Information System (INIS)

    Preinhaelter, Josef; Urban, Jakub; Pavlo, Pavol; Taylor, Gary; Diem, Steffi; Vahala, Linda; Vahala, George

    2006-01-01

    Simulations indicate that during flattop current discharges the optimal angles for the aiming of the National Spherical Torus Experiment (NSTX) antennae are quite rugged and basically independent of time. The time development of electron Bernstein wave emission (EBWE) at particular frequencies as well as the frequency spectrum of EBWE as would be seen by the recently installed NSTX antennae are computed. The simulation of EBWE at low frequencies (e.g., 16 GHz) agrees well with the recent preliminary EBWE measurements on NSTX. At high frequencies, the sensitivity of EBWE to magnetic field variations is understood by considering the Doppler broadened electron cyclotron harmonics and the cutoffs and resonances in the plasma. Significant EBWE variations are seen if the magnetic field is increased by as little as 2% at the plasma edge. The simulations for the low frequency antenna are compared to preliminary experimental data published separately by Diem et al. [Rev. Sci. Instrum. 77 (2006)].

  6. Comparison of investigator-delineated gross tumor volumes and quality assurance in pancreatic cancer: Analysis of the pretrial benchmark case for the SCALOP trial.

    Science.gov (United States)

    Fokas, Emmanouil; Clifford, Charlotte; Spezi, Emiliano; Joseph, George; Branagan, Jennifer; Hurt, Chris; Nixon, Lisette; Abrams, Ross; Staffurth, John; Mukherjee, Somnath

    2015-12-01

    To evaluate the variation in investigator-delineated volumes and assess plans from the radiotherapy trial quality assurance (RTTQA) program of SCALOP, a phase II trial in locally advanced pancreatic cancer. Participating investigators (n=25) outlined a pre-trial benchmark case as per the RT protocol, and the accuracy of investigators' GTV (iGTV) and PTV (iPTV) was evaluated, against the trials team-defined gold standard GTV (gsGTV) and PTV (gsPTV), using both qualitative and geometric analyses. The median Jaccard Conformity Index (JCI) and Geographical Miss Index (GMI) were calculated. Participating RT centers also submitted a radiotherapy plan for this benchmark case, which was centrally reviewed against protocol-defined constraints. Twenty-five investigator-defined contours were evaluated. The median JCI and GMI of iGTVs were 0.57 (IQR: 0.51-0.65) and 0.26 (IQR: 0.15-0.40). For iPTVs, these were 0.75 (IQR: 0.71-0.79) and 0.14 (IQR: 0.11-0.22), respectively. Qualitative analysis showed the largest variation at the tumor edges and failure to recognize a peri-pancreatic lymph node. There were no major protocol deviations in RT planning, but three minor PTV coverage deviations were identified. SCALOP demonstrated considerable variation in iGTV delineation. RTTQA workshops and real-time central review of delineations are needed in future trials. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
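
    The two geometric measures reported in this record have standard definitions on voxel sets: the Jaccard Conformity Index is the intersection of the two volumes divided by their union, and the Geographical Miss Index is the fraction of the gold-standard volume not covered by the investigator's contour. A sketch under those standard definitions (the trial's exact implementation may differ); the voxel data below are invented.

```python
def jaccard_conformity_index(a, b):
    """JCI = |A ∩ B| / |A ∪ B| for two volumes given as sets of voxel indices."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def geographical_miss_index(gold, investigator):
    """GMI = fraction of the gold-standard volume missed by the investigator:
    |gold minus investigator| / |gold|."""
    gold, inv = set(gold), set(investigator)
    return len(gold - inv) / len(gold) if gold else 0.0

# Tiny illustrative 2-D "volumes" as sets of voxel coordinates:
gs_gtv = {(0, 0), (0, 1), (1, 0), (1, 1)}   # gold-standard GTV
i_gtv = {(0, 1), (1, 0), (1, 1), (2, 1)}    # an investigator's GTV
jci = jaccard_conformity_index(gs_gtv, i_gtv)   # 3 shared / 5 total = 0.6
gmi = geographical_miss_index(gs_gtv, i_gtv)    # 1 missed / 4 = 0.25
```

    A JCI of 1.0 with a GMI of 0.0 would indicate a perfect match; the trial's median iGTV values (JCI 0.57, GMI 0.26) quantify the observed delineation variation.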

  7. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  8. Benchmarking Swiss electricity grids

    International Nuclear Information System (INIS)

    Walti, N.O.; Weber, Ch.

    2001-01-01

    This extensive article describes a pilot benchmarking project initiated by the Swiss Association of Electricity Enterprises that assessed 37 Swiss utilities. The data collected from these utilities on a voluntary basis included data on technical infrastructure, investments and operating costs. These various factors are listed and discussed in detail. The assessment methods and rating mechanisms that provided the benchmarks are discussed, and the results of the pilot study are presented; these are to form the basis of benchmarking procedures for the grid regulation authorities under Switzerland's planned electricity market law. Examples of the practical use of the benchmarking methods are given and cost-efficiency questions still open in the area of investment and operating costs are listed. Prefaces by the Swiss Association of Electricity Enterprises and the Swiss Federal Office of Energy complete the article.

  9. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    . The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  10. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This data compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings....

  11. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    .... The design of this study included two parts: (1) eleven expert panelists involved in a Delphi technique to identify and rate importance of foodservice performance measures and rate the importance of benchmarking activities, and (2...

  12. MFTF TOTAL benchmark

    International Nuclear Information System (INIS)

    Choy, J.H.

    1979-06-01

    A benchmark of the TOTAL data base management system as applied to the Mirror Fusion Test Facility (MFTF) data base was implemented and run in February and March of 1979. The benchmark was run on an Interdata 8/32 and involved the following tasks: (1) data base design, (2) data base generation, (3) data base load, and (4) develop and implement programs to simulate MFTF usage of the data base

  13. Accelerator shielding benchmark problems

    International Nuclear Information System (INIS)

    Hirayama, H.; Ban, S.; Nakamura, T.

    1993-01-01

    Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author)

  14. Shielding benchmark problems

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Kawai, Masayoshi; Nakazawa, Masaharu.

    1978-09-01

    Shielding benchmark problems were prepared by the Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design of the Atomic Energy Society of Japan, and compiled by the Shielding Laboratory of Japan Atomic Energy Research Institute. Twenty-one kinds of shielding benchmark problems are presented for evaluating the calculational algorithm and the accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method and for evaluating the nuclear data used in the codes. (author)

  15. Benchmarking infrastructure for mutation text mining.

    Science.gov (United States)

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
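In the infrastructure described above, SPARQL queries over RDF annotations compute the performance metrics; the metrics themselves reduce to set comparisons between predicted and gold annotations. A minimal Python sketch of that computation (the tuple representation is a hypothetical simplification for illustration, not the project's RDF schema):

```python
# Hedged sketch: the precision/recall/F1 metrics the record says are
# computed via SPARQL over RDF annotations, expressed here over plain
# Python sets. Annotation tuples (doc_id, mutation_id) are a hypothetical
# simplification of the annotated corpus.

def precision_recall_f1(predicted: set, gold: set):
    """Standard extraction metrics from predicted vs. gold annotations."""
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

if __name__ == "__main__":
    gold = {("doc1", "E545K"), ("doc1", "H1047R"), ("doc2", "V600E")}
    pred = {("doc1", "E545K"), ("doc2", "V600E"), ("doc2", "T790M")}
    p, r, f = precision_recall_f1(pred, gold)
    print(round(p, 3), round(r, 3), round(f, 3))  # 0.667 0.667 0.667
```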

  16. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600

  17. Supermarket Refrigeration System - Benchmark for Hybrid System Control

    DEFF Research Database (Denmark)

    Sloth, Lars Finn; Izadi-Zamanabadi, Roozbeh; Wisniewski, Rafal

    2007-01-01

    This paper presents a supermarket refrigeration system as a benchmark for the development of new ideas and the comparison of methods for hybrid systems' modeling and control. The benchmark features switch dynamics and discrete-valued inputs, making it a hybrid system; furthermore, the outputs are subjected...

  18. Parton Distribution Benchmarking with LHC Data

    NARCIS (Netherlands)

    Ball, Richard D.; Carrazza, Stefano; Debbio, Luigi Del; Forte, Stefano; Gao, Jun; Hartland, Nathan; Huston, Joey; Nadolsky, Pavel; Rojo, Juan; Stump, Daniel; Thorne, Robert S.; Yuan, C. -P.

    2012-01-01

    We present a detailed comparison of the most recent sets of NNLO PDFs from the ABM, CT, HERAPDF, MSTW and NNPDF collaborations. We compare parton distributions at low and high scales and parton luminosities relevant for LHC phenomenology. We study the PDF dependence of LHC benchmark inclusive cross

  19. Benchmarking electricity distribution

    Energy Technology Data Exchange (ETDEWEB)

    Watts, K. [Department of Justice and Attorney-General, QLD (Australia)

    1995-12-31

    Benchmarking has been described as a method of continuous improvement that involves an ongoing and systematic evaluation and incorporation of external products, services and processes recognised as representing best practice. It is a management tool similar to total quality management (TQM) and business process re-engineering (BPR), and is best used as part of a total package. This paper discusses benchmarking models and approaches and suggests a few key performance indicators that could be applied to benchmarking electricity distribution utilities. Some recent benchmarking studies are used as examples and briefly discussed. It is concluded that benchmarking is a strong tool to be added to the range of techniques that can be used by electricity distribution utilities and other organizations in search of continuous improvement, and that there is now a high level of interest in Australia. Benchmarking represents an opportunity for organizations to approach learning from others in a disciplined and highly productive way, which will complement the other micro-economic reforms being implemented in Australia. (author). 26 refs.

  20. Concrete benchmark experiment: ex-vessel LWR surveillance dosimetry; Experience ``Benchmark beton`` pour la dosimetrie hors cuve dans les reacteurs a eau legere

    Energy Technology Data Exchange (ETDEWEB)

    Ait Abderrahim, H.; D`Hondt, P.; Oeyen, J.; Risch, P.; Bioux, P.

    1993-09-01

    The analysis of DOEL-1 in-vessel and ex-vessel neutron dosimetry, using the DOT 3.5 Sn code coupled with the VITAMIN-C cross-section library, showed the same C/E values for different detectors at the surveillance capsule and the ex-vessel cavity positions. These results seem to be in contradiction with those obtained in several benchmark experiments (PCA, PSF, VENUS...) when using the same computational tools. Indeed, a strong decreasing radial trend of the C/E was observed, partly explained by the overestimation of the iron inelastic scattering. The flat trend seen in DOEL-1 could be explained by compensating errors in the calculation, such as the backscattering due to the concrete walls outside the cavity. The `Concrete Benchmark` experiment has been designed to judge the ability of these calculation methods to treat the backscattering. This paper describes the `Concrete Benchmark` experiment, the measured and computed neutron dosimetry results and their comparison. This preliminary analysis seems to indicate an overestimation of the backscattering effect in the calculations. (authors). 5 figs., 1 tab., 7 refs.

  1. Global two-channel AVHRR aerosol climatology: effects of stratospheric aerosols and preliminary comparisons with MODIS and MISR retrievals

    International Nuclear Information System (INIS)

    Geogdzhayev, Igor V.; Mishchenko, Michael I.; Liu Li; Remer, Lorraine

    2004-01-01

    We present an update on the status of the global climatology of the aerosol column optical thickness and Angstrom exponent derived from channel-1 and -2 radiances of the Advanced Very High Resolution Radiometer (AVHRR) in the framework of the Global Aerosol Climatology Project (GACP). The latest version of the climatology covers the period from July 1983 to September 2001 and is based on an adjusted value of the diffuse component of the ocean reflectance as derived from extensive comparisons with ship sun-photometer data. We use the updated GACP climatology and Stratospheric Aerosol and Gas Experiment (SAGE) data to analyze how stratospheric aerosols from major volcanic eruptions can affect the GACP aerosol product. One possible retrieval strategy based on the AVHRR channel-1 and -2 data alone is to infer both the stratospheric and the tropospheric aerosol optical thickness while assuming fixed microphysical models for both aerosol components. The second approach is to use the SAGE stratospheric aerosol data in order to constrain the AVHRR retrieval algorithm. We demonstrate that the second approach yields a consistent long-term record of the tropospheric aerosol optical thickness and Angstrom exponent. Preliminary comparisons of the GACP aerosol product with MODerate resolution Imaging Spectrometer (MODIS) and Multiangle Imaging Spectro-Radiometer aerosol retrievals show reasonable agreement, the GACP global monthly optical thickness being lower than the MODIS one by approximately 0.03. Larger differences are observed on a regional scale. Comparisons of the GACP and MODIS Angstrom exponent records are less conclusive and require further analysis

  2. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  3. A benchmark comparison of the Canadian Supercritical Water-Cooled Reactor (SCWR) 64-element fuel lattice cell parameters using various computer codes

    International Nuclear Information System (INIS)

    Sharpe, J.; Salaun, F.; Hummel, D.; Moghrabi, A.; Nowak, M.; Pencer, J.; Novog, D.; Buijs, A.

    2015-01-01

    Discrepancies in key lattice physics parameters have been observed between various deterministic (e.g. DRAGON and WIMS-AECL) and stochastic (MCNP, KENO) neutron transport codes in modeling previous versions of the Canadian SCWR lattice cell. Further, inconsistencies in these parameters have also been observed when using different nuclear data libraries. In this work, the predictions of k∞, various reactivity coefficients, and relative ring-averaged pin powers have been re-evaluated using these codes and libraries with the most recent 64-element fuel assembly geometry. A benchmark problem has been defined to quantify the dissimilarities between code results for a number of responses along the fuel channel under prescribed hot full power (HFP), hot zero power (HZP) and cold zero power (CZP) conditions and at several fuel burnups (0, 25 and 50 MW·d·kg⁻¹ [HM]). Results from deterministic (TRITON, DRAGON) and stochastic codes (MCNP6, KENO V.a and KENO-VI) are presented. (author)
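Code-to-code discrepancies in k∞ of the kind examined in this benchmark are conventionally expressed as reactivity differences in pcm (per cent mille). A minimal sketch, with purely illustrative k values rather than SCALOP or SCWR results:

```python
# Hedged sketch: quantifying code-to-code discrepancies in k-infinity as
# reactivity differences in pcm, a common way to compare lattice codes.
# The k values below are illustrative only.

def reactivity_pcm(k: float) -> float:
    """rho = (k - 1)/k, expressed in pcm (1e-5)."""
    return (k - 1.0) / k * 1e5

def code_difference_pcm(k_a: float, k_b: float) -> float:
    """Reactivity discrepancy between two codes for the same case."""
    return reactivity_pcm(k_a) - reactivity_pcm(k_b)

if __name__ == "__main__":
    k_mcnp, k_dragon = 1.30000, 1.29500   # illustrative values only
    print(round(code_difference_pcm(k_mcnp, k_dragon), 1))  # 297.0
```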

  4. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth.

  5. Benchmarking the Netherlands. Benchmarking for growth

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity

  6. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2009-03-01

    The purpose of this research is to provide a fundamental computational investigation into the possible integration of experimental activities with the Advanced Test Reactor Critical (ATR-C) facility with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of integral data for improving neutron cross sections. Further assessment of oscillation
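A dollar-denominated reactivity worth, as used for the 1$ insertion limit above, is the reactivity change divided by the effective delayed neutron fraction. A minimal sketch with illustrative numbers (the β_eff value and k values are assumptions for illustration, not taken from the report):

```python
# Hedged sketch: reactivity worth of an inserted material sample in
# dollars, the unit used for the 1$ insertion limit discussed above.
# beta_eff and the k values are illustrative assumptions.

def worth_dollars(k_ref: float, k_pert: float, beta_eff: float) -> float:
    """Change in rho = (k-1)/k between two configurations, in dollars."""
    rho_ref = (k_ref - 1.0) / k_ref
    rho_pert = (k_pert - 1.0) / k_pert
    return (rho_pert - rho_ref) / beta_eff

if __name__ == "__main__":
    beta = 0.0072                              # assumed delayed fraction
    w = worth_dollars(1.00000, 1.00180, beta)  # one sample bar inserted
    print(round(w, 2))                         # 0.25 (dollars per bar)
    print(int(1.0 // w))                       # 4 bars fit under the 1$ limit
```

Under these assumed numbers, roughly four bars stay below the 1$ limit, consistent with the "approximately four bars" noted in the record.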

  7. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2009-07-01

    The purpose of this document is to identify some suggested types of experiments that can be performed in the Advanced Test Reactor Critical (ATR-C) facility. A fundamental computational investigation is provided to demonstrate possible integration of experimental activities in the ATR-C with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of

  8. A flexible Monte Carlo tool for patient or phantom specific calculations: comparison with preliminary validation measurements

    Science.gov (United States)

    Davidson, S.; Cui, J.; Followill, D.; Ibbott, G.; Deasy, J.

    2008-02-01

    The Dose Planning Method (DPM) is one of several 'fast' Monte Carlo (MC) computer codes designed to produce an accurate dose calculation for advanced clinical applications. We have developed a flexible machine modeling process and validation tests for open-field and IMRT calculations. To complement the DPM code, a practical and versatile source model has been developed, whose parameters are derived from a standard set of planning system commissioning measurements. The primary photon spectrum and the spectrum resulting from the flattening filter are modeled by a Fatigue function, cut off by a multiplying Fermi function, which effectively regularizes the difficult energy spectrum determination process. Commonly used functions are applied to represent the off-axis softening, the increasing primary fluence with increasing angle ('the horn effect'), and the electron contamination. The patient-dependent aspect of the MC dose calculation utilizes the multi-leaf collimator (MLC) leaf sequence file exported from the treatment planning system DICOM output, coupled with the source model, to derive the particle transport. This model has been commissioned for Varian 2100C 6 MV and 18 MV photon beams using percent depth dose, dose profiles, and output factors. A 3-D conformal plan and an IMRT plan delivered to an anthropomorphic thorax phantom were used to benchmark the model. The calculated results were compared to Pinnacle v7.6c results and measurements made using radiochromic film and thermoluminescent detectors (TLD).
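The source-model construction described above (an analytic energy spectrum multiplied by a Fermi function that cuts it off at the nominal beam energy) can be sketched as follows. Since the abstract does not give the Fatigue function's exact form, an illustrative bremsstrahlung-like shape stands in for it; all parameter values are assumptions:

```python
import math

# Hedged sketch of the source-model idea: a spectrum shape multiplied by
# a Fermi function that smoothly cuts it off near the nominal beam
# energy. The abstract's "Fatigue function" form is not given, so an
# illustrative E^a * exp(-b*E) shape stands in for it.

def fermi_cutoff(e: float, e_max: float, width: float = 0.1) -> float:
    """Smooth multiplicative cutoff near the nominal energy e_max (MeV)."""
    return 1.0 / (1.0 + math.exp((e - e_max) / width))

def spectrum(e: float, e_max: float, a: float = 0.5, b: float = 0.4) -> float:
    """Unnormalized photon fluence per unit energy (illustrative shape)."""
    if e <= 0.0:
        return 0.0
    return (e ** a) * math.exp(-b * e) * fermi_cutoff(e, e_max)

if __name__ == "__main__":
    # 6 MV beam: the spectrum should fall off and be essentially zero
    # above the 6 MeV nominal energy.
    print(spectrum(2.0, 6.0) > spectrum(5.9, 6.0) > spectrum(7.0, 6.0))  # True
```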

  9. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Full Text Available Benchmarking is considered one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how the project's systematic implementation led to success.

  10. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  11. Shielding Benchmark Computational Analysis

    International Nuclear Information System (INIS)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-01-01

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for the development of more accurate cross-section libraries, computer code development of radiation transport modeling, and building accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper the benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC)

  12. Preliminary comparison of the Essie and PubMed search engines for answering clinical questions using MD on Tap, a PDA-based program for accessing biomedical literature.

    Science.gov (United States)

    Sutton, Victoria R; Hauser, Susan E

    2005-01-01

    MD on Tap, a PDA application that searches and retrieves biomedical literature, is specifically designed for use by mobile healthcare professionals. With the goal of improving the usability of the application, a preliminary comparison was made of two search engines (PubMed and Essie) to determine which provided the most efficient path to the desired clinically relevant information.

  13. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm......, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...

  14. PRELIMINARY STUDY TO PRIMARY EDUCATION FACILITIES (A Comparison Study between Indonesia and Developed Countries

    Directory of Open Access Journals (Sweden)

    Lucy Yosita

    2006-01-01

    Full Text Available This paper is a preliminary study of the condition of primary education facilities in Indonesia, comparing them with theories and various relevant cases in order to understand the problem more clearly. There are basic differences between primary education facilities in Indonesia and those in developed countries. The condition and completeness of education facilities is, on the other hand, a main factor in achieving the goals of the learning process. If building designs, interiors and site plans were more dynamic in form, space, colour and equipment, they would probably stimulate more activity and contribute more to the development of students. Further analysis is still required, for example of students' behaviour in the spaces of the learning environment, in more detail and over sufficient time, both indoors and outdoors.

  15. A comparison of the recruitment of antibody forming cells in the nose and lung: Preliminary findings

    Energy Technology Data Exchange (ETDEWEB)

    King-Herbert, A P; Bice, D E; Harkema, J R

    1988-12-01

    Instillation of a particulate antigen into a selected lung lobe leads to an accumulation of antibody forming cells in the exposed lung lobe. Our goal in this preliminary study was to determine if an immune response could be elicited in the nasal mucosa of Beagle dogs exposed to a particulate antigen, and if so, to compare this immune response with that of the lungs when the nasal mucosa and the lungs are each immunized with a different particulate antigen. An immune response was observed when the nasal mucosa was exposed to particulate antigen, but numbers of antibody-forming cells and levels of antibody in the nose were much lower than observed in an immunized lung lobe. (author)

  16. A comparison of the recruitment of antibody forming cells in the nose and lung: Preliminary findings

    International Nuclear Information System (INIS)

    King-Herbert, A.P.; Bice, D.E.; Harkema, J.R.

    1988-01-01

    Instillation of a particulate antigen into a selected lung lobe leads to an accumulation of antibody forming cells in the exposed lung lobe. Our goal in this preliminary study was to determine if an immune response could be elicited in the nasal mucosa of Beagle dogs exposed to a particulate antigen, and if so, to compare this immune response with that of the lungs when the nasal mucosa and the lungs are each immunized with a different particulate antigen. An immune response was observed when the nasal mucosa was exposed to particulate antigen, but numbers of antibody-forming cells and levels of antibody in the nose were much lower than observed in an immunized lung lobe. (author)

  17. Operating performance and environmental and safety risks: A preliminary comparison of majors and independents

    International Nuclear Information System (INIS)

    Pulsipher, A.G.; Iledare, W.O.; Baumann, R.H.; Mesyanzhinov, D.

    1995-01-01

    The objective is to compare the safety and environmental records of oil and gas companies operating on the OCS in the Gulf of Mexico over the past decade. The reason for doing so is to help inform public sector policy-makers and private sector decision-makers about the potential safety and environmental risks associated with the expected increased presence of smaller independents in the domestic oil and gas industry in general and on the federal OCS in particular. The preliminary conclusion is that although independents have had a modestly higher incidence of fires and explosions than the majors, the difference is not statistically significant and is largely attributable to a few ''bad actors'' rather than demonstrably poorer practice by the group as a whole

  18. Benchmark referencing of neutron dosimetry measurements

    International Nuclear Information System (INIS)

    Eisenhauer, C.M.; Grundl, J.A.; Gilliam, D.M.; McGarry, E.D.; Spiegel, V.

    1980-01-01

    The concept of benchmark referencing involves interpretation of dosimetry measurements in applied neutron fields in terms of similar measurements in benchmark fields whose neutron spectra and intensity are well known. The main advantage of benchmark referencing is that it minimizes or eliminates many types of experimental uncertainties such as those associated with absolute detection efficiencies and cross sections. In this paper we consider the cavity external to the pressure vessel of a power reactor as an example of an applied field. The pressure vessel cavity is an accessible location for exploratory dosimetry measurements aimed at understanding embrittlement of pressure vessel steel. Comparisons with calculated predictions of neutron fluence and spectra in the cavity provide a valuable check of the computational methods used to estimate pressure vessel safety margins for pressure vessel lifetimes

  19. MIPS bacterial genomes functional annotation benchmark dataset.

    Science.gov (United States)

    Tetko, Igor V; Brauner, Barbara; Dunger-Kaltenbach, Irmtraud; Frishman, Goar; Montrone, Corinna; Fobo, Gisela; Ruepp, Andreas; Antonov, Alexey V; Surmeli, Dimitrij; Mewes, Hans-Werner

    2005-05-15

    Any development of new methods for automatic functional annotation of proteins according to their sequences requires high-quality benchmark data as well as tedious preparatory work to generate the sequence parameters required as input for the machine-learning methods. Different program settings and incompatible protocols make a comparison of the analyzed methods difficult. The MIPS Bacterial Functional Annotation Benchmark dataset (MIPS-BFAB) is a new, high-quality resource comprising four bacterial genomes manually annotated according to the MIPS functional catalogue (FunCat). The resource includes precalculated sequence parameters, such as sequence similarity scores, InterPro domain composition and other parameters that can be used to develop and benchmark methods for functional annotation of bacterial protein sequences. These data are provided in XML format and can be used by scientists who are not necessarily experts in genome annotation. BFAB is available at http://mips.gsf.de/proj/bfab

  20. Benchmarking criticality safety calculations with subcritical experiments

    International Nuclear Information System (INIS)

    Mihalczo, J.T.

    1984-06-01

    Calculation of the neutron multiplication factor at delayed criticality may be necessary for benchmarking calculations but it may not be sufficient. The use of subcritical experiments to benchmark criticality safety calculations could result in substantial savings in fuel material costs for experiments. In some cases subcritical configurations could be used to benchmark calculations where sufficient fuel to achieve delayed criticality is not available. By performing a variety of measurements with subcritical configurations, much detailed information can be obtained which can be compared directly with calculations. This paper discusses several measurements that can be performed with subcritical assemblies and presents examples that include comparisons between calculation and experiment where possible. Where not, examples from critical experiments have been used but the measurement methods could also be used for subcritical experiments

  1. HPCG Benchmark Technical Specification

    Energy Technology Data Exchange (ETDEWEB)

    Heroux, Michael Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States)

    2013-10-01

    The High Performance Conjugate Gradient (HPCG) benchmark [cite SNL, UTK reports] is a tool for ranking computer systems based on a simple additive Schwarz, symmetric Gauss-Seidel preconditioned conjugate gradient solver. HPCG is similar to the High Performance Linpack (HPL), or TOP500, benchmark [1] in its purpose, but HPCG is intended to better represent how today’s applications perform. In this paper we describe the technical details of HPCG: how it is designed and implemented, what code transformations are permitted and how to interpret and report results.
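The numerical kernel named in the abstract can be sketched compactly. The following is an illustrative, single-node Python sketch of a symmetric Gauss-Seidel preconditioned conjugate gradient iteration, not HPCG's reference implementation (which distributes a 27-point-stencil problem across ranks under an additive Schwarz decomposition); the 1-D test matrix, sizes, and tolerance are arbitrary choices.

```python
import numpy as np

def sgs_apply(A, r):
    """Apply the symmetric Gauss-Seidel preconditioner M^-1 r,
    where M = (D + L) D^-1 (D + U) for A = L + D + U."""
    y = np.linalg.solve(np.tril(A), r)                  # forward sweep: (D + L) y = r
    return np.linalg.solve(np.triu(A), np.diag(A) * y)  # backward sweep: (D + U) z = D y

def pcg(A, b, tol=1e-10, maxit=200):
    """Preconditioned conjugate gradient for a symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = sgs_apply(A, r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = sgs_apply(A, r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test problem: 1-D Laplacian (a 3-point analogue of HPCG's stencil)
n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b)
```

HPCG scores systems on the sustained rate of exactly these operations (sparse matrix-vector products, triangular sweeps, dot products), which is why it stresses memory bandwidth where HPL stresses floating-point throughput.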

  2. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area…

  3. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...... not public. The survey is a cooperative project "Benchmarking Danish Industries" with CIP/Aalborg University, the Danish Technological University, the Danish Technological Institute and Copenhagen Business School as consortia partners. The project has been funded by the Danish Agency for Trade and Industry...

  4. RB reactor benchmark cores

    International Nuclear Information System (INIS)

    Pesic, M.

    1998-01-01

    A selected set of the RB reactor benchmark cores is presented in this paper. The first results of validation of the well-known Monte Carlo MCNP™ code and the accompanying neutron cross-section libraries are given. They confirm the idea behind the proposal of the new U-D2O criticality benchmark system and support the intention to include this system in the next edition of the recent OECD/NEA project, the International Handbook of Evaluated Criticality Safety Benchmark Experiments, in the near future. (author)

  5. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no-slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  6. A benchmark comparison of the Canadian Supercritical Water-Cooled Reactor (SCWR) 64-element fuel lattice cell parameters using various computer codes

    Energy Technology Data Exchange (ETDEWEB)

    Sharpe, J.; Salaun, F.; Hummel, D.; Moghrabi, A., E-mail: sharpejr@mcmaster.ca [McMaster University, Hamilton, ON (Canada); Nowak, M. [McMaster University, Hamilton, ON (Canada); Institut National Polytechnique de Grenoble, Phelma, Grenoble (France); Pencer, J. [McMaster University, Hamilton, ON (Canada); Canadian Nuclear Laboratories, Chalk River, ON, (Canada); Novog, D.; Buijs, A. [McMaster University, Hamilton, ON (Canada)

    2015-07-01

    Discrepancies in key lattice physics parameters have been observed between various deterministic (e.g. DRAGON and WIMS-AECL) and stochastic (MCNP, KENO) neutron transport codes in modeling previous versions of the Canadian SCWR lattice cell. Further, inconsistencies in these parameters have also been observed when using different nuclear data libraries. In this work, the predictions of k∞, various reactivity coefficients, and relative ring-averaged pin powers have been re-evaluated using these codes and libraries with the most recent 64-element fuel assembly geometry. A benchmark problem has been defined to quantify the dissimilarities between code results for a number of responses along the fuel channel under prescribed hot full power (HFP), hot zero power (HZP) and cold zero power (CZP) conditions and at several fuel burnups (0, 25 and 50 MW·d·kg⁻¹ [HM]). Results from deterministic (TRITON, DRAGON) and stochastic codes (MCNP6, KENO V.a and KENO-VI) are presented. (author)

  7. Quality in E-Learning--A Conceptual Framework Based on Experiences from Three International Benchmarking Projects

    Science.gov (United States)

    Ossiannilsson, E.; Landgren, L.

    2012-01-01

    Between 2008 and 2010, Lund University took part in three international benchmarking projects, "E-xcellence+," the "eLearning Benchmarking Exercise 2009," and the "First Dual-Mode Distance Learning Benchmarking Club." A comparison of these models revealed a rather high level of correspondence. From this finding and…

  8. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic lies in the concept of firm efficiency: a firm's efficiency is its revealed performance, i.e. how well the firm performs in its actual market environment given the basic characteristics of the firm and its market that are expected to drive profitability (firm size, market power, etc.). This complex and relative performance may stem from product innovation, management quality, or work organization, and other factors may contribute even if they are not directly observed by the researcher. Managers' critical need to continuously improve their company's efficiency and effectiveness, and to know the success factors and competitiveness determinants, determines in turn which performance measures are most critical to the firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking of firm-level performance are critically interdependent activities. Firm-level variables used to infer performance are often interdependent for operational reasons, so managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures, and uses econometric models to describe and then propose a method to forecast and benchmark performance.

  9. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  10. Comorbid psychiatric diagnoses in kleptomania and pathological gambling: a preliminary comparison study.

    Science.gov (United States)

    Dannon, Pinhas N; Lowengrub, Katherine; Sasson, Marina; Shalgi, Bosmat; Tuson, Lali; Saphir, Yafa; Kotler, Moshe

    2004-08-01

    Kleptomania and pathological gambling (PG) are currently classified in the DSM-IV as impulse control disorders. Impulse control disorders are characterized by an overwhelming temptation to perform an act that is harmful to the person or others. The patient usually feels a sense of tension before committing the act and then experiences pleasure or relief while in the process of performing the act. Kleptomania and PG are often associated with other comorbid psychiatric diagnoses. Forty-four pathological gamblers and 19 kleptomanics were included in this study. All enrolled patients underwent a complete diagnostic psychiatric evaluation and were examined for symptoms of depression and anxiety using the Hamilton depression rating scale and the Hamilton anxiety rating scale, respectively. In addition, the patients completed self-report questionnaires about their demographic status and addictive behavior. The comorbid lifetime diagnoses found at a high prevalence among our kleptomanic patients included 47% with affective disorders (9/19) and 37% with anxiety disorders (7/19). The comorbid lifetime diagnoses found at a high prevalence in our sample of pathological gamblers included 27% with affective disorders (12/44), 21% with alcohol abuse (9/44), and 7% with a history of substance abuse (3/44). A larger study is needed to confirm these preliminary results.

  11. The Art Gallery Test: A Preliminary Comparison between Traditional Neuropsychological and Ecological VR-Based Tests

    Directory of Open Access Journals (Sweden)

    Pedro Gamito

    2017-11-01

    Full Text Available Ecological validity should be the cornerstone of any assessment of cognitive functioning. For this purpose, we have developed a preliminary study to test the Art Gallery Test (AGT) as an alternative to traditional neuropsychological testing. The AGT involves three visual search subtests displayed in a virtual reality (VR) art gallery, designed to assess visual attention within an ecologically valid setting. To evaluate the relation between the AGT and standard neuropsychological assessment scales, data were collected on a normative sample of healthy adults (n = 30). The measures consisted of concurrent paper-and-pencil neuropsychological measures [Montreal Cognitive Assessment (MoCA), Frontal Assessment Battery (FAB), and Color Trails Test (CTT)] along with the outcomes from the three subtests of the AGT. The results showed significant correlations between the AGT subtests, which involve different visual search strategies, and global and specific cognitive measures. Comparative visual search was associated with attention and cognitive flexibility (CTT), whereas visual searches involving pictograms correlated with global cognitive function (MoCA).

  12. A preliminary diffusional kurtosis imaging study of Parkinson disease: comparison with conventional diffusion tensor imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kamagata, Koji; Kamiya, Kouhei; Suzuki, Michimasa; Hori, Masaaki; Yoshida, Mariko; Aoki, Shigeki [Juntendo University School of Medicine, Department of Radiology, Bunkyo-ku, Tokyo (Japan); Tomiyama, Hiroyuki; Hatano, Taku; Motoi, Yumiko; Hattori, Nobutaka [Juntendo University School of Medicine, Department of Neurology, Tokyo (Japan); Abe, Osamu [Nihon University School of Medicine, Department of Radiology, Tokyo (Japan); Shimoji, Keigo [National Center of Neurology and Psychiatry Hospital, Department of Radiology, Tokyo (Japan)

    2014-03-15

    Diffusional kurtosis imaging (DKI) is a more sensitive technique than conventional diffusion tensor imaging (DTI) for assessing tissue microstructure. In particular, it quantifies the microstructural integrity of white matter, even in the presence of crossing fibers. The aim of this preliminary study was to compare how DKI and DTI show white matter alterations in Parkinson disease (PD). DKI scans were obtained with a 3-T magnetic resonance imager from 12 patients with PD and 10 healthy controls matched by age and sex. Tract-based spatial statistics were used to compare the mean kurtosis (MK), mean diffusivity (MD), and fractional anisotropy (FA) maps of the PD patient group and the control group. In addition, a region-of-interest analysis was performed for the area of the posterior corona radiata and superior longitudinal fasciculus (SLF) fiber crossing. FA values in the frontal white matter were significantly lower in PD patients than in healthy controls. Reductions in MK occurred more extensively throughout the brain: in addition to frontal white matter, MK was lower in the parietal, occipital, and right temporal white matter. The MK value of the area of the posterior corona radiata and SLF fiber crossing was also lower in the PD group. DKI detects changes in the cerebral white matter of PD patients more sensitively than conventional DTI. In addition, DKI is useful for evaluating crossing fibers. By providing a sensitive index of brain pathology in PD, DKI may enable improved monitoring of disease progression. (orig.)

  13. Benchmarking of Percutaneous Injuries at the Ministry of Health Hospitals of Saudi Arabia in Comparison with the United States Hospitals Participating in Exposure Prevention Information Network (EPINet™)

    Directory of Open Access Journals (Sweden)

    ZA Memish

    2015-01-01

    Full Text Available Background: Exposure to blood-borne pathogens from needle-stick and sharps injuries continues to pose a significant risk to health care workers. These events are of concern because of the risk of transmitting blood-borne diseases such as hepatitis B virus, hepatitis C virus, and the human immunodeficiency virus. Objective: To benchmark the risk factors associated with needle-stick incidents among health care workers in Ministry of Health (MOH) hospitals in the Kingdom of Saudi Arabia against US hospitals participating in the Exposure Prevention Information Network (EPINet™). Methods: Prospective surveillance of needle-stick and sharps incidents was carried out during 2012 using EPINet™ ver. 1.5, which provides a uniform needle-stick and sharps-injury report form. Results: The annual rate of percutaneous injuries (PIs) was 3.2 per 100 occupied beds at the studied MOH hospitals. Nurses were the job category most affected by PIs (59.4%). Most PIs happened in patients' wards (34.6%). Disposable syringes were the most common cause of PIs (47.2%), and most PIs occurred during use of the syringe (36.4%). Conclusion: Among health care workers, nurses and physicians appear especially at risk of exposure to PIs. Important risk factors include working in patient rooms, using disposable syringes, and using devices without safety features. Preventive strategies are warranted, such as continuous training of health care workers (with special emphasis on nurses and physicians), encouragement of incident reporting, observation of sharps handling and use, and implementation of safety devices.

  14. Analysis of an OECD/NEA high-temperature reactor benchmark

    International Nuclear Information System (INIS)

    Hosking, J. G.; Newton, T. D.; Koeberl, O.; Morris, P.; Goluoglu, S.; Tombakoglu, T.; Colak, U.; Sartori, E.

    2006-01-01

    This paper describes analyses of the OECD/NEA HTR benchmark organized by the 'Working Party on the Scientific Issues of Reactor Systems (WPRS)', formerly the 'Working Party on the Physics of Plutonium Fuels and Innovative Fuel Cycles'. The benchmark was specifically designed to provide inter-comparisons for plutonium and thorium fuels when used in HTR systems. Calculations considering uranium fuel have also been included in the benchmark, in order to identify any increased uncertainties when using plutonium or thorium fuels. The benchmark consists of five phases, which include cell and whole-core calculations. Analysis of the benchmark has been performed by a number of international participants, who have used a range of deterministic and Monte Carlo code schemes. For each of the benchmark phases, neutronics parameters have been evaluated. Comparisons are made between the results of the benchmark participants, as well as comparisons between the predictions of the deterministic calculations and those from detailed Monte Carlo calculations. (authors)

  15. Comparison of preliminary D-T and ''catalyzed'' D-D system studies

    International Nuclear Information System (INIS)

    Usher, J.L.; Powell, J.R.; Fillo, J.A.; Lazareth, O.W.

    1976-01-01

    The purpose of the research currently underway is to provide technological and eventual economic comparison of a reference D-T reactor to a ''catalyzed'' D-D reactor. Two separate reactor designs are delineated and examined for this purpose. These systems include plasma parameters, blanket and shield configurations, magnetic coil configurations, and power conversion systems, including a divertor-direct convertor system for the D-D design. The initial conclusions reached are as follows: (a) no extraordinary requirements in the D-D reactor in the areas of blanket or magnet technology, (b) advantageous use of minimum activity blankets and shields, (c) increased overall efficiency via introduction of divertor-direct convertor subsystem in D-D design and (d) 65 percent increase in the toroidal radius of the D-D design compared to the D-T reference value

  16. Inter-individual relationships in proboscis monkeys: a preliminary comparison with other non-human primates.

    Science.gov (United States)

    Matsuda, Ikki; Tuuga, Augustine; Bernard, Henry; Furuichi, Takeshi

    2012-01-01

    This is the first report on inter-individual relationships within a one-male group of proboscis monkeys (Nasalis larvatus) based on detailed identification of individuals. From May 2005 to 2006, focal and ad libitum data of agonistic and grooming behaviour were collected in a forest along the Menanggul River, Sabah, Malaysia. During the study period, we collected over 1,968 h of focal data on the adult male and 1,539 h of focal data on the six females. Their social interactions, including agonistic and grooming behaviour, appeared to follow typical patterns reported for other colobines: the incidence of social interaction within groups is low. Of 39 agonistic events, 26 were displacement from sleeping places along the river, 6 were the α male threatening other monkeys to mediate quarrels between females and between females and juveniles, and 7 were displacement from feeding places. Although the agonistic behaviour matrix based on the 33 intra-group agonistic events (excluding events between adults and juveniles and between adults and infants) was indicative of non-significant linearity, there were some specific dominated individuals within the group of proboscis monkeys. Nonetheless, grooming behaviour among adult females within a group were not affected by the dominance hierarchy. This study also conducted initial comparisons of grooming patterns among proboscis monkeys and other primate species. On the basis of comparison of their grooming networks, similar grooming patterns among both-sex-disperse and male-philopatric/female-disperse species were detected. Because adult females in these species migrate to groups repeatedly, it may be difficult to establish the firm grooming exchange relationship for particular individuals within groups, unlike in female-philopatric/male-disperse species. However, grooming distribution patterns within groups among primate species were difficult to explain solely on the basis of their dispersal patterns. 
Newly immigrated females…

  17. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then describe in more detail what benchmarking is, taking four different applications of benchmarking as a starting point. The regulation of utility companies is then addressed, after which...

  18. Cloud benchmarking for performance

    OpenAIRE

    Varghese, Blesson; Akgun, Ozgur; Miguel, Ian; Thai, Long; Barker, Adam

    2014-01-01

    Date of Acceptance: 20/09/2014 How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. The above question is addressed by proposing a six step benchmarking methodology in which a user provides a set of four weights that indicate how important each of the following groups: memory, processor, computa...

  19. A Preliminary Comparison of Three Dimensional Particle Tracking and Sizing using Plenoptic Imaging and Digital In-line Holography

    Energy Technology Data Exchange (ETDEWEB)

    Guildenbecher, Daniel Robert; Munz, Elise Dahnke; Farias, Paul Abraham; Thurow, Brian S [Auburn U

    2015-12-01

    Digital in-line holography and plenoptic photography are two techniques for single-shot, volumetric measurement of 3D particle fields. Here we present a preliminary comparison of the two methods by applying plenoptic imaging to experimental configurations that have been previously investigated with digital in-line holography. These experiments include the tracking of secondary droplets from the impact of a water drop on a thin film of water and tracking of pellets from a shotgun. Both plenoptic imaging and digital in-line holography successfully quantify the 3D nature of these particle fields. This includes measurement of the 3D particle position, individual particle sizes, and three-component velocity vectors. For the initial processing methods presented here, both techniques give out-of-plane positional accuracy of approximately 1-2 particle diameters. For a fixed image sensor, digital holography achieves higher effective in-plane spatial resolutions. However, collimated and coherent illumination makes holography susceptible to image distortion through index of refraction gradients, as demonstrated in the shotgun experiments. On the other hand, plenoptic imaging allows for a simpler experimental configuration. Furthermore, due to the use of diffuse, white-light illumination, plenoptic imaging is less susceptible to image distortion in the shotgun experiments. Additional work is needed to better quantify sources of uncertainty, particularly in the plenoptic experiments, as well as develop data processing methodologies optimized for the plenoptic measurement.

  20. The challenge of benchmarking health systems

    OpenAIRE

    Lapão, Luís Velez

    2015-01-01

    WOS:000359623300001 PMID: 26301085 The article by Catan et al. presents a benchmarking exercise comparing Israel and Portugal on the implementation of Information and Communication Technologies in the healthcare sector. Special attention was given to e-Health and m-Health. The authors collected information via a set of interviews with key stakeholders. They compared two different cultures and societies, which have reached slightly different implementation outcomes. Although the comparison ...

  1. Benchmark testing calculations for 232Th

    International Nuclear Information System (INIS)

    Liu Ping

    2003-01-01

    The cross sections of 232Th from CNDC and JENDL-3.3 were processed with the NJOY97.45 code into the ACE format for the continuous-energy Monte Carlo code MCNP4C. The keff values and central reaction rates based on CENDL-3.0, JENDL-3.3 and ENDF/B-6.2 were calculated using the MCNP4C code for a benchmark assembly, and comparisons with experimental results are given. (author)

  2. Benchmark for Evaluating Moving Object Indexes

    DEFF Research Database (Denmark)

    Chen, Su; Jensen, Christian Søndergaard; Lin, Dan

    2008-01-01

    that targets techniques for the indexing of the current and near-future positions of moving objects. This benchmark enables the comparison of existing and future indexing techniques. It covers important aspects of such indexes that have not previously been covered by any benchmark. Notable aspects covered......Progress in science and engineering relies on the ability to measure, reliably and in detail, pertinent properties of artifacts under design. Progress in the area of database-index design thus relies on empirical studies based on prototype implementations of indexes. This paper proposes a benchmark...... include update efficiency, query efficiency, concurrency control, and storage requirements. Next, the paper applies the benchmark to half a dozen notable moving-object indexes, thus demonstrating the viability of the benchmark and offering new insight into the performance properties of the indexes....

  3. Comparison of attention training and cognitive therapy in the treatment of social phobia: a preliminary investigation.

    Science.gov (United States)

    Donald, Juliet; Abbott, Maree J; Smith, Evelyn

    2014-01-01

    Prominent models of social phobia highlight the role played by attentional factors, such as self-focused attention, in the development and maintenance of social phobia. Elevated self-focused attention is associated with increases in self-rated anxiety, so treatments that modify attentional processes, specifically self-focused attention, should have a direct effect on social phobia symptoms; Attention Training targets attentional focus in exactly this way. The present study aimed to investigate the efficacy of Attention Training in comparison to an established treatment for social phobia, Cognitive Therapy. Participants (intention-to-treat = 45; completers = 30) were allocated to either 6 weeks of Attention Training or Cognitive Therapy. It was hypothesized that both treatments would be effective in reducing social phobia symptoms, but that Attention Training would work primarily by reducing levels of self-focused attention. The results showed that both treatment conditions were effective in reducing social phobia symptoms. However, Attention Training significantly improved scores on the Self-Focused Attention questionnaire and the Brief Fear of Negative Evaluation questionnaire compared to Cognitive Therapy. Attention Training seems to be a promising treatment for social phobia.

  4. Thermal conductivity of silicic tuffs: predictive formalism and comparison with preliminary experimental results

    International Nuclear Information System (INIS)

    Lappin, A. R.

    1980-07-01

    Performance of both near- and far-field thermomechanical calculations to assess the feasibility of waste disposal in silicic tuffs requires a formalism for predicting the thermal conductivity of a broad range of tuffs. This report summarizes the available thermal conductivity data for silicate phases that occur in tuffs and describes several grain-density and conductivity trends which may be expected to result from post-emplacement alteration. A bounding curve is drawn that predicts the minimum theoretical matrix (zero-porosity) conductivity for most tuffs as a function of grain density. Comparison of experimental results with this curve shows that experimental conductivities are consistently lower at any given grain density. Use of the lowered bounding curve and an effective gas conductivity of 0.12 W/m·°C allows conservative prediction of conductivity for a broad range of tuff types. For the samples measured here, use of the predictive curve allows estimation of conductivity to within 15% or better, with one exception. Application and possible improvement of the formalism are also discussed.
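    The prediction idea (a zero-porosity matrix conductivity combined with an effective gas conductivity for the pore space) can be sketched as follows. The geometric-mean mixing rule and the example matrix conductivity are assumptions for illustration; the report's exact formalism is not reproduced here.

```python
# Effective gas conductivity from the abstract, W/(m*degC).
K_GAS = 0.12

def bulk_conductivity(k_matrix, porosity, k_gas=K_GAS):
    """Geometric-mean estimate of conductivity for a porous rock.

    k_matrix : zero-porosity (matrix) conductivity, W/(m*degC)
    porosity : pore volume fraction, 0..1

    The geometric-mean mixing rule is a common choice for porous media,
    used here only as an illustration of the prediction step.
    """
    return (k_matrix ** (1.0 - porosity)) * (k_gas ** porosity)

# Illustrative case: matrix conductivity 2.0 W/(m*degC) at 15% porosity.
print(round(bulk_conductivity(2.0, 0.15), 3))
```

Because the gas term is small, the predicted bulk value drops noticeably even at modest porosity, which is why a conservative (low) bounding curve for the matrix term keeps the overall prediction conservative.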

  5. A PWR Thorium Pin Cell Burnup Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.

    2000-05-01

    As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium-fueled benchmark comparison was made in this study between state-of-the-art codes, MOCUP (MCNP4B + ORIGEN2) and CASMO-4, for burnup calculations. The MOCUP runs were done individually at MIT and INEEL, using the same model but with some differences in techniques and cross section libraries. Eigenvalue and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2% and the average absolute difference is less than 1%. The isotope concentration comparisons are better than a set of MOX fuel benchmarks and comparable to a set of uranium fuel benchmarks reported in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code-to-code differences are analyzed and discussed.
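    The eigenvalue comparison metrics quoted above (maximum and average absolute difference between two codes' k curves over burnup) can be computed as in the sketch below; the burnup-point k values are illustrative placeholders, not data from the study.

```python
def compare_eigenvalues(k_code_a, k_code_b):
    """Return (max %, mean %) absolute relative difference between two
    eigenvalue curves sampled at the same burnup points; k_code_b is
    taken as the reference."""
    diffs = [abs(a - b) / b * 100.0 for a, b in zip(k_code_a, k_code_b)]
    return max(diffs), sum(diffs) / len(diffs)

# Illustrative k-inf values at four burnup points (placeholders only):
k_mocup = [1.3512, 1.2104, 1.1033, 1.0125]
k_casmo4 = [1.3489, 1.2150, 1.1101, 1.0180]

max_diff, avg_diff = compare_eigenvalues(k_mocup, k_casmo4)
print(f"max |dk/k| = {max_diff:.2f}%, mean |dk/k| = {avg_diff:.2f}%")
```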

  6. Preliminary study of micro nutrient intake comparison of elementary school children on holiday and schooldays

    International Nuclear Information System (INIS)

    Widya Dwi Ariyani; K Oginawati; Muhayatun; Endah Damastuti; Syukria Kurniawati

    2010-01-01

    Dietary pattern influences nutritional status. In this activity, we compared the micro nutrient intake of elementary school children (7-12 years) resulting from differences in dietary pattern between holidays and schooldays, since most of their time is spent at school, where the tendency to consume snacks is usually higher. A comparison of the dietary pattern and daily micro nutrient intake of elementary school children on holidays and schooldays was therefore carried out. Food sampling was done by the duplicate diet method for 3 consecutive days, one of which was a holiday. The concentrations of micro nutrient elements were measured using instrumental neutron activation analysis (INAA) and atomic absorption spectrometry (AAS). There was a significant difference in the daily intake of Na, K, Ca, Fe, and Cr between holidays and schooldays, while for Br, Mg, Zn, Mn, Cu, Se and Co there was no significant difference. The largest difference was in sodium intake, with an average daily intake of 2578 mg/day on schooldays versus 1298 mg/day on holidays, caused by the larger number of high-sodium snacks consumed on schooldays. However, the daily micro nutrient intakes obtained on both schooldays and holidays were generally below the RDA (Recommended Dietary Allowance), except for Na and Cr. It is expected that these results can serve as information on the nutritional status of children as the next generation, in support of developing high-quality human resources. (author)

  7. Comparison of brain connectivity between Internet gambling disorder and Internet gaming disorder: A preliminary study.

    Science.gov (United States)

    Bae, Sujin; Han, Doug Hyun; Jung, Jaebum; Nam, Ki Chun; Renshaw, Perry F

    2017-12-01

    Background and aims Given the similarities in clinical symptoms, Internet gaming disorder (IGD) is thought to be diagnostically similar to Internet-based gambling disorder (ibGD). However, cognitive enhancement and educational use of Internet gaming suggest that the two disorders derive from different neurobiological mechanisms. The goal of this study was to compare subjects with ibGD to those with IGD. Methods Fifteen patients with IGD, 14 patients with ibGD, and 15 healthy control subjects were included in this study. Resting-state functional magnetic resonance imaging data for all participants were acquired using a 3.0 Tesla MRI scanner (Philips, Eindhoven, The Netherlands). Seed-based analyses of three brain networks (default mode, cognitive control, and reward circuitry) were performed. Results Both IGD and ibGD groups demonstrated decreased functional connectivity (FC) within the default-mode network (DMN) (family-wise error p < .001) compared with healthy control subjects. However, the IGD group demonstrated increased FC within the cognitive network compared with both the ibGD (p < .01) and healthy control groups (p < .01). In contrast, the ibGD group demonstrated increased FC within the reward circuitry compared with both IGD (p < .01) and healthy control subjects (p < .01). Discussion and conclusions The IGD and ibGD groups shared the characteristic of decreased FC in the DMN. However, the IGD group demonstrated increased FC within the cognitive network compared with both ibGD and healthy comparison groups.

  8. Benchmarking reference services: an introduction.

    Science.gov (United States)

    Marshall, J G; Buchanan, H S

    1995-01-01

    Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.

  9. Issues in benchmarking human reliability analysis methods: A literature review

    International Nuclear Information System (INIS)

    Boring, Ronald L.; Hendrickson, Stacey M.L.; Forester, John A.; Tran, Tuan Q.; Lois, Erasmia

    2010-01-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  10. Issues in benchmarking human reliability analysis methods : a literature review.

    Energy Technology Data Exchange (ETDEWEB)

    Lois, Erasmia (US Nuclear Regulatory Commission); Forester, John Alan; Tran, Tuan Q. (Idaho National Laboratory, Idaho Falls, ID); Hendrickson, Stacey M. Langfitt; Boring, Ronald L. (Idaho National Laboratory, Idaho Falls, ID)

    2008-04-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  11. Aerodynamic benchmarking of the DeepWind design

    DEFF Research Database (Denmark)

    Bedon, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective of the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize the blade solicitation and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts. The blade shape is considered as a fixed parameter...

  12. Benchmarking HIV health care

    DEFF Research Database (Denmark)

    Podlekareva, Daria; Reekie, Joanne; Mocroft, Amanda

    2012-01-01

    ABSTRACT: BACKGROUND: State-of-the-art care involving the utilisation of multiple health care interventions is the basis for an optimal long-term clinical prognosis for HIV-patients. We evaluated health care for HIV-patients based on four key indicators. METHODS: Four indicators of health care we...... document pronounced regional differences in adherence to guidelines and can help to identify gaps and direct target interventions. It may serve as a tool for assessment and benchmarking the clinical management of HIV-patients in any setting worldwide....

  13. Benchmarking Cloud Storage Systems

    OpenAIRE

    Wang, Xing

    2014-01-01

    With the rise of cloud computing, many cloud storage systems like Dropbox, Google Drive and Mega have been built to provide decentralized and reliable file storage. It is thus of prime importance to know their features, performance, and the best way to make use of them. In this context, we introduce BenchCloud, a tool designed as part of this thesis to conveniently and efficiently benchmark any cloud storage system. First, we provide a study of six commonly-used cloud storage systems to ident...

  14. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    An infrastructure is emerging that enables the positioning of populations of on-line, mobile service users. In step with this, research in the management of moving objects has attracted substantial attention. In particular, quite a few proposals now exist for the indexing of moving objects... The proposed benchmark takes into account that the available positions of the moving objects are inaccurate, an aspect largely ignored in previous indexing research. The concepts of data and query enlargement are introduced for addressing inaccuracy. As a proof of concept of the benchmark, the paper covers the application...

  15. Preliminary Compositional Comparisons of H-Chondrite Falls to Antarctic H-Chondrite Populations

    Science.gov (United States)

    Kallemeyn, G. W.; Krot, A. N.; Rubin, A. E.

    1993-07-01

    In a series of papers [e.g., 1,2], Lipschutz and co-workers compared trace-element RNAA data from Antarctic and non-Antarctic H4-6 chondrites and concluded that the two populations have significantly different concentrations of several trace elements, including Co, Se, and Sb. They interpreted their data as indicating that these Antarctic H chondrites form different populations than observed H falls and may have originated in separate parent bodies. Recent work by Sears and co-workers [e.g., 3] has shown that there seem to be distinct populations of Antarctic H chondrites, distinguishable on the bases of induced thermoluminescence (TL) peak temperature, metallographic cooling rate, and cosmic ray exposure age. They showed that a group of Antarctic H chondrites having abnormally high induced TL peak temperatures (>=190 degrees C) also has short cosmic ray exposure ages (mostly ~8 Ma) and fast metallographic cooling rates (~100 K/Ma). Another group, with lower induced TL peak temperatures, has exposure ages over 20 Ma and slower cooling rates (~10-20 K/Ma). We studied 24 H4-6 chondrites from Victoria Land (including 12 previously analyzed by the Lipschutz group) by optical microscopy and electron microprobe. Many of the Antarctic H chondrites studied by Lipschutz and co-workers are unsuitable for proper compositional comparisons with H chondrite falls: four are very weathered, five are extensively shocked, and two are extensively brecciated. Furthermore, at least five of the samples contain solar-wind gas (and hence are regolith breccias) [4]. These samples were rejected because of possible compositional modification by secondary processes. For our INAA study we chose a suite of relatively unweathered and unbrecciated Antarctic H chondrites (including nine from the Lipschutz set): ALHA 77294 (H5, S3); ALHA 79026 (H5, S3); ALHA 79039 (H5, S3); ALHA 80131 (H5, S3); ALHA 80132 (H5, S4); ALHA 81037 (H6, S3); EETA 79007 (H5, S4); LEW 85320 (H6, S4); LEW 85329 (H6, S3); RKPA 78002 (H5, S2); and RKPA

  16. Benchmarking multimedia performance

    Science.gov (United States)

    Zandi, Ahmad; Sudharsanan, Subramania I.

    1998-03-01

    With the introduction of faster processors and special instruction sets tailored to multimedia, a number of exciting applications are now feasible on the desktop. Among these is DVD playback, consisting, among other things, of MPEG-2 video and Dolby Digital audio or MPEG-2 audio. Other multimedia applications such as video conferencing and speech recognition are also becoming popular on computer systems. In view of this tremendous interest in multimedia, a group of major computer companies has formed the Multimedia Benchmarks Committee as part of the Standard Performance Evaluation Corp. to address the performance issues of multimedia applications. The approach is multi-tiered, with three tiers of fidelity from minimal to fully compliant. In each case the fidelity of the bitstream reconstruction as well as the quality of the video or audio output is measured, and the system is classified accordingly. At the next step the performance of the system is measured. Many multimedia applications, such as DVD playback, need to run at a specific rate; in this case, the measurement of the excess processing power makes all the difference. All this makes a system-level, application-based multimedia benchmark very challenging. Several ideas and methodologies for each aspect of the problem will be presented and analyzed.
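    The "excess processing power" measurement described above can be sketched as follows: when an application must sustain a fixed rate (as in DVD playback), the benchmark figure of interest is how much of each frame budget is left unused. The per-frame workload and frame rate below are placeholders, not a real decoder.

```python
import time

FRAME_BUDGET = 1.0 / 30.0  # assumed 30 frames-per-second playback rate

def frame_workload():
    # Placeholder for per-frame decode/render work.
    sum(i * i for i in range(20000))

def measure_headroom(frames=60):
    """Return the mean fraction of the frame budget left unused,
    i.e. the 'excess processing power' available at the required rate."""
    leftover = 0.0
    for _ in range(frames):
        start = time.perf_counter()
        frame_workload()
        used = time.perf_counter() - start
        leftover += max(0.0, FRAME_BUDGET - used) / FRAME_BUDGET
    return leftover / frames

print(f"mean headroom: {measure_headroom():.1%}")
```

A headroom near zero means the system barely keeps the required rate; a large headroom is the excess capacity the abstract argues a rate-constrained benchmark should report.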

  17. Benchmarking Using Basic DBMS Operations

    Science.gov (United States)

    Crolotte, Alain; Ghazal, Ahmad

    The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used these benchmarks to show the superiority and competitive edge of their products. Over time, however, TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and compare two systems. Finally, we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
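    A minimal sketch of this kind of micro-benchmark is shown below, timing a scan, an aggregation, and a join. SQLite and the toy two-table schema are stand-ins for illustration; XMarq's actual 25 queries and its TPC-H-based data model are not reproduced here.

```python
import sqlite3
import time

def timed(conn, label, sql):
    """Run a query, discard the rows, and report wall-clock time."""
    start = time.perf_counter()
    conn.execute(sql).fetchall()
    elapsed = time.perf_counter() - start
    print(f"{label:12s} {elapsed * 1000:8.2f} ms")
    return elapsed

# Toy schema and data (placeholders, not the TPC-H data model).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT)")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, cust INTEGER, amount REAL)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(i, f"region{i % 5}") for i in range(1000)])
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 1000, float(i % 97)) for i in range(50000)])

timed(conn, "scan",        "SELECT * FROM orders")
timed(conn, "aggregation", "SELECT cust, SUM(amount) FROM orders GROUP BY cust")
timed(conn, "join",        "SELECT c.region, SUM(o.amount) FROM orders o "
                           "JOIN customers c ON o.cust = c.id GROUP BY c.region")
```

Timing each basic operation in isolation, rather than a vendor-tuned composite workload, is exactly the design choice the paper argues for.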

  18. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts to... It is also a contribution to the discussions within the EU-sponsored BEST Thematic Network (Benchmarking European Sustainable Transport), which ran from 2000 to 2003.

  19. Benchmarking in Czech Higher Education

    OpenAIRE

    Plaček Michal; Ochrana František; Půček Milan

    2015-01-01

    The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expression of the need for it and the importance of benchmarking as a very suitable performance-management tool in less developed countries are the impetus for the second part of our article. Base...

  20. Power reactor pressure vessel benchmarks

    International Nuclear Information System (INIS)

    Rahn, F.J.

    1978-01-01

    A review is given of the current status of experimental and calculational benchmarks for use in understanding the radiation embrittlement effects in the pressure vessels of operating light water power reactors. The requirements of such benchmarks for application to pressure vessel dosimetry are stated. Recent developments in active and passive neutron detectors sensitive in the ranges of importance to embrittlement studies are summarized and recommendations for improvements in the benchmark are made. (author)

  1. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

    The CASL (Consortium for Advanced Simulation of LWRs) neutronics simulator MPACT is under development for neutronics and T-H coupled simulation of pressurized water reactors. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. Validating the depletion capability is a challenge because of insufficient measured data. One indirect way to validate it is to perform a code-to-code comparison on benchmark problems. In this study a depletion benchmark suite has been developed, and a detailed guideline is provided to obtain meaningful computational outcomes that can be used in the validation of the MPACT depletion capability.

  2. Benchmarking Academic Anatomic Pathologists

    Directory of Open Access Journals (Sweden)

    Barbara S. Ducatman MD

    2016-10-01

    The most common benchmarks for faculty productivity are derived from the Medical Group Management Association (MGMA) or Vizient-AAMC Faculty Practice Solutions Center® (FPSC) databases. The Association of Pathology Chairs has also collected similar survey data for several years. We examined the Association of Pathology Chairs annual faculty productivity data and compared it with MGMA and FPSC data to understand the value, inherent flaws, and limitations of benchmarking data. We hypothesized that the variability in calculated faculty productivity is due to the type of practice model and clinical effort allocation. Data from the Association of Pathology Chairs survey on 629 surgical pathologists and/or anatomic pathologists from 51 programs were analyzed. From review of service assignments, we were able to assign each pathologist to a specific practice model: general anatomic pathologists/surgical pathologists, 1 or more subspecialties, or a hybrid of the 2 models. There were statistically significant differences among academic ranks and practice types. When we analyzed our data using each organization's methods, the median results for the anatomic pathologists/surgical pathologists general practice model compared to MGMA and FPSC results for anatomic and/or surgical pathology were quite close. Both MGMA and FPSC data exclude a significant proportion of academic pathologists with clinical duties. We used the more inclusive FPSC definition of clinical "full-time faculty" (0.60 clinical full-time equivalent and above). The correlation between clinical full-time equivalent effort allocation, annual days on service, and annual work relative value unit productivity was poor. This study demonstrates that effort allocations are variable across academic departments of pathology and do not correlate well with either work relative value unit effort or reported days on service. Although the Association of Pathology Chairs–reported median work relative

  3. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in cleanrooms. This guide is primarily intended for personnel who have responsibility for managing energy use in existing cleanroom facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, cleanroom planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  4. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  5. Shielding benchmark test

    International Nuclear Information System (INIS)

    Kawai, Masayoshi

    1984-01-01

    Iron data in JENDL-2 have been tested by analyzing shielding benchmark experiments on neutron transmission through iron blocks, performed at KFK using a Cf-252 neutron source and at ORNL using a collimated neutron beam from a reactor. The analyses are made with the shielding analysis code system RADHEAT-V4 developed at JAERI. The calculated results are compared with the measured data. For the KFK experiments, the C/E values are about 1.1. For the ORNL experiments, the calculated values agree with the measured data within an accuracy of 33% for the off-center geometry. The D-T neutron transmission measurements through a carbon sphere made at LLNL are also analyzed preliminarily using the revised JENDL data for fusion neutronics calculations. (author)

  6. Benchmarking monthly homogenization algorithms

    Science.gov (United States)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
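    Two of the performance metrics described above, the centered root mean square error of a homogenized series against the true homogeneous series and the error in linear trend estimates, can be sketched in plain Python under their usual definitions; the series below are illustrative, not data from the HOME benchmark.

```python
def crmse(homogenized, truth):
    """Centered RMSE: RMSE computed after removing each series' mean,
    so a constant offset between the series does not count as error."""
    mh = sum(homogenized) / len(homogenized)
    mt = sum(truth) / len(truth)
    sq = [((h - mh) - (t - mt)) ** 2 for h, t in zip(homogenized, truth)]
    return (sum(sq) / len(sq)) ** 0.5

def linear_trend(series):
    """Ordinary least-squares slope per time step."""
    n = len(series)
    mx = (n - 1) / 2
    my = sum(series) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(series))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

truth = [0.1 * i for i in range(10)]                # true trend: 0.1 per step
homogenized = [0.1 * i + (0.05 if i > 4 else 0.0)   # small residual break
               for i in range(10)]

print("CRMSE:", round(crmse(homogenized, truth), 4))
print("trend error:", round(linear_trend(homogenized) - linear_trend(truth), 4))
```

Reporting both metrics matters because, as the study found, an algorithm can score well on one (small centered errors) while still distorting the other (the long-term trend).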

  7. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  8. SSI and structural benchmarks

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Miller, C.A.; Costantino, C.J.; Graves, H.

    1987-01-01

    This paper presents the latest results of the ongoing program entitled, Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focussed on three tasks, namely, (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it has been concluded that the pore water can influence significantly the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was further extended to include cases associated with variable water table depths. In this paper, results related to cut-off depths beyond which the pore water effects can be ignored in seismic calculations, are addressed. Comprehensive numerical data are given for soil configurations typical to those encountered in nuclear plant sites. These data were generated by using a modified version of the SLAM code which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054) which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on the structural benchmarks are described

  9. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    was its software assurance practices, which seemed to rate well in comparison to the other organizational groups and also seemed to include a larger scope of activities. An unexpected benefit of the software benchmarking study was the identification of many opportunities for collaboration in areas including metrics, training, sharing of CMMI experiences and resources such as instructors and CMMI Lead Appraisers, and even sharing of assets such as documented processes. A further unexpected benefit of the study was the feedback on NASA practices that was received from some of the organizations interviewed. From that feedback, other potential areas where NASA could improve were highlighted, such as accuracy of software cost estimation and budgetary practices. The detailed report contains discussion of the practices noted in each of the topic areas, as well as a summary of observations and recommendations from each of the topic areas. The resulting 24 recommendations from the topic areas were then consolidated to eliminate duplication and culled into a set of 14 suggested actionable recommendations. This final set of actionable recommendations, listed below, comprises items that can be implemented to improve NASA's software engineering practices and to help address many of the items that were listed in the NASA top software engineering issues. 1. Develop and implement standard contract language for software procurements. 2. Advance accurate and trusted software cost estimates for both procured and in-house software and improve the capture of actual cost data to facilitate further improvements. 3. Establish a consistent set of objectives and expectations, specifically types of metrics at the Agency level, so key trends and models can be identified and used to continuously improve software processes and each software development effort. 4. 
Maintain the CMMI Maturity Level requirement for critical NASA projects and use CMMI to measure organizations developing software for NASA. 5

  10. Review for session K - benchmarks

    International Nuclear Information System (INIS)

    McCracken, A.K.

    1980-01-01

    Eight of the papers to be considered in Session K are directly concerned, at least in part, with the Pool Critical Assembly (P.C.A.) benchmark at Oak Ridge. The remaining seven papers in this session, the subject of this review, are concerned with a variety of topics related to the general theme of Benchmarks and will be considered individually.

  11. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  12. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth
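The Denton (1971) method referenced in the abstract can be sketched numerically. The following Python sketch (my own illustration, not the author's code) implements the additive first-difference variant: it adjusts a quarterly indicator so that each year sums to its annual benchmark while changing the quarter-to-quarter movement as little as possible, via the KKT system of the constrained least-squares problem.

```python
import numpy as np

def denton_additive(indicator, benchmarks, agg=4):
    """Additive first-difference Denton benchmarking (illustrative sketch).

    Adjusts a high-frequency `indicator` so that each block of `agg`
    periods sums to the corresponding low-frequency benchmark, while
    minimizing the change in period-to-period movement.
    """
    s = np.asarray(indicator, dtype=float)
    b = np.asarray(benchmarks, dtype=float)
    n, m = len(s), len(b)
    # First-difference operator D (row t computes x[t+1] - x[t])
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
    # Aggregation matrix A: row j sums the agg periods of benchmark year j
    A = np.kron(np.eye(m), np.ones(agg))
    # KKT system of:  min ||D (x - s)||^2   subject to   A x = b
    Q = 2.0 * D.T @ D
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([Q @ s, b])
    return np.linalg.solve(K, rhs)[:n]

# A flat quarterly indicator forced to hit annual totals of 44 and 40:
adjusted = denton_additive([10.0] * 8, [44.0, 40.0])
```

The annual constraints hold exactly, and the adjustment is spread smoothly across quarters rather than concentrated at year boundaries.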

  13. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to p...

  14. EPA's Benchmark Dose Modeling Software

    Science.gov (United States)

    The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors facilitate applying benchmark dose (BMD) methods to EPA’s human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...
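As an illustration of the BMD approach that BMDS automates (a hedged sketch, not EPA's implementation): fit a dose-response model to quantal data, then invert it at a chosen benchmark response (BMR). The quantal-linear model form is standard in BMD work, but the data below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical dichotomous dose-response data (illustrative only)
dose = np.array([0.0, 10.0, 50.0, 100.0, 200.0])
frac = np.array([0.02, 0.10, 0.30, 0.50, 0.80])  # observed response fractions

def quantal_linear(d, g, beta):
    """P(d) = g + (1 - g) * (1 - exp(-beta * d)); g is background risk."""
    return g + (1.0 - g) * (1.0 - np.exp(-beta * d))

(g_hat, beta_hat), _ = curve_fit(quantal_linear, dose, frac,
                                 p0=[0.02, 0.005], bounds=([0, 0], [1, 1]))

# For this model the extra risk at dose d is 1 - exp(-beta * d),
# so the benchmark dose at a given BMR inverts in closed form:
BMR = 0.10
bmd = -np.log(1.0 - BMR) / beta_hat
print(f"BMD at {BMR:.0%} extra risk: {bmd:.1f}")
```

Real BMDS workflows additionally compute a lower confidence limit (BMDL) and compare candidate model fits, which this sketch omits.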

  15. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  16. Benchmarking: A Process for Improvement.

    Science.gov (United States)

    Peischl, Thomas M.

    One problem with the outcome-based measures used in higher education is that they measure quantity but not quality. Benchmarking, or the use of some external standard of quality to measure tasks, processes, and outputs, can partially solve that difficulty. Benchmarking allows for the establishment of a systematic process to indicate if outputs…

  17. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  18. Comparison of ear-canal reflectance and umbo velocity in patients with conductive hearing loss: a preliminary study.

    Science.gov (United States)

    Nakajima, Hideko H; Pisano, Dominic V; Roosli, Christof; Hamade, Mohamad A; Merchant, Gabrielle R; Mahfoud, Lorice; Halpin, Christopher F; Rosowski, John J; Merchant, Saumil N

    2012-01-01

    The goal of the present study was to investigate the clinical utility of measurements of ear-canal reflectance (ECR) in a population of patients with conductive hearing loss in the presence of an intact, healthy tympanic membrane and an aerated middle ear. We also sought to compare the diagnostic accuracy of umbo velocity (VU) measurements and measurements of ECR in the same group of patients. This prospective study comprised 31 adult patients with conductive hearing loss, of whom 14 had surgically confirmed stapes fixation due to otosclerosis, 6 had surgically confirmed ossicular discontinuity, and 11 had computed tomography and vestibular evoked myogenic potential confirmed superior semicircular canal dehiscence (SCD). Measurements on all 31 ears included pure-tone audiometry for 0.25 to 8 kHz, ECR for 0.2 to 6 kHz using the Mimosa Acoustics HearID system, and VU for 0.3 to 6 kHz using the HLV-1000 laser Doppler vibrometer (Polytec Inc, Waldbronn, Germany). We analyzed power reflectance |ECR| as well as the absorbance level = 10 × log10(1 - |ECR|). All measurements were made before any surgical intervention. The VU and ECR data were plotted against normative data obtained in a companion study of 58 strictly defined normal ears. Small increases in |ECR| at low-to-mid frequencies (400-1000 Hz) were observed in cases with stapes fixation, while narrowband decreases were seen for both SCD and ossicular discontinuity. The SCD and ossicular discontinuity differed in that the SCD had smaller decreases at mid-frequency (∼1000 Hz), whereas ossicular discontinuity had larger decreases at lower frequencies (500-800 Hz). SCD tended to have less air-bone gap at high frequencies (1-4 kHz) compared with stapes fixation and ossicular discontinuity. The |ECR| measurements, in conjunction with audiometry, could successfully separate 28 of the 31 cases into the three pathologies. By comparison, VU measurements, in conjunction with audiometry, could successfully separate
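The absorbance-level transformation the study uses is straightforward to reproduce. A minimal sketch of the stated formula, absorbance level = 10 × log10(1 − |ECR|), where |ECR| is the power reflectance:

```python
import numpy as np

def absorbance_level_db(power_reflectance):
    """Absorbance level in dB, per the study's definition:
    10 * log10(1 - |ECR|), with power reflectance |ECR| in [0, 1)."""
    r = np.asarray(power_reflectance, dtype=float)
    return 10.0 * np.log10(1.0 - r)
```

For example, a power reflectance of 0.9 (90% of incident power reflected) maps to an absorbance level of −10 dB, and a perfectly absorbing ear (reflectance 0) maps to 0 dB.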

  19. Current status and results of the PBMR -Pebble Box- benchmark within the framework of the IAEA CRP5 - 341

    International Nuclear Information System (INIS)

    Reitsma, F.; Tyobeka, B.

    2010-01-01

    The verification and validation of computer codes used in the analysis of high temperature gas cooled pebble bed reactor systems has not been an easy goal to achieve. A limited number of tests and operating reactor measurements are available. Code-to-code comparisons for realistic pebble bed reactor designs often exhibit differences that are difficult to explain and are often blamed on the complexity of the core models or the variety of analysis methods and cross section data sets employed. For this reason, within the framework of the IAEA CRP5, the 'Pebble Box' benchmark was formulated as a simple way to compare various treatments of neutronics phenomena. The problem comprises six test cases which were designed to investigate the treatments and effects of leakage and heterogeneity. This paper presents the preliminary results of the benchmark exercise as received during the CRP and suggests possible future steps towards the resolution of discrepancies between the results. Although few participants took part in the benchmarking exercise, the results presented here show that there is still a need for further evaluation and in-depth understanding in order to build the confidence that all the different methods, codes and cross-section data sets have the capability to handle the various neutronics effects for such systems. (authors)

  20. Benchmarking of thermalhydraulic loop models for lead-alloy-cooled advanced nuclear energy systems. Phase I: Isothermal forced convection case

    International Nuclear Information System (INIS)

    2012-06-01

    Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Scientific Issues of the Fuel Cycle (WPFC) has been established to co-ordinate scientific activities regarding various existing and advanced nuclear fuel cycles, including advanced reactor systems, associated chemistry and flowsheets, development and performance of fuel and materials and accelerators and spallation targets. The WPFC has different expert groups to cover a wide range of scientific issues in the field of the nuclear fuel cycle. The Task Force on Lead-Alloy-Cooled Advanced Nuclear Energy Systems (LACANES) was created in 2006 to study thermal-hydraulic characteristics of heavy liquid metal coolant loops. The objectives of the task force are to (1) validate thermal-hydraulic loop models for application to LACANES design analysis in participating organisations, by benchmarking with a set of well-characterised lead-alloy coolant loop test data, (2) establish guidelines for quantifying thermal-hydraulic modelling parameters related to friction and heat transfer by lead-alloy coolant and (3) identify specific issues, either in modelling and/or in loop testing, which need to be addressed via possible future work. Nine participants from seven different institutes participated in the first phase of the benchmark. This report provides details of the benchmark specifications, the method and code characteristics, and the results of the preliminary study (pressure loss coefficients) and of Phase I. A comparison and analysis of the results will be performed together with those of Phase II.
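The friction and form-loss parameters being benchmarked follow standard hydraulics relations. A minimal sketch (the segment geometry and lead-bismuth density below are illustrative values, not LACANES data):

```python
def darcy_pressure_loss(rho, v, length, diameter, f):
    """Darcy-Weisbach friction loss: dp = f * (L/D) * rho * v**2 / 2  [Pa]."""
    return f * (length / diameter) * 0.5 * rho * v ** 2

def fitting_pressure_loss(rho, v, K):
    """Form (local) loss from a loss coefficient K: dp = K * rho * v**2 / 2."""
    return K * 0.5 * rho * v ** 2

# Hypothetical loop segment with lead-bismuth eutectic (density ~10500 kg/m3),
# 1 m/s flow through 10 m of 50 mm pipe with friction factor 0.02:
dp = darcy_pressure_loss(rho=10500.0, v=1.0, length=10.0, diameter=0.05, f=0.02)
print(f"friction pressure loss ~ {dp / 1000.0:.1f} kPa")
```

Loop models sum such friction and fitting losses around the circuit; the benchmark compares how participants quantify f and K for heavy liquid metal coolant.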

  1. Results of the benchmark for blade structural models, part A

    DEFF Research Database (Denmark)

    Lekou, D.J.; Chortis, D.; Belen Fariñas, A.

    2013-01-01

    A benchmark on structural design methods for blades was performed within the InnWind.Eu project under WP2 “Lightweight Rotor” Task 2.2 “Lightweight structural design”. The present document describes the results of the comparison simulation runs that were performed by the partners involved within...... Task 2.2 of the InnWind.Eu project. The benchmark is based on the reference wind turbine and the reference blade provided by DTU [1]. "Structural Concept developers/modelers" of WP2 were provided with the necessary input for a comparison numerical simulation run, upon definition of the reference blade...

  2. Benchmarking multi-dimensional large strain consolidation analyses

    International Nuclear Information System (INIS)

    Priestley, D.; Fredlund, M.D.; Van Zyl, D.

    2010-01-01

    Analyzing the consolidation of tailings slurries and dredged fills requires a more extensive formulation than is used for common (small strain) consolidation problems. Large strain consolidation theories have traditionally been limited to 1-D formulations. SoilVision Systems has developed the capacity to analyze large strain consolidation problems in 2 and 3-D. The benchmarking of such formulations is not a trivial task. This paper presents several examples of modeling large strain consolidation in the beta versions of the new software. These examples were taken from the literature and were used to benchmark the large strain formulation used by the new software. The benchmarks reported here are: a comparison to the consolidation software application CONDES0, Townsend's Scenario B and a multi-dimensional analysis of long-term column tests performed on oil sands tailings. All three of these benchmarks were attained using the SVOffice suite. (author)

  3. JENDL-4.0 benchmarking for fission reactor applications

    International Nuclear Information System (INIS)

    Chiba, Go; Okumura, Keisuke; Sugino, Kazuteru; Nagaya, Yasunobu; Yokoyama, Kenji; Kugo, Teruhiko; Ishikawa, Makoto; Okajima, Shigeaki

    2011-01-01

    Benchmark testing for the newly developed Japanese evaluated nuclear data library JENDL-4.0 is carried out by using a huge amount of integral data. Benchmark calculations are performed with a continuous-energy Monte Carlo code and with the deterministic procedure, which has been developed for fast reactor analyses in Japan. Through the present benchmark testing using a wide range of benchmark data, significant improvement in the performance of JENDL-4.0 for fission reactor applications is clearly demonstrated in comparison with the former library JENDL-3.3. Much more accurate and reliable prediction for neutronic parameters for both thermal and fast reactors becomes possible by using the library JENDL-4.0. (author)

  4. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Volume 1, Third comparison with 40 CFR 191, Subpart B

    Energy Technology Data Exchange (ETDEWEB)

    1992-12-01

    Before disposing of transuranic radioactive wastes in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments of the WIPP for the DOE to provide interim guidance while preparing for final compliance evaluations. This volume contains an overview of WIPP performance assessment and a preliminary comparison with the long-term requirements of the Environmental Radiation Protection Standards for Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B).

  5. Benchmarking school nursing practice: the North West Regional Benchmarking Group

    OpenAIRE

    Littler, Nadine; Mullen, Margaret; Beckett, Helen; Freshney, Alice; Pinder, Lynn

    2016-01-01

    It is essential that the quality of care is reviewed regularly through robust processes such as benchmarking to ensure all outcomes and resources are evidence-based so that children and young people’s needs are met effectively. This article provides an example of the use of benchmarking in school nursing practice. Benchmarking has been defined as a process for finding, adapting and applying best practices (Camp, 1994). This concept was first adopted in the 1970s ‘from industry where it was us...

  6. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Science.gov (United States)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique which can be adjusted to target varying physics is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium, in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests was performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to both reproduce behaviour from established and widely used codes as well as results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
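The Jacobian-free Newton-Krylov idea at the heart of MUSIC's implicit time integration can be illustrated on a toy stiff system (a sketch using SciPy's `newton_krylov`, not the MUSIC code itself): the solver needs only residual evaluations and never assembles a Jacobian, approximating Jacobian-vector products by finite differences inside the Krylov iteration.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Illustrative stiff ODE (not a MUSIC equation set): fast relaxation of u[0]
# toward 1 plus a weak nonlinear coupling to u[1].
def rhs(u):
    return np.array([-1000.0 * (u[0] - 1.0) + u[1] ** 2,
                     -u[1] * u[0]])

def backward_euler_step(u_old, dt):
    # Residual of the implicit update u_new = u_old + dt * rhs(u_new).
    # newton_krylov solves residual(u_new) = 0 matrix-free.
    def residual(u_new):
        return u_new - u_old - dt * rhs(u_new)
    return newton_krylov(residual, u_old, f_tol=1e-6)

# One large implicit step, far beyond any explicit stability limit:
u = backward_euler_step(np.array([0.0, 1.0]), dt=1.0)
```

In a production code like MUSIC the residual comes from the discretized hydrodynamics equations and a physics-based preconditioner accelerates the inner Krylov solve; the structure of the outer loop is the same.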

  7. Monte Carlo benchmarking: Validation and progress

    International Nuclear Information System (INIS)

    Sala, P.

    2010-01-01

    Document available in abstract form only. Full text of publication follows: Calculational tools for radiation shielding at accelerators are faced with new challenges from the present and next generations of particle accelerators. All the details of particle production and transport play a role when dealing with huge power facilities, therapeutic ion beams, radioactive beams and so on. Besides the traditional calculations required for shielding, activation predictions have become an increasingly critical component. Comparison and benchmarking with experimental data is obviously mandatory in order to build up confidence in the computing tools, and to assess their reliability and limitations. Thin target particle production data are often the best tools for understanding the predictive power of individual interaction models and improving their performance. Complex benchmarks (e.g. thick target data, deep penetration, etc.) are invaluable in assessing the overall performance of calculational tools when all ingredients are put to work together. A review of the validation procedures of Monte Carlo tools will be presented with practical and real-life examples. The interconnections among benchmarks, model development and impact on shielding calculations will be highlighted. (authors)

  8. Space Weather Action Plan Solar Radio Burst Phase 1 Benchmarks and the Steps to Phase 2

    Science.gov (United States)

    Biesecker, D. A.; White, S. M.; Gopalswamy, N.; Black, C.; Love, J. J.; Pierson, J.

    2017-12-01

    Solar radio bursts, when at the right frequency and when strong enough, can interfere with radar, communication, and tracking signals. In severe cases, radio bursts can inhibit the successful use of radio communications and disrupt a wide range of systems that are reliant on Position, Navigation, and Timing services on timescales ranging from minutes to hours across wide areas on the dayside of Earth. The White House's Space Weather Action Plan asked for solar radio burst intensity benchmarks for an event occurrence frequency of 1 in 100 years and also a theoretical maximum intensity benchmark. The benchmark team has developed preliminary (phase 1) benchmarks for the VHF (30-300 MHz), UHF (300-3000 MHz), GPS (1176-1602 MHz), F10.7 (2800 MHz), and Microwave (4000-20000 MHz) bands. The preliminary benchmarks were derived based on previously published work. Limitations in the published work will be addressed in phase 2 of the benchmark process. In addition, deriving theoretical maxima, where that is even possible, requires additional work in order to meet the Action Plan objectives. In this presentation, we will present the phase 1 benchmarks, the basis used to derive them, and the limitations of that work. We will also discuss the work that needs to be done to complete the phase 2 benchmarks.
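A 1-in-100-year intensity benchmark of the kind described is commonly estimated by fitting an extreme-value distribution to annual maxima and reading off the 100-year return level. A hedged sketch on synthetic data (the intensities below are randomly generated placeholders, not the phase 1 values):

```python
import numpy as np
from scipy import stats

# Hypothetical annual-maximum burst intensities (arbitrary flux units),
# generated only to demonstrate the return-level calculation.
rng = np.random.default_rng(0)
annual_max = rng.gumbel(loc=1e4, scale=5e3, size=60)

# Fit a Gumbel extreme-value distribution to the annual maxima; the
# 1-in-100-year level is the quantile with annual exceedance probability 1/100.
loc, scale = stats.gumbel_r.fit(annual_max)
benchmark_100yr = stats.gumbel_r.ppf(1.0 - 1.0 / 100.0, loc=loc, scale=scale)
print(f"1-in-100-year intensity benchmark ~ {benchmark_100yr:.0f}")
```

Real benchmark derivations must also grapple with short observational records, censored data, and instrument saturation, which is part of the phase 2 work the abstract describes.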

  9. 2 D 1/2 graphical benchmark

    International Nuclear Information System (INIS)

    Brochard, P.; Colin De Verdiere, G.; Nomine, J.P.; Perros, J.P.

    1993-01-01

    Within the framework of the development of a new version of the Psyche software, the author reports a benchmark study of different graphical libraries and systems and of the Psyche application. The author outlines the current context of development of graphical tools, which still lacks standardisation. This makes the comparison somewhat limited and ultimately tied to the envisaged applications. The author presents the various systems and libraries, the test principles, and the characteristics of the machines. Results and interpretations are then presented with reference to the problems encountered.

  10. Benchmarking Nuclear Power Plants

    International Nuclear Information System (INIS)

    Jakic, I.

    2016-01-01

    One of the main tasks an owner has is to keep its business competitive in the market while delivering its product. Being the owner of a nuclear power plant bears the same (or even more complex and stern) responsibility due to safety risks and costs. In the past, nuclear power plant managements could (partly) ignore profit, or it was simply expected and to some degree assured through the various regulatory processes governing electricity rate design. It is obvious now that, with deregulation, utility privatization and a competitive electricity market, the key measures of success used at nuclear power plants must include traditional metrics of a successful business (return on investment, earnings and revenue generation) as well as those of plant performance, safety and reliability. In order to analyze the business performance of a (specific) nuclear power plant, benchmarking, as a well-established concept and usual method, was used. The domain was conservatively designed, with a well-adjusted framework, but the results still have limited application due to many differences, gaps and uncertainties. (author).

  11. Virtual machine performance benchmarking.

    Science.gov (United States)

    Langer, Steve G; French, Todd

    2011-10-01

    The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever-tightening reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High performance requirements need to be carefully considered if they are to be executed in an environment where the running software has to execute through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising.
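The metrics listed (memory bandwidth, floating point throughput, and so on) can be probed with simple timed micro-benchmarks run identically on the physical host and inside the guest. A minimal Python sketch, not the authors' benchmark suite:

```python
import time
import numpy as np

def time_one(fn):
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

def best_of(fn, repeats=3):
    """Best-of-N wall time; the minimum damps scheduler/hypervisor noise."""
    return min(time_one(fn) for _ in range(repeats))

def memory_bandwidth_gbs(n_mb=128):
    buf = np.zeros(n_mb * 1024 * 1024 // 8)   # float64 working set
    dt = best_of(lambda: buf.copy())
    return 2 * buf.nbytes / dt / 1e9          # a copy moves read + write traffic

def float_throughput_gflops(n=2_000_000):
    a, b = np.random.rand(n), np.random.rand(n)
    dt = best_of(lambda: a * b + a)           # ~2 floating point ops per element
    return 2 * n / dt / 1e9

print(f"memory bandwidth ~ {memory_bandwidth_gbs():.1f} GB/s")
print(f"float throughput ~ {float_throughput_gflops():.1f} GFLOP/s")
```

Comparing the numbers from the bare-metal run against the virtualized run exposes the overhead of the extra device-driver layers the abstract describes; disk and network bandwidth need analogous timed transfers.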

  12. AER benchmark specification sheet

    International Nuclear Information System (INIS)

    Aszodi, A.; Toth, S.

    2009-01-01

    In the VVER-440/213 type reactors, the core outlet temperature field is monitored with in-core thermocouples, which are installed above 210 fuel assemblies. These measured temperatures are used in determination of the fuel assembly powers and they have important role in the reactor power limitation. For these reasons, correct interpretation of the thermocouple signals is an important question. In order to interpret the signals in correct way, knowledge of the coolant mixing in the assembly heads is necessary. Computational Fluid Dynamics (CFD) codes and experiments can help to understand better these mixing processes and they can provide information which can support the more adequate interpretation of the thermocouple signals. This benchmark deals with the 3D CFD modeling of the coolant mixing in the heads of the profiled fuel assemblies with 12.2 mm rod pitch. Two assemblies of the 23rd cycle of the Paks NPP's Unit 3 are investigated. One of them has symmetrical pin power profile and another possesses inclined profile. (authors)

  13. AER Benchmark Specification Sheet

    International Nuclear Information System (INIS)

    Aszodi, A.; Toth, S.

    2009-01-01

    In the WWER-440/213 type reactors, the core outlet temperature field is monitored with in-core thermocouples, which are installed above 210 fuel assemblies. These measured temperatures are used in determination of the fuel assembly powers and they have important role in the reactor power limitation. For these reasons, correct interpretation of the thermocouple signals is an important question. In order to interpret the signals in correct way, knowledge of the coolant mixing in the assembly heads is necessary. Computational fluid dynamics codes and experiments can help to understand better these mixing processes and they can provide information which can support the more adequate interpretation of the thermocouple signals. This benchmark deals with the 3D computational fluid dynamics modeling of the coolant mixing in the heads of the profiled fuel assemblies with 12.2 mm rod pitch. Two assemblies of the twenty-third cycle of the Paks NPP's Unit 3 are investigated. One of them has symmetrical pin power profile and another possesses inclined profile. (Authors)

  14. Benchmarking in academic pharmacy departments.

    Science.gov (United States)

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    Benchmarking in academic pharmacy, and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.

  15. Benchmarking - a validation of UTDefect

    International Nuclear Information System (INIS)

    Niklasson, Jonas; Bostroem, Anders; Wirdelius, Haakan

    2006-06-01

    New and stronger demands on reliability of used NDE/NDT procedures and methods have stimulated the development of simulation tools for NDT. Modelling of ultrasonic non-destructive testing is useful for a number of reasons, e.g. physical understanding, parametric studies and in the qualification of procedures and personnel. The traditional way of qualifying a procedure is to generate a technical justification by employing experimental verification of the chosen technique. The manufacturing of test pieces is often very expensive and time consuming. It also tends to introduce a number of possible misalignments between the actual NDT situation and the proposed experimental simulation. The UTDefect computer code (SUNDT/simSUNDT) has been developed, together with the Dept. of Mechanics at Chalmers Univ. of Technology, during a decade and simulates the entire ultrasonic testing situation. A thoroughly validated model can serve as an alternative and a complement to the experimental work in order to reduce the extensive cost. The validation can be accomplished by comparisons with other models, but ultimately by comparisons with experiments. This project addresses the last alternative but provides an opportunity to, in a later stage, compare with other software when all data are made public and available. The comparison has been with experimental data from an international benchmark study initiated by the World Federation of NDE Centers. The experiments have been conducted with planar and spherically focused immersion transducers. The defects considered are side-drilled holes, flat-bottomed holes, and a spherical cavity. The data from the experiments are a reference signal used for calibration (the signal from the front surface of the test block at normal incidence) and the raw output from the scattering experiment. In all, more than forty cases have been compared. 
The agreement between UTDefect and the experiments was in general good (deviation less than 2 dB) when the
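The 2 dB agreement criterion compares signal amplitudes on a logarithmic scale. A minimal sketch of the conversion (my illustration, not part of UTDefect):

```python
import math

def amplitude_deviation_db(a_model, a_experiment):
    """Deviation between simulated and measured echo amplitudes in dB:
    20 * log10(A_model / A_experiment)."""
    return 20.0 * math.log10(a_model / a_experiment)
```

Under this convention a deviation of 2 dB corresponds to an amplitude ratio of roughly 1.26, so "less than 2 dB" means the simulated and measured amplitudes agree to within about a quarter of their magnitude.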

  16. HyspIRI Low Latency Concept and Benchmarks

    Science.gov (United States)

    Mandl, Dan

    2010-01-01

    Topics include HyspIRI low latency data ops concept, HyspIRI data flow, ongoing efforts, experiment with Web Coverage Processing Service (WCPS) approach to injecting new algorithms into SensorWeb, low fidelity HyspIRI IPM testbed, compute cloud testbed, open cloud testbed environment, Global Lambda Integrated Facility (GLIF) and OCC collaboration with Starlight, delay tolerant network (DTN) protocol benchmarking, and EO-1 configuration for preliminary DTN prototype.

  17. The implementation of benchmarking process in marketing education services by Ukrainian universities

    OpenAIRE

    G.V. Okhrimenko

    2016-01-01

    The aim of the article. The consideration of theoretical and practical aspects of benchmarking at universities is the main task of this research. First, the researcher identified the essence of benchmarking: it involves comparing the characteristics of a college or university with those of the leading competitors in the industry and copying proven designs. Benchmarking tries to eliminate the fundamental problem of comparison – the impossibility of being better than the one from whom the solution is borrowed. B...

  18. Benchmarking and energy management schemes in SMEs

    Energy Technology Data Exchange (ETDEWEB)

    Huenges Wajer, Boudewijn [SenterNovem (Netherlands); Helgerud, Hans Even [New Energy Performance AS (Norway); Lackner, Petra [Austrian Energy Agency (Austria)

    2007-07-01

    Many companies are reluctant to focus on energy management or to invest in energy efficiency measures. Nevertheless, there are many good examples proving that the right approach to implementing energy efficiency can very well be combined with the business priorities of most companies. SMEs in particular can benefit from a facilitated European approach because they normally lack the resources and time to invest in energy efficiency. In the EU-supported pilot project BESS, 60 SMEs from the food and drink industries in 11 European countries successfully tested a package of interactive instruments offering such a facilitated approach. A number of pilot companies show a profit increase of 3 to 10%. The package includes a user-friendly, web-based e-learning scheme for implementing energy management as well as a benchmarking module for company-specific comparison of energy performance indicators. Moreover, it has several practical and tested tools to support the cycle of continuous improvement of energy efficiency in the company, such as checklists, sector-specific measure lists, and templates for auditing and energy conservation plans. An important feature, and also a key trigger for companies, is the possibility for SMEs to benchmark their energy situation anonymously against others in the same sector. SMEs can participate in a unique web-based benchmarking system that fully guarantees the confidentiality and safety of company data. Furthermore, the available data can contribute to a bottom-up approach supporting the objectives of (national) monitoring and targeting, thereby also contributing to the EU Energy Efficiency and Energy Services Directive. A follow-up project to expand the number of participating SMEs across various sectors is currently being developed.

  19. Issues in Benchmark Metric Selection

    Science.gov (United States)

    Crolotte, Alain

    It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
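    The arithmetic-versus-geometric-mean issue can be illustrated with a small numerical sketch. The per-query timings below are hypothetical, invented for illustration, and not taken from any TPC-D result:

```python
import math

# Hypothetical per-query times (seconds) for two runs of a decision-support
# benchmark. Run B speeds one cheap query up enormously but slows another down.
run_a = [10.0, 10.0, 10.0, 10.0]
run_b = [40.0, 10.0, 10.0, 0.25]

def arithmetic_mean(times):
    """Tracks total elapsed work: sum of the times divided by the query count."""
    return sum(times) / len(times)

def geometric_mean(times):
    """Rewards relative (multiplicative) improvement on any single query."""
    return math.exp(sum(math.log(t) for t in times) / len(times))
```

    For run_b the arithmetic mean rises to 15.0625 s (more total work than run_a's 10 s), while the geometric mean falls to about 5.62 s. A geometric-mean metric can therefore reward a change that makes the workload slower overall, which is the kind of incentive distortion debated for TPC-D.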

  20. A Heterogeneous Medium Analytical Benchmark

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1999-01-01

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane-parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels of coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy).

  1. A Global Vision over Benchmarking Process: Benchmarking Based Enterprises

    OpenAIRE

    Sitnikov, Catalina; Giurca Vasilescu, Laura

    2008-01-01

    Benchmarking uses the knowledge and the experience of others to improve the enterprise. Starting from the analysis of performance and underlining the strengths and weaknesses of the enterprise, it should be assessed what must be done in order to improve its activity. Using benchmarking techniques, an enterprise looks at how processes in the value chain are performed. The approach based on the vision “from the whole towards the parts” (a fragmented image of the enterprise’s value chain) redu...

  2. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking...... applications. The present study analyses voluntary benchmarking in a public setting that is oriented towards learning. The study contributes by showing how benchmarking can be mobilised for learning and offers evidence of the effects of such benchmarking for performance outcomes. It concludes that benchmarking...... can enable learning in public settings but that this requires actors to invest in ensuring that benchmark data are directed towards improvement....

  3. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking...... as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advances. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards...... the conditions upon which the market mechanism is performing within organizations. This paper aims to contribute to research by providing more insight to the conditions for the use of external benchmarking as an element in performance management in organizations. Our study explores a particular type of external...

  4. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    In order to learn from the best: in 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable...... tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all, it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly, ‘sustainable transport......’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly, policies are not directly comparable across space and context. For these reasons attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which...

  5. Benchmarking in pathology: development of a benchmarking complexity unit and associated key performance indicators.

    Science.gov (United States)

    Neil, Amanda; Pfeffer, Sally; Burnett, Leslie

    2013-01-01

    This paper details the development of a new type of pathology laboratory productivity unit, the benchmarking complexity unit (BCU). The BCU provides a comparative index of laboratory efficiency, regardless of test mix. It also enables estimation of a measure of how much complex pathology a laboratory performs, and the identification of peer organisations for the purposes of comparison and benchmarking. The BCU is based on the theory that wage rates reflect productivity at the margin. A weighting factor for the ratio of medical to technical staff time was dynamically calculated based on actual participant site data. Given this weighting, a complexity value for each test, at each site, was calculated. The median complexity value (number of BCUs) for that test across all participating sites was taken as its complexity value for the Benchmarking in Pathology Program. The BCU allowed implementation of an unbiased comparison unit and test listing that was found to be a robust indicator of the relative complexity for each test. Employing the BCU data, a number of Key Performance Indicators (KPIs) were developed, including three that address comparative organisational complexity, analytical depth and performance efficiency, respectively. Peer groups were also established using the BCU combined with simple organisational and environmental metrics. The BCU has enabled productivity statistics to be compared between organisations. The BCU corrects for differences in test mix and workload complexity of different organisations and also allows for objective stratification into peer groups.
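    The BCU construction described above (a wage-ratio weighting applied to staff time per test, with the median across sites taken as the test's complexity value) might be sketched as follows. The staff times, test names, and the fixed weighting `w` are all invented for illustration; in the actual program the weighting was calculated dynamically from participant site data:

```python
import statistics

# Assumed medical-to-technical wage-rate weighting (hypothetical value;
# the program derived this dynamically from actual participant data).
w = 3.0

# Hypothetical per-site staff times: test -> [(medical_min, technical_min), ...]
site_times = {
    "lipid_panel":  [(1.0, 5.0), (0.5, 6.0), (1.5, 4.0)],
    "biopsy_histo": [(20.0, 30.0), (25.0, 28.0), (18.0, 35.0)],
}

def complexity(medical, technical, weight):
    """Weighted staff-time cost of performing one test at one site."""
    return weight * medical + technical

def benchmark_complexity_units(data, weight):
    """Median per-test complexity value across all participating sites."""
    return {test: statistics.median(complexity(m, t, weight) for m, t in times)
            for test, times in data.items()}

bcu = benchmark_complexity_units(site_times, w)
```

    With these invented numbers, the complex histology test receives a far higher BCU value than the routine chemistry test, so a laboratory's total BCU count reflects workload complexity rather than raw test volume.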

  6. Benchmarking: contexts and details matter.

    Science.gov (United States)

    Zheng, Siyuan

    2017-07-05

    Benchmarking is an essential step in the development of computational tools. We take this opportunity to pitch in our opinions on tool benchmarking, in light of two correspondence articles published in Genome Biology. Please see related Li et al. and Newman et al. correspondence articles: www.dx.doi.org/10.1186/s13059-017-1256-5 and www.dx.doi.org/10.1186/s13059-017-1257-4.

  7. Handbook of critical experiments benchmarks

    International Nuclear Information System (INIS)

    Durst, B.M.; Bierman, S.R.; Clayton, E.D.

    1978-03-01

    Data from critical experiments have been collected together for use as benchmarks in evaluating calculational techniques and nuclear data. These benchmarks have been selected from the numerous experiments performed on homogeneous plutonium systems. No attempt has been made to reproduce all of the data that exists. The primary objective in the collection of these data is to present representative experimental data defined in a concise, standardized format that can easily be translated into computer code input

  8. Analysis of Benchmark 2 results

    International Nuclear Information System (INIS)

    Bacha, F.; Lefievre, B.; Maillard, J.; Silva, J.

    1994-01-01

    The code GEANT315 has been compared to different codes in two benchmarks. We analyze its performances through our results, especially in the thick target case. In spite of gaps in nucleus-nucleus interaction theories at intermediate energies, benchmarks allow possible improvements of physical models used in our codes. Thereafter, a scheme of radioactive waste burning system is studied. (authors). 4 refs., 7 figs., 1 tab

  9. 42 CFR 440.335 - Benchmark-equivalent health benefits coverage.

    Science.gov (United States)

    2010-10-01

    ...) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate... planning services and supplies and other appropriate preventive services, as designated by the Secretary... State for purposes of comparison in establishing the aggregate actuarial value of the benchmark...

  10. Benchmarking survey for recycling.

    Energy Technology Data Exchange (ETDEWEB)

    Marley, Margie Charlotte; Mizner, Jack Harry

    2005-06-01

    This report describes the methodology, analysis and conclusions of a comparison survey of recycling programs at ten Department of Energy sites including Sandia National Laboratories/New Mexico (SNL/NM). The goal of the survey was to compare SNL/NM's recycling performance with that of other federal facilities, and to identify activities and programs that could be implemented at SNL/NM to improve recycling performance.

  11. Experiments on crushed salt consolidation with true triaxial testing device as a contribution to an EC Benchmark exercise

    International Nuclear Information System (INIS)

    Korthaus, E.

    1998-10-01

    The description of a Benchmark laboratory test on crushed salt consolidation is given that was performed twice with the true triaxial testing device developed by INE. The test was defined as an anisothermal hydrostatic multi-step test, with six creeping periods, and 45 days total duration. In the repetition test, an additional technique was applied for the first time in order to further reduce wall friction effects in the triaxial device. In both tests the sample strains were measured with high precision, allowing a reliable determination of the consolidation rates during the creeping periods. Changes in consolidation rates during load reductions were used to calculate the stress exponent of the constitutive model. Elastic compression moduli were determined at three compaction stages in the first test with the use of fast stress changes. The test results are compared with the model calculations performed by INE before the test under the Benchmark project. A preliminary comparison of the test results with those of the other participants is given. The comparison of the results of both tests shows that wall friction has only a moderate effect in the measurements with the true triaxial device. (orig.) [de
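    Extracting a stress exponent from the change in consolidation rate across a load reduction can be sketched under a standard power-law (Norton-type) creep assumption. The rates and stresses below are illustrative values, not measurements from the test:

```python
import math

def stress_exponent(rate1, rate2, stress1, stress2):
    """Power-law creep assumed: strain_rate proportional to stress**n,
    so n = ln(rate1/rate2) / ln(stress1/stress2)."""
    return math.log(rate1 / rate2) / math.log(stress1 / stress2)

# Hypothetical consolidation rates measured before and after halving the
# hydrostatic stress during one creep period of the multi-step test.
n = stress_exponent(rate1=2.0e-6, rate2=2.5e-7, stress1=10.0, stress2=5.0)
```

    Here halving the stress reduces the rate by a factor of 8, giving n = 3, a value typical of dislocation-creep-type constitutive models; the actual exponent for crushed salt must of course come from the measured data.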

  12. Multiscale benchmarking of drug delivery vectors.

    Science.gov (United States)

    Summers, Huw D; Ware, Matthew J; Majithia, Ravish; Meissner, Kenith E; Godin, Biana; Rees, Paul

    2016-10-01

    Cross-system comparisons of drug delivery vectors are essential to ensure optimal design. An in-vitro experimental protocol is presented that separates the role of the delivery vector from that of its cargo in determining the cell response, thus allowing quantitative comparison of different systems. The technique is validated through benchmarking of the dose-response of human fibroblast cells exposed to the cationic molecule, polyethylene imine (PEI); delivered as a free molecule and as a cargo on the surface of CdSe nanoparticles and Silica microparticles. The exposure metrics are converted to a delivered dose with the transport properties of the different scale systems characterized by a delivery time, τ. The benchmarking highlights an agglomeration of the free PEI molecules into micron sized clusters and identifies the metric determining cell death as the total number of PEI molecules presented to cells, determined by the delivery vector dose and the surface density of the cargo. Copyright © 2016 Elsevier Inc. All rights reserved.
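    The conversion of an exposure metric to a delivered dose via a delivery time τ might be sketched with first-order kinetics. This functional form is an assumption made here for illustration; the abstract does not state the paper's actual transport model, and all numbers are hypothetical:

```python
import math

def delivered_fraction(t, tau):
    """First-order delivery kinetics (assumed): fraction of administered
    cargo reaching the cells after exposure time t, given delivery time tau."""
    return 1.0 - math.exp(-t / tau)

def delivered_dose(administered, t, tau, molecules_per_carrier=1):
    """Total cargo molecules presented to cells: carrier dose times cargo
    surface density times the fraction delivered within the exposure time."""
    return administered * molecules_per_carrier * delivered_fraction(t, tau)
```

    Under this sketch, a free molecule (small τ, `molecules_per_carrier = 1`) and a particle vector (larger τ, many PEI molecules per particle) can be compared on the common axis the paper identifies: total PEI molecules presented to the cells.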

  13. Towards benchmarking an in-stream water quality model

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available A method of model evaluation is presented which utilises a comparison with a benchmark model. The proposed benchmarking concept is one that can be applied to many hydrological models but, in this instance, is implemented in the context of an in-stream water quality model. The benchmark model is defined in such a way that it is easily implemented within the framework of the test model, i.e. the approach relies on two applications of the same model code rather than the application of two separate model codes. This is illustrated using two case studies from the UK, the Rivers Aire and Ouse, with the objective of simulating a water quality classification, the general quality assessment (GQA), which is based on dissolved oxygen, biochemical oxygen demand and ammonium. Comparisons between the benchmark and test models are made based on the GQA, as well as a step-wise assessment against the components required in its derivation. The benchmarking process yields a great deal of important information about the performance of the test model and raises issues about a priori definition of the assessment criteria.

  14. Bench-marking beam-beam simulations using coherent quadrupole effects

    International Nuclear Information System (INIS)

    Krishnagopal, S.; Chin, Y.H.

    1992-06-01

    Computer simulations are used extensively in the study of the beam-beam interaction. The proliferation of such codes raises the important question of their reliability, and motivates the development of a dependable set of bench-marks. We argue that rather than detailed quantitative comparisons, the ability of different codes to predict the same qualitative physics should be used as a criterion for such bench-marks. We use the striking phenomenon of coherent quadrupole oscillations as one such bench-mark, and demonstrate that our codes do indeed observe this behaviour. We also suggest some other tests that could be used as bench-marks

  15. Bench-marking beam-beam simulations using coherent quadrupole effects

    International Nuclear Information System (INIS)

    Krishnagopal, S.; Chin, Y.H.

    1992-01-01

    Computer simulations are used extensively in the study of the beam-beam interaction. The proliferation of such codes raises the important question of their reliability, and motivates the development of a dependable set of bench-marks. We argue that rather than detailed quantitative comparisons, the ability of different codes to predict the same qualitative physics should be used as a criterion for such bench-marks. We use the striking phenomenon of coherent quadrupole oscillations as one such bench-mark, and demonstrate that our codes do indeed observe this behavior. We also suggest some other tests that could be used as bench-marks

  16. Benchmarking of the FENDL-3 Neutron Cross-section Data Starter Library for Fusion Applications

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, U., E-mail: ulrich.fischer@kit.edu [Association KIT-Euratom, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Angelone, M. [Associazione ENEA-Euratom, ENEA Fusion Division, Via E. Fermi 27, I-00044 Frascati (Italy); Bohm, T. [University of Wisconsin-Madison, 1500 Engineering Dr, Madison, WI 53706 (United States); Kondo, K. [Association KIT-Euratom, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Konno, C. [Japan Atomic Energy Agency, Tokai-mura, Naka-gun, Ibaraki-ken 319-1195 (Japan); Sawan, M. [University of Wisconsin-Madison, 1500 Engineering Dr, Madison, WI 53706 (United States); Villari, R. [Associazione ENEA-Euratom, ENEA Fusion Division, Via E. Fermi 27, I-00044 Frascati (Italy); Walker, B. [University of Wisconsin-Madison, 1500 Engineering Dr, Madison, WI 53706 (United States)

    2014-06-15

    This paper summarizes the benchmark analyses performed in a joint effort of ENEA (Italy), JAEA (Japan), KIT (Germany), and the University of Wisconsin (USA) on a computational ITER benchmark and a series of 14 MeV neutron benchmark experiments. The computational benchmark revealed a modest increase of the neutron flux levels in the deep penetration regions and a substantial increase of the gas production in steel components. The comparison to experimental results showed good agreement with no substantial differences between FENDL-3.0 and FENDL-2.1 for most of the responses. In general, FENDL-3 shows an improved performance for fusion neutronics applications.

  17. ACAMPROSATE AND BACLOFEN WERE NOT EFFECTIVE IN THE TREATMENT OF PATHOLOGICAL GAMBLING: PRELIMINARY BLIND RATER COMPARISON STUDY

    OpenAIRE

    Pinhas N Dannon

    2011-01-01

    Objectives: Pathological gambling (PG) is a highly prevalent and disabling impulse control disorder. A range of psychopharmacological options are available for the treatment of PG, including selective serotonin reuptake inhibitors (SSRI), opioid receptor antagonists, anti-addiction drugs and mood stabilizers. In our preliminary study, we examined the efficacy of two anti-addiction drugs, Baclofen and Acamprosate, in the treatment of PG. Materials & Methods: 17 male gamblers were randomly ...

  18. Digital assessment of preliminary impression accuracy for edentulous jaws: Comparisons of 3-dimensional surfaces between study and working casts.

    Science.gov (United States)

    Matsuda, Takashi; Goto, Takaharu; Kurahashi, Kosuke; Kashiwabara, Toshiya; Watanabe, Megumi; Tomotake, Yoritoki; Nagao, Kan; Ichikawa, Tetsuo

    2016-07-01

    The aim of this study was to compare 3-dimensional surfaces of study and working casts for edentulous jaws and to evaluate the accuracy of preliminary impressions with a view to the future application of digital dentistry for edentulous jaws. Forty edentulous volunteers were serially recruited. Nine dentists took preliminary and final impressions in a routine clinical work-up. The study and working casts were digitized using a dental 3-dimensional scanner. The two surface images were superimposed through a least-squares algorithm using imaging software and compared qualitatively. Furthermore, the surface of each jaw was divided into 6 sections, and the difference between the 2 images was quantitatively evaluated. Overall inspection showed that the difference around the residual ridges was small and that around the borders was large. The mean differences in the upper and lower jaws were 0.26 mm and 0.45 mm, respectively. The maximum values of the differences showed that the upward change occurred mainly in the anterior residual ridge, and the downward change mainly in the posterior border seal and the labial and buccal vestibules, whereas every border of the final impression was shortened in the lower jaw. The accuracy in all areas except the border, which forms the foundation, was estimated to be less than 0.25 mm. Using digital technology, we here showed the overall and sectional accuracy of the preliminary impression for edentulous jaws. In our clinic, preliminary impressions have been made using an alginate material while ensuring that the requisite impression area was covered. Copyright © 2016 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.
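    The superimpose-then-measure step might be sketched as follows. A crude centroid alignment stands in here for the full least-squares registration performed by the imaging software, and the point clouds (in mm) are hypothetical:

```python
import math

def centroid(points):
    """Mean (x, y, z) position of a point cloud."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def mean_deviation(cast_a, cast_b):
    """Translate both scans to a common centroid (a crude stand-in for full
    least-squares registration), then return the mean corresponding-point
    distance in mm. Assumes the two scans have matched point orderings."""
    ca, cb = centroid(cast_a), centroid(cast_b)
    dists = [math.dist(tuple(p[i] - ca[i] for i in range(3)),
                       tuple(q[i] - cb[i] for i in range(3)))
             for p, q in zip(cast_a, cast_b)]
    return sum(dists) / len(dists)
```

    A real cast comparison would additionally solve for rotation (e.g. an iterative closest point scheme) and use closest-point rather than index-matched correspondences; the sketch only conveys the alignment-then-average-deviation idea behind the 0.26 mm and 0.45 mm figures.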

  19. Defect assessment benchmark studies

    International Nuclear Information System (INIS)

    Hooton, D.G.; Sharples, J.K.

    1995-01-01

    Assessments of the resistance to fast fracture of the beltline region of a PWR vessel subjected to a pressurized thermal shock (PTS) transient have been carried out using the procedures of French (RCC-M) and German (KTA) design codes, and comparisons made with results obtained using the R6 procedure as applied for Sizewell B. The example chosen for these comparisons is of a generic nature, and is taken as the PTS identified by the Hirsch addendum to the Second Marshall report (1987) as the most severe transient with regard to vessel integrity. All assessment methods show the beltline region of the vessel to be safe from the risk of fast fracture, but by varying factors of safety. These factors are discussed in terms of margins between limiting and reference defect sizes, fracture toughness and stress intensity factor, and material temperature and temperature at the onset of upper-shelf materials behaviour. Based on these studies, consideration is given to issues involved in the harmonization of those sections of the design codes which are concerned with methods for the demonstration of the avoidance of the risk of failure by fast fracture. (author)

  20. Regional Competitive Intelligence: Benchmarking and Policymaking

    OpenAIRE

    Huggins , Robert

    2010-01-01

    Benchmarking exercises have become increasingly popular within the sphere of regional policymaking in recent years. The aim of this paper is to analyse the concept of regional benchmarking and its links with regional policymaking processes. It develops a typology of regional benchmarking exercises and regional benchmarkers, and critically reviews the literature, both academic and policy oriented. It is argued that critics who suggest regional benchmarking is a flawed concept and technique fai...

  1. Assessment of Usability Benchmarks: Combining Standardized Scales with Specific Questions

    Directory of Open Access Journals (Sweden)

    Stephanie Bettina Linek

    2011-12-01

    Full Text Available The usability of Web sites and online services is of rising importance. When creating a completely new Web site, qualitative data are adequate for identifying most usability problems. However, changes to an existing Web site should be evaluated by a quantitative benchmarking process. The proposed paper describes the creation of a questionnaire that allows quantitative usability benchmarking, i.e. a direct comparison of different versions of a Web site and an orientation towards general standards of usability. The questionnaire is also open for qualitative data. The methodology will be explained using the digital library services of the ZBW.

  2. Validation of NESTLE against static reactor benchmark problems

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    1996-01-01

    The NESTLE advanced nodal code was developed at North Carolina State University with support from Los Alamos National Laboratory and Idaho National Engineering Laboratory. It recently has been benchmarked successfully against measured data from pressurized water reactors (PWRs). However, NESTLE's geometric capabilities are very flexible, and it can be applied to a variety of other types of reactors. This study presents comparisons of NESTLE results with those from other codes for static benchmark problems for PWRs, boiling water reactors (BWRs), high-temperature gas-cooled reactors (HTGRs) and CANDU heavy-water reactors (HWRs)

  3. Burn-up TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    Persic, A.; Ravnik, M.; Zagar, T.

    1998-01-01

    Different reactor codes are used for calculations of reactor parameters. The accuracy of the programs is tested through comparison of the calculated values with the experimental results. Well-defined and accurately measured benchmarks are required. The experimental results of reactivity measurements, fuel element reactivity worth distribution and burn-up measurements are presented in this paper. The experiments were performed with a partly burnt reactor core. The experimental conditions were well defined, so that the results can be used as a burn-up benchmark test case for TRIGA Mark II reactor calculations. (author)

  4. Validation of NESTLE against static reactor benchmark problems

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    1996-01-01

    The NESTLE advanced nodal code was developed at North Carolina State University with support from Los Alamos National Laboratory and Idaho National Engineering Laboratory. It recently has been benchmarked successfully against measured data from pressurized water reactors (PWRs). However, NESTLE's geometric capabilities are very flexible, and it can be applied to a variety of other types of reactors. This study presents comparisons of NESTLE results with those from other codes for static benchmark problems for PWRs, boiling water reactors (BWRs), high-temperature gas-cooled reactors (HTGRs), and Canada deuterium uranium (CANDU) heavy-water reactors (HWRs)

  5. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-11-01

    Full Text Available This paper reviews the role of human resource management (HRM which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  6. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  7. The International Criticality Safety Benchmark Evaluation Project

    International Nuclear Information System (INIS)

    Briggs, B. J.; Dean, V. F.; Pesic, M. P.

    2001-01-01

    In order to properly manage the risk of a nuclear criticality accident, it is important to establish the conditions for which such an accident becomes possible for any activity involving fissile material. Only when this information is known is it possible to establish the likelihood of actually achieving such conditions. It is therefore important that criticality safety analysts have confidence in the accuracy of their calculations. Confidence in analytical results can only be gained through comparison of those results with experimental data. The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the US Department of Energy. The project was managed through the Idaho National Engineering and Environmental Laboratory (INEEL), but involved nationally known criticality safety experts from Los Alamos National Laboratory, Lawrence Livermore National Laboratory, Savannah River Technology Center, Oak Ridge National Laboratory and the Y-12 Plant, Hanford, Argonne National Laboratory, and the Rocky Flats Plant. An International Criticality Safety Data Exchange component was added to the project during 1994 and the project became what is currently known as the International Criticality Safety Benchmark Evaluation Project (ICSBEP). Representatives from the United Kingdom, France, Japan, the Russian Federation, Hungary, Kazakhstan, Korea, Slovenia, Yugoslavia, Spain, and Israel are now participating on the project. In December of 1994, the ICSBEP became an official activity of the Organization for Economic Cooperation and Development - Nuclear Energy Agency's (OECD-NEA) Nuclear Science Committee. The United States currently remains the lead country, providing most of the administrative support. The purpose of the ICSBEP is to: (1) identify and evaluate a comprehensive set of critical benchmark data; (2) verify the data, to the extent possible, by reviewing original and subsequently revised documentation, and by talking with the

  8. Towards benchmarking of multivariable controllers in chemical/biochemical industries: Plantwide control for ethylene glycol production

    DEFF Research Database (Denmark)

    Huusom, Jakob Kjøbsted; Bialas, Dawid Jan; Jørgensen, John Bagterp

    2011-01-01

    In this paper we discuss a simple yet realistic benchmark plant for evaluation and comparison of advanced multivariable control for chemical and biochemical processes. The benchmark plant is based on recycle-separator-recycle systems for ethylene glycol production and implemented in Matlab ... for education purposes (operator training, student education, etc.) as well as scientific research into chemical process control where it enables rapid evaluation and comparison of advanced multivariable controllers as demonstrated in this study...

  9. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Vol. 1: Third comparison with 40 CFR 191, Subpart B

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1992-12-15

    Before disposing of transuranic radioactive wastes in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments of the WIPP for the DOE to provide interim guidance while preparing for final compliance evaluations. This volume contains an overview of WIPP performance assessment and a preliminary comparison with the long-term requirements of the Environmental Radiation Protection Standards for Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Detailed information about the technical basis for the preliminary comparison is contained in Volume 2. The reference data base and values for input parameters used in the modeling system are contained in Volume 3. Uncertainty and sensitivity analyses related to 40 CFR 191B are contained in Volume 4. Volume 5 contains uncertainty and sensitivity analyses of gas and brine migration for undisturbed performance. Finally, guidance derived from the entire 1992 performance assessment is presented in Volume 6. Results of the 1992 performance assessment are preliminary, and are not suitable for final comparison with 40 CFR 191, Subpart B. Portions of the modeling system and the data base remain incomplete, and the level of confidence in the performance estimates is not sufficient for a defensible compliance evaluation. Results are, however, suitable for providing guidance to the WIPP Project. All results are conditional on the models and data used, and are presented for preliminary comparison to the Containment Requirements of 40 CFR 191, Subpart B as mean complementary cumulative distribution functions (CCDFs) displaying estimated probabilistic releases of radionuclides to the accessible environment. Results compare three conceptual models for

  10. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Vol. 1: Third comparison with 40 CFR 191, Subpart B

    International Nuclear Information System (INIS)

    1992-12-01

    Before disposing of transuranic radioactive wastes in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments of the WIPP for the DOE to provide interim guidance while preparing for final compliance evaluations. This volume contains an overview of WIPP performance assessment and a preliminary comparison with the long-term requirements of the Environmental Radiation Protection Standards for Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Detailed information about the technical basis for the preliminary comparison is contained in Volume 2. The reference data base and values for input parameters used in the modeling system are contained in Volume 3. Uncertainty and sensitivity analyses related to 40 CFR 191B are contained in Volume 4. Volume 5 contains uncertainty and sensitivity analyses of gas and brine migration for undisturbed performance. Finally, guidance derived from the entire 1992 performance assessment is presented in Volume 6. Results of the 1992 performance assessment are preliminary, and are not suitable for final comparison with 40 CFR 191, Subpart B. Portions of the modeling system and the data base remain incomplete, and the level of confidence in the performance estimates is not sufficient for a defensible compliance evaluation. Results are, however, suitable for providing guidance to the WIPP Project. All results are conditional on the models and data used, and are presented for preliminary comparison to the Containment Requirements of 40 CFR 191, Subpart B as mean complementary cumulative distribution functions (CCDFs) displaying estimated probabilistic releases of radionuclides to the accessible environment. Results compare three conceptual models for
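    As an illustration of how such mean CCDF curves are constructed, here is a minimal sketch of estimating exceedance probabilities from sampled futures. The function name and sample values are invented for illustration; this is not the WIPP modeling system or its data.

```python
import numpy as np

def empirical_ccdf(releases, thresholds):
    """Empirical complementary CDF: P(release > R) for each threshold R,
    estimated from a set of sampled futures (e.g. Monte Carlo realisations)."""
    r = np.asarray(releases, dtype=float)
    t = np.asarray(thresholds, dtype=float)
    # fraction of sampled futures whose normalized release exceeds each threshold
    return (r[:, None] > t[None, :]).mean(axis=0)

# four hypothetical normalized releases; CCDF at thresholds 0.0 and 1.0
probs = empirical_ccdf([0.1, 0.5, 1.2, 3.0], thresholds=[0.0, 1.0])
```

    In an actual assessment, each sampled future would itself be the output of the full consequence-modeling chain, and the mean CCDF is taken over the family of such curves.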

  11. Thermal reactor benchmark tests on JENDL-2

    International Nuclear Information System (INIS)

    Takano, Hideki; Tsuchihashi, Keichiro; Yamane, Tsuyoshi; Akino, Fujiyoshi; Ishiguro, Yukio; Ido, Masaru.

    1983-11-01

    A group constant library for the thermal reactor standard nuclear design code system SRAC was produced by using the evaluated nuclear data JENDL-2. Furthermore, the group constants for 235U were also calculated from ENDF/B-V. Thermal reactor benchmark calculations were performed using the produced group constant library. The selected benchmark cores are two water-moderated lattices (TRX-1 and 2), two heavy water-moderated cores (DCA and ETA-1), two graphite-moderated cores (SHE-8 and 13), and eight critical experiments for criticality safety. The effective multiplication factors and lattice cell parameters were calculated and compared with the experimental values. The results are summarized as follows. (1) Effective multiplication factors: The results by JENDL-2 are considerably improved in comparison with those by ENDF/B-IV. The best agreement is obtained by using JENDL-2 and ENDF/B-V (only 235U) data. (2) Lattice cell parameters: For rho-28 (the ratio of epithermal to thermal 238U captures) and C* (the ratio of 238U captures to 235U fissions), the values calculated by JENDL-2 are in good agreement with the experimental values. Delta-28 (the ratio of 238U to 235U fissions) is overestimated, as was also found for the fast reactor benchmarks. The rho-02 values (the ratio of epithermal to thermal 232Th captures) calculated by JENDL-2 or ENDF/B-IV are considerably underestimated. The functions of the SRAC system have continued to be extended according to the needs of its users. A brief description of the extended parts of the SRAC system, together with the input specification, is given in Appendix B. (author)

  12. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessment of the operational performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine the benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations, with a preference for scenarios which include experimental data or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty, to include gamma transport, neutron transport, or both, and to represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations was assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  13. 3-D neutron transport benchmarks

    International Nuclear Information System (INIS)

    Takeda, T.; Ikeda, H.

    1991-03-01

    A set of 3-D neutron transport benchmark problems proposed by Osaka University to NEACRP in 1988 has been calculated by many participants, and the corresponding results are summarized in this report. The results for k-eff, control rod worth, and region-averaged fluxes for the four proposed core models, calculated using various 3-D transport codes, are compared and discussed. The calculational methods used were: Monte Carlo, Discrete Ordinates (Sn), Spherical Harmonics (Pn), Nodal Transport, and others. The solutions of the four core models are quite useful as benchmarks for checking the validity of 3-D neutron transport codes.
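    Code-to-code comparisons of this kind are commonly reported as the relative deviation of each participant's multiplication factor from the reference solution, expressed in pcm. A minimal sketch with hypothetical k-eff values (not results from this benchmark):

```python
def pcm_deviation(k_calc, k_ref):
    """Relative deviation of a calculated multiplication factor from a
    reference solution, in pcm (1 pcm = 1e-5 of relative reactivity)."""
    return 1e5 * (k_calc - k_ref) / k_ref

# hypothetical participant result vs. a reference Monte Carlo k-eff
dev = pcm_deviation(1.00230, 1.00150)  # roughly +80 pcm
```

    The same relative measure is typically used for control rod worths, while region-averaged fluxes are usually compared as percentage differences.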

  14. Strategic behaviour under regulatory benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Jamasb, T. [Cambridge Univ. (United Kingdom). Dept. of Applied Economics; Nillesen, P. [NUON NV (Netherlands); Pollitt, M. [Cambridge Univ. (United Kingdom). Judge Inst. of Management

    2004-09-01

    In order to improve the efficiency of electricity distribution networks, some regulators have adopted incentive regulation schemes that rely on performance benchmarking. Although regulation benchmarking can influence the “regulation game,” the subject has received limited attention. This paper discusses how strategic behaviour can result in inefficient behaviour by firms. We then use the Data Envelopment Analysis (DEA) method with US utility data to examine the implications of illustrative cases of strategic behaviour reported by regulators. The results show that gaming can have significant effects on the measured performance and profitability of firms. (author)
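    A DEA efficiency score of the kind used in such studies can be computed with a generic linear-programming solver. The sketch below implements the standard input-oriented CCR envelopment form (min θ s.t. Xλ ≤ θx_o, Yλ ≥ y_o, λ ≥ 0); the data are invented, and this is not the paper's utility data set or exact model specification.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y):
    """Input-oriented CCR efficiency for each decision-making unit (DMU).
    X: (n_dmu, n_inputs) input matrix; Y: (n_dmu, n_outputs) output matrix."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n = X.shape[0]
    scores = []
    for o in range(n):
        # decision variables: [theta, lambda_1 .. lambda_n]
        c = np.r_[1.0, np.zeros(n)]                      # minimize theta
        # input constraints:  X^T @ lambda - theta * x_o <= 0
        A_in = np.hstack([-X[o][:, None], X.T])
        # output constraints: -Y^T @ lambda <= -y_o  (i.e. Y^T lambda >= y_o)
        A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.fun)
    return np.array(scores)

# two hypothetical firms, one input and one output each:
# firm B uses twice the input for the same output, so it scores 0.5
scores = dea_ccr_efficiency(X=[[1.0], [2.0]], Y=[[1.0], [1.0]])
```

    Strategic behaviour enters such analyses through the reported inputs and outputs: shifting costs between reporting periods changes X and hence the measured frontier.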

  15. Recommendations for Benchmarking Preclinical Studies of Nanomedicines.

    Science.gov (United States)

    Dawidczyk, Charlene M; Russell, Luisa M; Searson, Peter C

    2015-10-01

    Nanoparticle-based delivery systems provide new opportunities to overcome the limitations associated with traditional small-molecule drug therapy for cancer and to achieve both therapeutic and diagnostic functions in the same platform. Preclinical trials are generally designed to assess therapeutic potential and not to optimize the design of the delivery platform. Consequently, progress in developing design rules for cancer nanomedicines has been slow, hindering progress in the field. Despite the large number of preclinical trials, several factors restrict comparison and benchmarking of different platforms, including variability in experimental design, reporting of results, and the lack of quantitative data. To solve this problem, we review the variables involved in the design of preclinical trials and propose a protocol for benchmarking that we recommend be included in in vivo preclinical studies of drug-delivery platforms for cancer therapy. This strategy will contribute to building the scientific knowledge base that enables development of design rules and accelerates the translation of new technologies. ©2015 American Association for Cancer Research.

  16. Results of LWR core transient benchmarks

    International Nuclear Information System (INIS)

    Finnemann, H.; Bauer, H.; Galati, A.; Martinelli, R.

    1993-10-01

    LWR core transient (LWRCT) benchmarks, based on well defined problems with a complete set of input data, are used to assess the discrepancies between three-dimensional space-time kinetics codes in transient calculations. The PWR problem chosen is the ejection of a control assembly from an initially critical core at hot zero power or at full power, each for three different geometrical configurations. The set of problems offers a variety of reactivity excursions which efficiently test the coupled neutronic/thermal - hydraulic models of the codes. The 63 sets of submitted solutions are analyzed by comparison with a nodal reference solution defined by using a finer spatial and temporal resolution than in standard calculations. The BWR problems considered are reactivity excursions caused by cold water injection and pressurization events. In the present paper, only the cold water injection event is discussed and evaluated in some detail. Lacking a reference solution the evaluation of the 8 sets of BWR contributions relies on a synthetic comparative discussion. The results of this first phase of LWRCT benchmark calculations are quite satisfactory, though there remain some unresolved issues. It is therefore concluded that even more challenging problems can be successfully tackled in a suggested second test phase. (authors). 46 figs., 21 tabs., 3 refs

  17. Comparison of force, power, and striking efficiency for a Kung Fu strike performed by novice and experienced practitioners: preliminary analysis.

    Science.gov (United States)

    Neto, Osmar Pinto; Magini, Marcio; Saba, Marcelo M F; Pacheco, Marcos Tadeu Tavares

    2008-02-01

    This paper presents a comparison of force, power, and efficiency values calculated from Kung Fu Yau-Man palm strikes performed by 7 experienced and 6 novice men. They performed 5 palm strikes to a freestanding basketball, recorded by a high-speed camera at 1000 Hz. Nonparametric comparisons and correlations showed that experienced practitioners presented larger values of mean muscle force, mean impact force, mean muscle power, mean impact power, and mean striking efficiency, in line with evidence obtained for other martial arts. An interesting additional result was that for experienced Kung Fu practitioners muscle power was linearly correlated with impact power (p = .98), but not for the novice practitioners (p = .46).
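    The nonparametric correlation analysis described here can be reproduced with standard tools. A sketch using Spearman's rank correlation on invented paired values (not the study's measurements):

```python
from scipy.stats import spearmanr

# hypothetical per-practitioner means for one group (illustrative values only)
muscle_power = [300, 350, 400, 450, 500, 520, 560]  # watts
impact_power = [210, 240, 290, 330, 360, 380, 410]  # watts

# rank-based (nonparametric) measure of monotone association
rho, p = spearmanr(muscle_power, impact_power)
```

    With small samples like these (n = 6-7 per group), rank-based statistics are the natural choice, since normality of the underlying force and power distributions cannot be assumed.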

  18. Prismatic Core Coupled Transient Benchmark

    International Nuclear Information System (INIS)

    Ortensi, J.; Pope, M.A.; Strydom, G.; Sen, R.S.; DeHart, M.D.; Gougar, H.D.; Ellis, C.; Baxter, A.; Seker, V.; Downar, T.J.; Vierow, K.; Ivanov, K.

    2011-01-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  19. Preliminary Structural Design Using Topology Optimization with a Comparison of Results from Gradient and Genetic Algorithm Methods

    Science.gov (United States)

    Burt, Adam O.; Tinker, Michael L.

    2014-01-01

    In this paper, genetic algorithm based and gradient-based topology optimization is presented in application to a real hardware design problem. Preliminary design of a planetary lander mockup structure is accomplished using these methods that prove to provide major weight savings by addressing the structural efficiency during the design cycle. This paper presents two alternative formulations of the topology optimization problem. The first is the widely-used gradient-based implementation using commercially available algorithms. The second is formulated using genetic algorithms and internally developed capabilities. These two approaches are applied to a practical design problem for hardware that has been built, tested and proven to be functional. Both formulations converged on similar solutions and therefore were proven to be equally valid implementations of the process. This paper discusses both of these formulations at a high level.

  20. Comparison between digital subtraction angiography and magnetic resonance angiography in investigation of nonlacunar ischemic stroke in young patients: preliminary results.

    Science.gov (United States)

    Conforto, Adriana Bastos; Fregni, Felipe; Puglia, Paulo; Leite, Claudia da Costa; Yamamoto, Fabio Iuji; Coracini, Karen F; Scaff, Milberto

    2006-06-01

    We preliminarily investigated the relevance of performing digital subtraction angiography (DSA) in addition to magnetic resonance angiography (MRA) in defining ischemic stroke etiology in young patients. DSAs and MRAs from 17 young patients with nonlacunar ischemic stroke were analyzed blindly and their impact on stroke management was evaluated. Etiologies were the same considering results of either DSA or MRA in 12/17 cases. In 15/17 patients no changes would have been made in treatment, regardless of the angiography modality considered. These preliminary results suggest that DSA may be redundant in two thirds of ischemic strokes in young patients. Further larger prospective studies are necessary to determine indications for DSA in this age group.

  1. A preliminary comparison of F region plasma drifts and E region irregularity drifts in the auroral zone

    International Nuclear Information System (INIS)

    Ecklund, W.L.; Balsley, B.B.; Carter, D.A.

    1977-01-01

    During several days in April-May 1976 the Chatanika, Alaska, incoherent scatter radar and a temporary Doppler auroral radar located at Aniak, Alaska, were directed toward ionospheric volumes along a common magnetic field line in order to compare E region and F region drifts and associated electric fields. The Chatanika radar measured F region plasma drifts via the incoherent scatter technique, while the Aniak radar measured the drifts of E region irregularities (i.e., the radar aurora). The radar geometry was arranged so that both radars measured approximately the same velocity component of a magnetically westward or eastward motion. Preliminary data show good agreement between the drift velocity components measured by the two techniques during most of the experimental period. This result indicates that relatively modest auroral radar systems may be used, with some qualifications, to determine auroral electric fields.

  2. Comparison of patient-reported outcomes between immediately and conventionally loaded mandibular two-implant overdentures: A preliminary study.

    Science.gov (United States)

    Omura, Yuri; Kanazawa, Manabu; Sato, Daisuke; Kasugai, Shohei; Minakuchi, Shunsuke

    2016-07-01

    The aim of this preliminary study is to compare patient-reported outcomes between immediately and conventionally loaded mandibular two-implant overdentures retained by magnetic attachments. Nineteen participants with edentulous mandibles were randomly assigned into either an immediate loading group (immediate group) or a conventional loading group (conventional group). Each participant received 2 implants in the inter-foraminal region by means of flapless surgery. Prostheses in the immediate and conventional groups were loaded using magnetic attachments on the same day as implant placement or 3 months after surgery, respectively. All participants completed questionnaires (the Japanese version of the Oral Health Impact Profile for edentulous [OHIP-EDENT-J], the patient's denture assessment [PDA], and general satisfaction) before implant placement (baseline) and 1, 2, 3, 4, 5, 6, and 12 months after surgery. The median differences between baseline and each monthly score were compared using the Mann-Whitney U test. The differences in median and 95% confidence interval between the two groups were analyzed. The immediate group showed a slightly lower OHIP-EDENT-J summary score at 1 and 3 months than the conventional group (P=0.09). In the lower denture domain of the PDA, the immediate group showed a statistically higher score at 3 months (P=0.04). There was no statistically significant difference in general satisfaction between the two groups. Based on this preliminary study, immediate loading of mandibular two-implant overdentures with magnetic attachments tends to improve oral health-related quality of life and patient assessment earlier than observed with a conventional loading protocol. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
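    The Mann-Whitney U comparison of baseline-to-follow-up score changes between the two groups can be sketched as follows. The change scores below are invented for illustration and are not the trial's data.

```python
from scipy.stats import mannwhitneyu

# hypothetical 3-month changes in OHIP-EDENT-J summary score per participant;
# more negative = larger improvement in oral health-related quality of life
immediate    = [-20, -19, -18, -17, -16, -15, -14, -13, -12]
conventional = [-10, -9, -8, -7, -6, -5, -4, -3, -2, -1]

# rank-based two-sample comparison of the change-score distributions
u_stat, p_value = mannwhitneyu(immediate, conventional, alternative="two-sided")
```

    With the small group sizes of such a pilot trial, the rank-based test avoids any normality assumption on the questionnaire change scores.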

  3. Benchmarking the Collocation Stand-Alone Library and Toolkit (CSALT)

    Science.gov (United States)

    Hughes, Steven; Knittel, Jeremy; Shoan, Wendy; Kim, Youngkwang; Conway, Claire; Conway, Darrel J.

    2017-01-01

    This paper describes the processes and results of Verification and Validation (V&V) efforts for the Collocation Stand Alone Library and Toolkit (CSALT). We describe the test program and environments, the tools used for independent test data, and comparison results. The V&V effort employs classical problems with known analytic solutions, solutions from other available software tools, and comparisons to benchmarking data available in the public literature. Presenting all test results is beyond the scope of a single paper. Here we present high-level test results for a broad range of problems, and detailed comparisons for selected problems.

  4. Validation of Refractivity Profiles Retrieved from FORMOSAT-3/COSMIC Radio Occultation Soundings: Preliminary Results of Statistical Comparisons Utilizing Balloon-Borne Observations

    Directory of Open Access Journals (Sweden)

    Hiroo Hayashi

    2009-01-01

    The GPS radio occultation (RO) soundings by the FORMOSAT-3/COSMIC (Taiwan's Formosa Satellite Mission #3/Constellation Observing System for Meteorology, Ionosphere and Climate) satellites launched in mid-April 2006 are compared with high-resolution balloon-borne (radiosonde and ozonesonde) observations. This paper presents preliminary results of validation of the COSMIC RO measurements in terms of refractivity through the troposphere and lower stratosphere. With the use of COSMIC RO soundings within 2 hours and 300 km of sonde profiles, statistical comparisons between the collocated refractivity profiles are performed for some tropical regions (Malaysia and Western Pacific islands), where moisture-rich air is expected in the lower troposphere, and for both northern and southern polar areas with a very dry troposphere. The results of the comparisons show good agreement between COSMIC RO and sonde refractivity profiles throughout the troposphere (1 - 1.5% difference at most), with a positive bias generally becoming larger at progressively higher altitudes in the lower stratosphere (1 - 2% difference around 25 km), and a very small standard deviation (about 0.5% or less) for a few kilometers below the tropopause level. A large standard deviation of fractional differences in the lowermost troposphere, which reaches up to as much as 3.5 - 5% at 3 km, is seen in the tropics, while a much smaller standard deviation (1 - 2% at most) is evident throughout the polar troposphere.
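    The collocation statistics reported here (per-level mean bias and standard deviation of fractional refractivity differences) can be sketched as below. The function name and the profile arrays are illustrative, assuming RO and sonde profiles already interpolated to a common height grid.

```python
import numpy as np

def fractional_difference_profile(ro, sonde):
    """Per-level statistics of fractional refractivity differences, in percent.
    ro, sonde: arrays of shape (n_pairs, n_levels), collocated profiles
    interpolated to a common height grid.
    Returns (mean bias per level, sample standard deviation per level)."""
    ro = np.asarray(ro, dtype=float)
    sonde = np.asarray(sonde, dtype=float)
    frac = 100.0 * (ro - sonde) / sonde          # (RO - sonde)/sonde in %
    return frac.mean(axis=0), frac.std(axis=0, ddof=1)

# two toy collocated pairs on a two-level grid
bias, spread = fractional_difference_profile(
    [[101.0, 202.0], [99.0, 198.0]],
    [[100.0, 200.0], [100.0, 200.0]])
```

    Plotting `bias` and `spread` against height reproduces the kind of validation profile discussed in the abstract.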

  5. Benchmarking a geostatistical procedure for the homogenisation of annual precipitation series

    Science.gov (United States)

    Caineta, Júlio; Ribeiro, Sara; Henriques, Roberto; Soares, Amílcar; Costa, Ana Cristina

    2014-05-01

    The European project COST Action ES0601, Advances in homogenisation methods of climate series: an integrated approach (HOME), has brought to attention the importance of establishing reliable homogenisation methods for climate data. In order to achieve that, a benchmark data set, containing monthly and daily temperature and precipitation data, was created to be used as a comparison basis for the effectiveness of those methods. Several contributions were submitted and evaluated by a number of performance metrics, validating the results against realistic inhomogeneous data. HOME also led to the development of new homogenisation software packages, which included feedback and lessons learned during the project. Preliminary studies have suggested a geostatistical stochastic approach, which uses Direct Sequential Simulation (DSS), as a promising methodology for the homogenisation of precipitation data series. Based on the spatial and temporal correlation between the neighbouring stations, DSS calculates local probability density functions at a candidate station to detect inhomogeneities. The purpose of the current study is to test and compare this geostatistical approach with the methods previously presented in the HOME project, using surrogate precipitation series from the HOME benchmark data set. The benchmark data set contains monthly precipitation surrogate series, from which annual precipitation data series were derived. These annual precipitation series were subject to exploratory analysis and to a thorough variography study. The geostatistical approach was then applied to the data set, based on different scenarios for the spatial continuity. Implementing this procedure also promoted the development of a computer program that aims to assist in the homogenisation of climate data, while minimising user interaction. Finally, in order to compare the effectiveness of this methodology with the homogenisation methods submitted during the HOME project, the obtained results
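    The detection idea — comparing a candidate station's annual value against a locally estimated distribution — can be illustrated with a deliberately simplified stand-in. The sketch below substitutes empirical quantiles of the neighbouring stations' values for the DSS-simulated local probability density function; it is not the authors' geostatistical method, and all names and thresholds are illustrative.

```python
import numpy as np

def flag_inhomogeneities(candidate, neighbours, lo=0.05, hi=0.95):
    """candidate: (n_years,) annual precipitation at the tested station.
    neighbours: (n_years, n_stations) values at reference stations.
    Flags years where the candidate value falls outside the [lo, hi]
    quantile band of the neighbouring stations (a crude proxy for the
    DSS-estimated local pdf)."""
    nb = np.asarray(neighbours, dtype=float)
    q_lo = np.quantile(nb, lo, axis=1)   # lower band edge per year
    q_hi = np.quantile(nb, hi, axis=1)   # upper band edge per year
    c = np.asarray(candidate, dtype=float)
    return (c < q_lo) | (c > q_hi)

# year 1 agrees with its neighbours; year 2 is a gross outlier
mask = flag_inhomogeneities(
    candidate=[100.0, 500.0],
    neighbours=[[90.0, 100.0, 110.0, 105.0],
                [95.0, 102.0, 99.0, 101.0]])
```

    The real DSS procedure additionally exploits the spatial covariance model fitted in the variography step, rather than treating neighbours as exchangeable.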

  6. OECD/DOE/CEA VVER-1000 coolant transient (V1000CT) benchmark - a consistent approach for assessing coupled codes for RIA analysis

    International Nuclear Information System (INIS)

    Boyan D Ivanov; Kostadin N Ivanov; Eric Royer; Sylvie Aniel; Nikola Kolev; Pavlin Groudev

    2005-01-01

    assumptions to enhance the code-to-code comparisons. The paper presents an overview of the benchmark activities and describes the different exercises within the framework of the two benchmark phases. Selected comparative analysis of the participants' submitted results for the three exercises of Phase 1 is presented, with emphasis on the observed modeling issues and deviations from the measured data. Preliminary results for Exercise 1 of Phase 2, obtained with CFD codes, will also be presented. (authors)

  7. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Wegner, P.; Wettig, T.

    2003-09-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC. (orig.)

  8. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Stueben, H.; Wegner, P.; Wettig, T.; Wittig, H.

    2004-01-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC.

  9. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

    Tourism development has an irreplaceable role in the regional policy of almost all countries, due to its undeniable benefits for the local population in the economic, social and environmental spheres. Tourist destinations compete for visitors in the tourism market and consequently enter a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and final strategies is a key factor of competitiveness. Even though the tourism sector is not a typical field where benchmarking methods are widely used, such approaches can be successfully applied. The paper focuses on a key phase of the benchmarking process, which lies in the search for suitable referencing partners. The partners are selected to meet general requirements that ensure the quality of strategies. Following from this, some specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies of regions in the Czech Republic, Slovakia and Great Britain. In this way it validates the selected criteria in the frame of the international environment. Hence, it makes it possible to find strengths and weaknesses of the selected strategies and at the same time facilitates the discovery of suitable benchmarking partners.

  10. BONFIRE: benchmarking computers and computer networks

    OpenAIRE

    Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker

    2011-01-01

    The benchmarking concept is not new in the field of computing or computer networking. With “benchmarking tools”, one usually refers to a program or set of programs, used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Servi...

  11. ACAMPROSATE AND BACLOFEN WERE NOT EFFECTIVE IN THE TREATMENT OF PATHOLOGICAL GAMBLING: PRELIMINARY BLIND RATER COMPARISON STUDY

    Directory of Open Access Journals (Sweden)

    Pinhas N Dannon

    2011-06-01

    Objectives: Pathological gambling (PG) is a highly prevalent and disabling impulse control disorder. A range of psychopharmacological options is available for the treatment of PG, including selective serotonin reuptake inhibitors (SSRIs), opioid receptor antagonists, anti-addiction drugs and mood stabilizers. In our preliminary study, we examined the efficacy of two anti-addiction drugs, Baclofen and Acamprosate, in the treatment of PG. Materials & Methods: 17 male gamblers were randomly divided into two groups. Each group received one of the two drugs without being blind to treatment. All patients underwent a comprehensive psychiatric diagnostic evaluation and completed a series of semi-structured interviews. During the six months of the study, monthly evaluations were carried out to assess improvement and relapses. Relapse was defined as recurrent gambling behavior. Results: None of the 17 patients reached six months of abstinence. One patient receiving Baclofen sustained abstinence for 4 months. 14 patients succeeded in sustaining abstinence for 1-3 months. 2 patients stopped attending monthly evaluations. Conclusion: Baclofen and Acamprosate did not prove effective in treating pathological gamblers.

  12. Categorical and dimensional psychopathology in Dutch and US offspring of parents with bipolar disorder: A preliminary cross-national comparison.

    Science.gov (United States)

    Mesman, Esther; Birmaher, Boris B; Goldstein, Benjamin I; Goldstein, Tina; Derks, Eske M; Vleeschouwer, Marloes; Hickey, Mary Beth; Axelson, David; Monk, Kelly; Diler, Rasim; Hafeman, Danella; Sakolsky, Dara J; Reichart, Catrien G; Wals, Marjolein; Verhulst, Frank C; Nolen, Willem A; Hillegers, Manon H J

    2016-11-15

    Accumulating evidence suggests cross-national differences in adults with bipolar disorder (BD), but also in the susceptibility of their offspring (bipolar offspring). This study aims to explore and clarify cross-national variation in the prevalence of categorical and dimensional psychopathology between bipolar offspring in the US and The Netherlands. We compared levels of psychopathology in offspring of the Pittsburgh Bipolar Offspring Study (n=224) and the Dutch Bipolar Offspring Study (n=136) (age 10-18). Categorical psychopathology was ascertained through interviews using the Schedule for Affective Disorders and Schizophrenia for School Age Children (K-SADS-PL), dimensional psychopathology by parental reports using the Child Behavior Checklist (CBCL). Higher rates of categorical psychopathology were observed in the US versus the Dutch samples (66% versus 44%). We found no differences in the overall prevalence of mood disorders, including BD-I or -II, but more comorbidity in mood disorders in US versus Dutch offspring (80% versus 34%). The strongest predictors of categorical psychopathology included maternal BD (OR: 1.72). No differences were found in dimensional psychopathology based on CBCL reports; inter-site reliability was assessed with a preliminary measure. We found cross-national differences in prevalence of categorical diagnoses of non-mood disorders in bipolar offspring, but not in mood disorder diagnoses nor in parent-reported dimensional psychopathology. Cross-national variation was only partially explained by between-sample differences. Cultural and methodological explanations for these findings warrant further study. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Transoral robotic surgery for the base of tongue squamous cell carcinoma: a preliminary comparison between da Vinci Xi and Si.

    Science.gov (United States)

    Alessandrini, Marco; Pavone, Isabella; Micarelli, Alessandro; Caporale, Claudio

    2017-09-13

    Considering the emerging advantages of the da Vinci Xi robotic platform, the aim of this study is to compare for the first time the operative outcomes of this tool with those of the earlier da Vinci Si during transoral robotic surgery (TORS), both performed for squamous cell carcinomas (SCC) of the base of tongue (BOT). Intra- and peri-operative outcomes of eight patients with early-stage (T1-T2) BOT carcinoma undergoing TORS by means of the da Vinci Xi robotic platform (Xi-TORS) are compared with those of the da Vinci Si group (Si-TORS). With respect to the Si-TORS group, the Xi-TORS group demonstrated significantly shorter overall operative time and console time, lower intraoperative blood loss and peri-operative pain intensity, and shorter mean hospital stay and duration of nasogastric tube placement. Considering the recent advantages offered by surgical robotic techniques, these preliminary outcomes of the da Vinci Xi Surgical System suggest its possible future routine implementation in BOT squamous cell carcinoma procedures.

  14. Acamprosate and Baclofen were Not Effective in the Treatment of Pathological Gambling: Preliminary Blind Rater Comparison Study.

    Science.gov (United States)

    Dannon, Pinhas N; Rosenberg, Oded; Schoenfeld, Netta; Kotler, Moshe

    2011-01-01

    Pathological gambling (PG) is a highly prevalent and disabling impulse control disorder. A range of psychopharmacological options are available for the treatment of PG, including selective serotonin reuptake inhibitors, opioid receptor antagonists, anti-addiction drugs, and mood stabilizers. In our preliminary study, we examined the efficacy of two anti-addiction drugs, baclofen and acamprosate, in the treatment of PG. Seventeen male gamblers were randomly divided into two groups. Each group received one of the two drugs without being blind to treatment. All patients underwent a comprehensive psychiatric diagnostic evaluation and completed a series of semi-structured interviews. During the 6 months of the study, monthly evaluations were carried out to assess improvement and relapses. Relapse was defined as recurrent gambling behavior. None of the 17 patients reached 6 months of abstinence. One patient receiving baclofen sustained abstinence for 4 months. Fourteen patients succeeded in sustaining abstinence for 1-3 months. Two patients stopped attending monthly evaluations. Baclofen and acamprosate did not prove effective in treating pathological gamblers.

  15. Biomedical instruments versus toys: a preliminary comparison of force platforms and the Nintendo Wii Balance Board - Biomed 2011.

    Science.gov (United States)

    Pagnacco, Guido; Oggero, Elena; Wright, Cameron H G

    2011-01-01

    Biomedical sciences rely heavily on devices to acquire and analyze the physiological data needed to understand and model the biological processes of humans and animals. Therefore, the results of the investigations, clinical or academic, depend heavily on the instrumentation used. Unfortunately, all too often the users do not understand their instruments and end up compromising the results of their investigations by choosing an inadequate instrument or by not using it appropriately. One field where this is particularly apparent is posturography: the misconceptions about instruments are so widespread and deep that just recently there have been articles published in scientific journals suggesting the use of a “toy”, the Nintendo Wii Balance Board, instead of an instrument-grade force platform to acquire posturographic data. Characterizing the tools used for research becomes the first and probably the most important step in producing sound research and clinical results, and in the case of posturographic force platforms and the Nintendo Wii Balance Board a simple experimental setup can be used to find their characteristics. Furthermore, based on the preliminary results of this investigation, a mathematical formula can be used to predict the behavior of a posturographic tool, once its noise characteristics and “dead weight” response are known.

  16. How Benchmarking and Higher Education Came Together

    Science.gov (United States)

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  17. WWER-1000 Burnup Credit Benchmark (CB5)

    International Nuclear Information System (INIS)

    Manolova, M.A.

    2002-01-01

    In this paper the specification of the first phase (depletion calculations) of the WWER-1000 Burnup Credit Benchmark is given. The second phase, criticality calculations for the WWER-1000 fuel pin cell, will be given after evaluation of the results obtained in the first phase. The proposed benchmark is a continuation of the WWER benchmark activities in this field (Author)

  18. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking...

  19. A preliminary comparison of Na lidar and meteor radar zonal winds during geomagnetic quiet and disturbed conditions

    Science.gov (United States)

    Kishore Kumar, G.; Nesse Tyssøy, H.; Williams, Bifford P.

    2018-03-01

    We investigate the possibility that sufficiently large electric fields and/or ionization during geomagnetically disturbed conditions may invalidate the assumptions applied in the retrieval of neutral horizontal winds from meteor radar and/or lidar measurements. To our knowledge, the possible errors in the wind estimation have never been reported. In the present case study, we use co-located meteor radar and sodium resonance lidar zonal wind measurements over Andenes (69.27°N, 16.04°E) during intense substorms in the declining phase of the January 2005 solar proton event (21-22 January 2005). In total, 14 h of measurements are available for the comparison, covering both quiet and disturbed conditions. For the comparison, the lidar zonal wind measurements are averaged over the same time and altitude intervals as the meteor radar wind measurements. High cross correlations (∼0.8) are found in all height regions. The discrepancies can be explained in light of differences in the observational volumes of the two instruments. Further, we extended the comparison to address the impact of the electric field and/or ionization on the neutral wind estimation. For periods of low ionization, the neutral winds estimated with both instruments are quite consistent with each other. During periods of elevated ionization, comparatively large differences are noticed at the uppermost altitude, which might be due to the electric field and/or ionization impact on the wind estimation. At present, one event is not sufficient to draw any firm conclusion. Further study with more co-located measurements is needed to test the statistical significance of the result.
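    The core of the comparison described above, averaging the higher-resolution lidar winds onto the radar time grid and then correlating the two series, can be sketched as follows. The synthetic wind profile, noise levels and grid sizes are invented for illustration and are not the study's data:

```python
import numpy as np

# Sketch of a co-located wind comparison: average minute-resolution "lidar"
# winds onto an hourly "radar" grid, then compute the zero-lag correlation.

def block_average(x, factor):
    """Average consecutive blocks of `factor` samples (lidar -> radar grid)."""
    n = (len(x) // factor) * factor
    return x[:n].reshape(-1, factor).mean(axis=1)

rng = np.random.default_rng(0)
t = np.arange(0, 14 * 60)                        # 14 h of minute-resolution samples
lidar = 30 * np.sin(2 * np.pi * t / (12 * 60))   # tide-like zonal wind (m/s)
lidar += rng.normal(0, 5, t.size)                # lidar instrument noise

# Hourly "radar" winds: same underlying wind plus independent radar noise.
radar = block_average(lidar, 60) + rng.normal(0, 3, 14)

lidar_hourly = block_average(lidar, 60)          # lidar averaged to the radar grid
r = np.corrcoef(lidar_hourly, radar)[0, 1]       # cross correlation at zero lag
print(f"correlation: {r:.2f}")
```

With consistent instruments the correlation stays high; differences in sampling volume or local ionospheric effects would show up as a reduced correlation at particular heights.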

  20. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management commitment to marketing; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  1. The development of code benchmarks

    International Nuclear Information System (INIS)

    Glass, R.E.

    1986-01-01

    Sandia National Laboratories has undertaken a code benchmarking effort to define a series of cask-like problems having both numerical solutions and experimental data. The development of the benchmarks includes: (1) model problem definition, (2) code intercomparison, and (3) experimental verification. The first two steps are complete and a series of experiments are planned. The experiments will examine the elastic/plastic behavior of cylinders for both the end and side impacts resulting from a nine meter drop. The cylinders will be made from stainless steel and aluminum to give a range of plastic deformations. This paper presents the results of analyses simulating the model's behavior using materials properties for stainless steel and aluminum

  2. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question, in a total of 1728 benchmarking experiments, we rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
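    Forward selection, one of the two methods the study found effective, is a simple greedy loop: repeatedly add the variable that most improves the model fit. The sketch below pairs it with ordinary least squares rather than the random forests used in the paper, purely to keep the example self-contained; the data are synthetic:

```python
import numpy as np

# Greedy forward selection: at each step, add the descriptor that most
# reduces the training mean squared error of a least-squares fit.

def forward_selection(X, y, max_features):
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(max_features):
        best_err, best_j = None, None
        for j in remaining:
            cols = selected + [j]
            coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            err = np.mean((X[:, cols] @ coef - y) ** 2)
            if best_err is None or err < best_err:
                best_err, best_j = err, j
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))                           # 10 candidate descriptors
y = 3 * X[:, 2] - 2 * X[:, 7] + rng.normal(0, 0.1, 200)  # only 2 are informative
print(forward_selection(X, y, max_features=2))           # picks the informative columns
```

In practice the error would be estimated by cross-validation rather than on the training set, and the inner model would be whatever the study uses (here, random forests).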

  3. Closed-loop neuromorphic benchmarks

    CSIR Research Space (South Africa)

    Stewart, TC

    2015-11-01

    Terrence C. Stewart and Travis DeWolf (Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada), Ashley Kleinhans (Mobile Intelligent Autonomous Systems group, Council for Scientific and Industrial Research, Pretoria, South Africa) and Chris Eliasmith (University of Waterloo). Submitted to Frontiers in Neuroscience. Correspondence: Terrence C. Stewart.

  4. Investible benchmarks & hedge fund liquidity

    OpenAIRE

    Freed, Marc S; McMillan, Ben

    2011-01-01

    A lack of commonly accepted benchmarks for hedge fund performance has permitted hedge fund managers to attribute to skill returns that may actually accrue from market risk factors and illiquidity. Recent innovations in hedge fund replication permit us to estimate the extent of this misattribution. Using an option-based model, we find evidence that the value of liquidity options that investors implicitly grant managers when they invest may account for part or even all of hedge fund returns. C...

  5. A simplified approach to WWER-440 fuel assembly head benchmark

    International Nuclear Information System (INIS)

    Muehlbauer, P.

    2010-01-01

    The WWER-440 fuel assembly head benchmark was simulated with the FLUENT 12 code as a first step in validating the code for nuclear reactor safety analyses. Results of the benchmark, together with a comparison of results provided by other participants and results of measurement, will be presented in another paper by the benchmark organisers. This presentation is therefore focused on our approach to the simulation, as illustrated by case 323-34, which represents a peripheral assembly with five neighbours. All steps of the simulation and some lessons learned are described. The geometry of the computational region, supplied as a STEP file by the organizers of the benchmark, was first separated into two parts (the inlet part with the spacer grid, and the rest of the assembly head) in order to keep the size of the computational mesh manageable with regard to the hardware available (HP Z800 workstation with Intel Xeon four-core CPU at 3.2 GHz, 32 GB of RAM), and then further modified at places where the shape of the geometry would probably lead to highly distorted cells. Both parts of the geometry were connected via a boundary profile file generated at a cross section where the effect of the spacer grid is still felt but the effect of the outflow boundary condition used in the computations of the inlet part of the geometry is negligible. Computation proceeded in several steps: start with the basic mesh, the standard k-ε model of turbulence with standard wall functions, and first-order upwind numerical schemes; after convergence (scaled residuals lower than 10^-3) and local adaptation of near-wall meshes when needed, the realizable k-ε model of turbulence was used with second-order upwind numerical schemes for the momentum and energy equations. During the iterations, the area-averaged temperature at the thermocouples and the area-averaged outlet temperature, which are the main figures of merit of the benchmark, were also monitored. In this 'blind' phase of the benchmark, the effect of spacers was neglected. After results of measurements are available, standard validation

  6. Preliminary report of the comparison of multiple non-destructive assay techniques on LANL Plutonium Facility waste drums

    International Nuclear Information System (INIS)

    Bonner, C.; Schanfein, M.; Estep, R.

    1999-01-01

    Prior to disposal, nuclear waste must be accurately characterized to identify and quantify the radioactive content. The DOE Complex faces the daunting task of measuring nuclear material with both a wide range of masses and matrices. Similarly daunting can be the selection of non-destructive assay (NDA) techniques to efficiently perform the quantitative assay over the entire waste population. In fulfilling its role as a DOE Defense Programs nuclear User Facility/Technology Development Center, the Los Alamos National Laboratory Plutonium Facility recently tested three commercially built and owned mobile non-destructive assay (NDA) systems with special nuclear materials (SNM). Two independent commercial companies financed the testing of their three mobile NDA systems at the site. Contained within a single trailer are Canberra Industries' segmented gamma scanner/waste assay system (SGS/WAS) and neutron waste drum assay system (WDAS). The third system is a BNFL Instruments Inc. (formerly known as Pajarito Scientific Corporation) differential die-away imaging passive/active neutron (IPAN) counter. In an effort to increase the value of this comparison, additional NDA techniques at LANL were also used to measure these same drums. These comprise three tomographic gamma scanners (one mobile unit and two stationary) and one developmental differential die-away system. Although the drums are not certified standards, the authors hope that such a comparison will provide valuable data for those considering these different NDA techniques to measure their waste, as well as for the developers of the techniques

  7. Poster - 56: Preliminary comparison of FF- and FFF-VMAT for prostate plans with higher rectal dose

    International Nuclear Information System (INIS)

    Liu, Baochang; Darko, Johnson; Osei, Ernest

    2016-01-01

    Purpose: A recent retrospective study found that 53 patients previously treated to 78 Gy/39 fractions using flattened filtered (FF) 6X-VMAT at GRRCC had rectal DVHs more than one standard deviation higher than the average. This study investigated whether 6FFF or 10FFF beams could reduce these DVHs without compromising target coverage. Methods: Twenty patients’ plans were re-planned with 2-arc 6X-VMAT, 6FFF-VMAT and 10FFF-VMAT using the Eclipse TPS following departmental protocol. All plans had the same optimization and normalization, and were evaluated against the acceptance criteria from QUANTEC and Emami. Statistical differences in the mean dose to OARs (Dm) and the PTV homogeneity index (HI) between energies were tested using the paired-sample Wilcoxon signed-rank test (p<0.05). Beam delivery accuracy was checked on five patients using portal dosimetry (PD). Results: The PTV HI for 10FFF shows no statistical difference from 6X. All the OARs, except the left femoral head with 6FFF, have significantly lower Dm using 6FFF and 10FFF. There is no difference in the maximum doses to rectum and bladder, which are limited by the prescribed doses. Measurements show good agreement in the gamma evaluation (3%/3mm) for all energies. Conclusion: This preliminary study shows that doses to the OARs are reduced using 10FFF for the same target coverage. The plans using 6FFF result in lower doses to some OARs, and a statistically different PTV HI. All plans showed very good agreement with measurements.
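    The statistical test named in the abstract, the paired-sample Wilcoxon signed-rank test at p < 0.05, can be applied to paired plan metrics as sketched below. The dose values are invented for illustration (they are not the study's data), and scipy's implementation of the test is assumed available:

```python
from scipy.stats import wilcoxon

# Paired comparison: mean rectum dose for the same ten patients planned with
# two different beam energies. Each pair comes from one patient, so a
# paired-sample (signed-rank) test is appropriate.
rectum_mean_dose_6X = [42.1, 38.5, 45.0, 40.2, 39.8, 44.1, 41.3, 37.9, 43.6, 40.8]     # Gy
rectum_mean_dose_10FFF = [40.3, 37.1, 42.8, 39.1, 38.2, 42.2, 39.8, 36.9, 41.6, 39.5]  # Gy

stat, p = wilcoxon(rectum_mean_dose_6X, rectum_mean_dose_10FFF)
print(f"W = {stat}, p = {p:.4f}, significant at 0.05: {p < 0.05}")
```

The signed-rank test is chosen over a paired t-test when the dose differences cannot be assumed normally distributed, which is common for small planning-study samples.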

  8. Poster - 56: Preliminary comparison of FF- and FFF-VMAT for prostate plans with higher rectal dose

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Baochang; Darko, Johnson; Osei, Ernest [Grand River Regional Cancer Centre, Kitchener, Ontario (Canada)

    2016-08-15

    Purpose: A recent retrospective study found that 53 patients previously treated to 78 Gy/39 fractions using flattened filtered (FF) 6X-VMAT at GRRCC had rectal DVHs more than one standard deviation higher than the average. This study investigated whether 6FFF or 10FFF beams could reduce these DVHs without compromising target coverage. Methods: Twenty patients’ plans were re-planned with 2-arc 6X-VMAT, 6FFF-VMAT and 10FFF-VMAT using the Eclipse TPS following departmental protocol. All plans had the same optimization and normalization, and were evaluated against the acceptance criteria from QUANTEC and Emami. Statistical differences in the mean dose to OARs (Dm) and the PTV homogeneity index (HI) between energies were tested using the paired-sample Wilcoxon signed-rank test (p<0.05). Beam delivery accuracy was checked on five patients using portal dosimetry (PD). Results: The PTV HI for 10FFF shows no statistical difference from 6X. All the OARs, except the left femoral head with 6FFF, have significantly lower Dm using 6FFF and 10FFF. There is no difference in the maximum doses to rectum and bladder, which are limited by the prescribed doses. Measurements show good agreement in the gamma evaluation (3%/3mm) for all energies. Conclusion: This preliminary study shows that doses to the OARs are reduced using 10FFF for the same target coverage. The plans using 6FFF result in lower doses to some OARs, and a statistically different PTV HI. All plans showed very good agreement with measurements.

  9. Comparison of image quality between a mammography-dedicated monitor and a UHD 4K monitor, using a standard mammographic phantom: A preliminary study

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Ji Young; Cha, Soon Joo; Hong, Sung Hwan; Kim, Su Young; Kim, Yong Hoon; Kim, You Sung; Kim, Jeong A [Dept. of Radiology, Inje Unveristy Ilsan Paik Hospital, Goyang (Korea, Republic of)

    2017-03-15

    Using standard mammographic phantom images, we compared the image quality obtained with a dedicated 5-megapixel mammography monitor (5M) and a UHD 4K (4K) monitor with a digital imaging and communications in medicine (DICOM) display, to investigate the possibility of clinical application of 4K monitors. Images of a mammographic phantom at three different exposures (auto-exposure, overexposure and underexposure) were obtained, and six radiologists independently evaluated the images on 5M and 4K without image modulation, by scoring the fibers, groups of specks and masses within the phantom image. The mean score of each object on both monitors was independently analyzed using t-tests, and interobserver reliability was assessed with the intraclass correlation coefficient (ICC) in SPSS. The overall mean scores of fiber, group of specks, and mass on 5M were 4.25, 3.92, and 3.28, respectively, and the scores obtained on the 4K monitor were 3.81, 3.58, and 3.14, respectively. No statistical difference was seen in the scores of fiber and mass between the two monitors under any exposure condition, but the score of group of specks on 4K was statistically lower overall (p = 0.0492) and under underexposure conditions (p = 0.012). The ICC for interobserver reliability was excellent (0.874). Our study suggests that, since there was no significant difference in image quality between the two monitors for mammographic phantom images, the 4K monitor could be used for clinical studies. Since this is a small preliminary study using phantom images, the result may differ for actual mammographic images, and subsequent investigation with clinical mammographic images is required.
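    The intraclass correlation coefficient reported above can be computed from the two-way ANOVA mean squares. The sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single rater); the abstract does not state which ICC variant was used, and the ratings here are invented:

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1) for an (n subjects x k raters) score matrix Y."""
    n, k = Y.shape
    grand = Y.mean()
    msr = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)  # between-subject MS
    msc = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)  # between-rater MS
    sse = np.sum((Y - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                            # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Six raters scoring nine phantom objects (made-up scores on a 0-5 scale):
# a shared "true" visibility per object plus independent rater noise.
rng = np.random.default_rng(2)
truth = np.array([4.5, 4.0, 3.5, 4.2, 3.8, 3.2, 3.3, 3.1, 2.9])
ratings = truth[:, None] + rng.normal(0, 0.2, (9, 6))
print(f"ICC(2,1) = {icc_2_1(ratings):.3f}")
```

Values near 1 indicate that rater-to-rater disagreement is small relative to the true differences between objects; an ICC of 0.874 is conventionally rated "excellent".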

  10. A Preliminary Comparison of Motor Learning Across Different Non-invasive Brain Stimulation Paradigms Shows No Consistent Modulations

    Directory of Open Access Journals (Sweden)

    Virginia Lopez-Alonso

    2018-04-01

    Non-invasive brain stimulation (NIBS) has been widely explored as a way to safely modulate brain activity and alter human performance for nearly three decades. Research using NIBS has grown exponentially within the last decade with promising results across a variety of clinical and healthy populations. However, recent work has shown high inter-individual variability and a lack of reproducibility of previous results. Here, we conducted a small preliminary study to explore the effects of three of the most commonly used excitatory NIBS paradigms over the primary motor cortex (M1) on motor learning (Sequential Visuomotor Isometric Pinch Force Tracking Task) and secondarily relate changes in motor learning to changes in cortical excitability (MEP amplitude and SICI). We compared anodal transcranial direct current stimulation (tDCS), paired associative stimulation (PAS25), and intermittent theta burst stimulation (iTBS), along with a sham tDCS control condition. Stimulation was applied prior to motor learning. Participants (n = 28) were randomized into one of the four groups and were trained on a skilled motor task. Motor learning was measured immediately after training (online), 1 day after training (consolidation), and 1 week after training (retention). We did not find consistent differential effects on motor learning or cortical excitability across groups. Within the boundaries of our small sample sizes, we then assessed effect sizes across the NIBS groups that could help power future studies. These results, which require replication with larger samples, are consistent with previous reports of small and variable effect sizes of these interventions on motor learning.

  11. A Preliminary Comparison of Motor Learning Across Different Non-invasive Brain Stimulation Paradigms Shows No Consistent Modulations

    Science.gov (United States)

    Lopez-Alonso, Virginia; Liew, Sook-Lei; Fernández del Olmo, Miguel; Cheeran, Binith; Sandrini, Marco; Abe, Mitsunari; Cohen, Leonardo G.

    2018-01-01

    Non-invasive brain stimulation (NIBS) has been widely explored as a way to safely modulate brain activity and alter human performance for nearly three decades. Research using NIBS has grown exponentially within the last decade with promising results across a variety of clinical and healthy populations. However, recent work has shown high inter-individual variability and a lack of reproducibility of previous results. Here, we conducted a small preliminary study to explore the effects of three of the most commonly used excitatory NIBS paradigms over the primary motor cortex (M1) on motor learning (Sequential Visuomotor Isometric Pinch Force Tracking Task) and secondarily relate changes in motor learning to changes in cortical excitability (MEP amplitude and SICI). We compared anodal transcranial direct current stimulation (tDCS), paired associative stimulation (PAS25), and intermittent theta burst stimulation (iTBS), along with a sham tDCS control condition. Stimulation was applied prior to motor learning. Participants (n = 28) were randomized into one of the four groups and were trained on a skilled motor task. Motor learning was measured immediately after training (online), 1 day after training (consolidation), and 1 week after training (retention). We did not find consistent differential effects on motor learning or cortical excitability across groups. Within the boundaries of our small sample sizes, we then assessed effect sizes across the NIBS groups that could help power future studies. These results, which require replication with larger samples, are consistent with previous reports of small and variable effect sizes of these interventions on motor learning. PMID:29740271

  12. Comparison of effects of uncomplicated canine babesiosis and canine normovolaemic anaemia on abdominal splanchnic Doppler characteristics - a preliminary investigation

    Directory of Open Access Journals (Sweden)

    L.M. Koma

    2005-06-01

    A preliminary study was conducted to compare uncomplicated canine babesiosis (CB) and experimentally induced normovolaemic anaemia (EA) using Doppler ultrasonography of abdominal splanchnic vessels. Fourteen dogs with uncomplicated CB were investigated together with 11 healthy Beagles during severe EA, moderate EA and the physiological state as a control group. Canine babesiosis was compared with severe EA, moderate EA and the physiological state using Doppler variables of the abdominal aorta, cranial mesenteric artery (CMA), coeliac, left renal and interlobar, and hilar splenic arteries, and the main portal vein. Patterns of haemodynamic changes during CB and EA were broadly similar and were characterised by elevations in velocities and reductions in resistance indices in all vessels except the renal arteries when compared with the physiological state. Aortic and CMA peak systolic velocities and CMA end diastolic and time-averaged mean velocities in CB were significantly lower (P < 0.023) than those in severe EA. Patterns of renal haemodynamic changes during CB and EA were similar. However, the renal patterns differed from those of the aortic and gastrointestinal arteries, with elevations in vascular resistance indices, a reduction in end diastolic velocity and unchanged time-averaged mean velocity. The left renal artery resistive index in CB was significantly higher (P < 0.025) than those in EA and the physiological state. Renal interlobar artery resistive and pulsatility indices in CB were significantly higher (P < 0.016) than those of moderate EA and the physiological state. The similar haemodynamic patterns in CB and EA are attributable to anaemia, while significant differences may additionally be attributed to pathophysiological factors peculiar to CB.

  13. Benchmark models, planes, lines and points for future SUSY searches at the LHC

    International Nuclear Information System (INIS)

    AbdusSalam, S.S.; Allanach, B.C.; Dreiner, H.K.

    2012-03-01

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  14. Benchmark models, planes, lines and points for future SUSY searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    AbdusSalam, S.S. [The Abdus Salam International Centre for Theoretical Physics, Trieste (Italy); Allanach, B.C. [Cambridge Univ. (United Kingdom). Dept. of Applied Mathematics and Theoretical Physics; Dreiner, H.K. [Bonn Univ. (DE). Bethe Center for Theoretical Physics and Physikalisches Inst.] (and others)

    2012-03-15

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  15. Benchmark Models, Planes, Lines and Points for Future SUSY Searches at the LHC

    CERN Document Server

    AbdusSalam, S S; Dreiner, H K; Ellis, J; Ellwanger, U; Gunion, J; Heinemeyer, S; Krämer, M; Mangano, M L; Olive, K A; Rogerson, S; Roszkowski, L; Schlaffer, M; Weiglein, G

    2011-01-01

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  16. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as the operating system, and the HEPSPEC 2006 (HS06) benchmark suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We found no significant influence of the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  17. HS06 benchmark for an ARM server

    International Nuclear Information System (INIS)

    Kluth, Stefan

    2014-01-01

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as the operating system, and the HEPSPEC 2006 (HS06) benchmark suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We found no significant influence of the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.
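    SPEC-style suites such as the one underlying HS06 typically condense many per-benchmark runtime ratios into a single figure with a geometric mean. As a minimal illustrative sketch only (the generic SPEC-style aggregation, not necessarily the exact HS06 run rules, which are defined by HEPiX):

    ```python
    import math

    def aggregate_score(ratios):
        """Combine per-benchmark performance ratios into one score using a
        geometric mean (assumed SPEC-style aggregation, for illustration)."""
        assert ratios, "need at least one benchmark ratio"
        return math.exp(sum(math.log(r) for r in ratios) / len(ratios))
    ```

    A geometric mean is the conventional choice here because it treats a 2x speedup on any sub-benchmark identically, regardless of that benchmark's absolute runtime.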

  18. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement to the original benchmark book, which was first published in February 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.

  19. Benchmarks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The National Flood Hazard Layer (NFHL) data incorporates all Digital Flood Insurance Rate Map (DFIRM) databases published by FEMA, and any Letters Of Map Revision...

  20. TU-CD-304-03: Dosimetric Verification and Preliminary Comparison of Dynamic Wave Arc for SBRT Treatments

    Energy Technology Data Exchange (ETDEWEB)

    Burghelea, M [UZ BRUSSEL, Brussels (Belgium); BRAINLAB AG, Munich (Germany); Babes Bolyai University, Cluj-Napoca (Romania); Poels, K; Gevaert, T; Tournel, K; Dhont, J; De Ridder, M; Verellen, D [UZ BRUSSEL, Brussels (Belgium); Hung, C [BRAINLAB AG, Munich (Germany); Eriksson, K [RAYSEARCH LABORATORIES AB, Stockholm (Sweden); Simon, V [Babes Bolyai University, Cluj-Napoca (Romania)

    2015-06-15

    Purpose: To evaluate the potential dosimetric benefits and verify the delivery accuracy of Dynamic Wave Arc, a novel treatment delivery approach for the Vero SBRT system. Methods: Dynamic Wave Arc (DWA) combines simultaneous movement of gantry/ring with inverse planning optimization, resulting in an uninterrupted non-coplanar arc delivery technique. Thirteen complex SBRT cases previously treated with 8–10 conformal static beams (CRT) were evaluated in this study. Eight primary centrally-located NSCLC (prescription dose 4×12Gy or 8×7.5Gy) and five oligometastatic cases (2×2 lesions, 10×5Gy) were selected. DWA and coplanar VMAT plans, partially with dual arcs, were generated for each patient using identical objective functions for target volumes and OARs on the same TPS (RayStation, RaySearch Laboratories). Dosimetric differences and delivery times among these three planning schemes were evaluated. The DWA delivery accuracy was assessed using the Delta4 diode array phantom (ScandiDos AB). The gamma analysis was performed with the 3%/3mm dose and distance-to-agreement criteria. Results: The target conformity for CRT, VMAT and DWA was 0.95±0.07, 0.96±0.04 and 0.97±0.04, while the low-dose spillage gradients were 5.52±1.36, 5.44±1.11, and 5.09±0.98, respectively. Overall, the bronchus, esophagus and spinal cord maximum doses were similar between VMAT and DWA, but greatly reduced compared with CRT. For the lung cases, the mean dose and V20Gy were lower for the arc techniques compared with CRT, while for the liver cases, the mean dose and the V30Gy presented slightly higher values. The average delivery times of VMAT and DWA were 2.46±1.10 min and 4.25±1.67 min, with VMAT presenting the shorter treatment time in all cases. The DWA dosimetric verification presented an average gamma index passing rate of 95.73±1.54% (range 94.2%–99.8%). Conclusion: Our preliminary data indicated that the DWA is deliverable with clinically acceptable accuracy and has the potential to

  1. TU-CD-304-03: Dosimetric Verification and Preliminary Comparison of Dynamic Wave Arc for SBRT Treatments

    International Nuclear Information System (INIS)

    Burghelea, M; Poels, K; Gevaert, T; Tournel, K; Dhont, J; De Ridder, M; Verellen, D; Hung, C; Eriksson, K; Simon, V

    2015-01-01

    Purpose: To evaluate the potential dosimetric benefits and verify the delivery accuracy of Dynamic Wave Arc, a novel treatment delivery approach for the Vero SBRT system. Methods: Dynamic Wave Arc (DWA) combines simultaneous movement of gantry/ring with inverse planning optimization, resulting in an uninterrupted non-coplanar arc delivery technique. Thirteen complex SBRT cases previously treated with 8–10 conformal static beams (CRT) were evaluated in this study. Eight primary centrally-located NSCLC (prescription dose 4×12Gy or 8×7.5Gy) and five oligometastatic cases (2×2 lesions, 10×5Gy) were selected. DWA and coplanar VMAT plans, partially with dual arcs, were generated for each patient using identical objective functions for target volumes and OARs on the same TPS (RayStation, RaySearch Laboratories). Dosimetric differences and delivery times among these three planning schemes were evaluated. The DWA delivery accuracy was assessed using the Delta4 diode array phantom (ScandiDos AB). The gamma analysis was performed with the 3%/3mm dose and distance-to-agreement criteria. Results: The target conformity for CRT, VMAT and DWA was 0.95±0.07, 0.96±0.04 and 0.97±0.04, while the low-dose spillage gradients were 5.52±1.36, 5.44±1.11, and 5.09±0.98, respectively. Overall, the bronchus, esophagus and spinal cord maximum doses were similar between VMAT and DWA, but greatly reduced compared with CRT. For the lung cases, the mean dose and V20Gy were lower for the arc techniques compared with CRT, while for the liver cases, the mean dose and the V30Gy presented slightly higher values. The average delivery times of VMAT and DWA were 2.46±1.10 min and 4.25±1.67 min, with VMAT presenting the shorter treatment time in all cases. The DWA dosimetric verification presented an average gamma index passing rate of 95.73±1.54% (range 94.2%–99.8%). Conclusion: Our preliminary data indicated that the DWA is deliverable with clinically acceptable accuracy and has the potential to
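    The 3%/3mm gamma analysis used in the abstract above compares each reference dose point against nearby evaluated points, normalizing dose difference and distance-to-agreement separately. A minimal 1-D sketch of a global gamma index (an illustrative textbook-style implementation, not the Delta4 vendor algorithm):

    ```python
    import numpy as np

    def gamma_index(ref, evl, dx, dose_tol=0.03, dta_tol=3.0):
        """1-D global gamma: ref and evl are dose profiles on the same grid
        (spacing dx in mm); dose difference is normalized to dose_tol times
        the reference maximum, distance to dta_tol (mm)."""
        norm = dose_tol * ref.max()
        x = np.arange(len(ref)) * dx
        gammas = np.empty(len(ref))
        for i, (xi, di) in enumerate(zip(x, ref)):
            dist2 = ((x - xi) / dta_tol) ** 2      # squared distance term
            dose2 = ((evl - di) / norm) ** 2       # squared dose-difference term
            gammas[i] = np.sqrt((dist2 + dose2).min())
        return gammas

    # passing rate = fraction of points with gamma <= 1
    ```

    A plan "passes" at a point when the best compromise between dose error and spatial shift stays inside the 3%/3mm ellipse, i.e. gamma ≤ 1.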

  2. Benchmarking in Thoracic Surgery. Third Edition.

    Science.gov (United States)

    Freixinet Gilart, Jorge; Varela Simó, Gonzalo; Rodríguez Suárez, Pedro; Embún Flor, Raúl; Rivas de Andrés, Juan José; de la Torre Bravos, Mercedes; Molins López-Rodó, Laureano; Pac Ferrer, Joaquín; Izquierdo Elena, José Miguel; Baschwitz, Benno; López de Castro, Pedro E; Fibla Alfara, Juan José; Hernando Trancho, Florentino; Carvajal Carrasco, Ángel; Canalís Arrayás, Emili; Salvatierra Velázquez, Ángel; Canela Cardona, Mercedes; Torres Lanzas, Juan; Moreno Mata, Nicolás

    2016-04-01

    Benchmarking entails continuous comparison of efficacy and quality among products and activities, with the primary objective of achieving excellence. To analyze the results of benchmarking performed in 2013 on clinical practices undertaken in 2012 in 17 Spanish thoracic surgery units. Study data were obtained from the basic minimum data set for hospitalization, registered in 2012. Data from hospital discharge reports were submitted by the participating groups, but staff from the corresponding departments did not intervene in data collection. Study cases all involved hospital discharges recorded in the participating sites. Episodes included were respiratory surgery (Major Diagnostic Category 04, Surgery), and those of the thoracic surgery unit. Cases were labelled using codes from the International Classification of Diseases, 9th revision, Clinical Modification. The refined diagnosis-related groups classification was used to evaluate differences in severity and complexity of cases. General parameters (number of cases, mean stay, complications, readmissions, mortality, and activity) varied widely among the participating groups. Specific interventions (lobectomy, pneumonectomy, atypical resections, and treatment of pneumothorax) also varied widely. As in previous editions, practices among participating groups varied considerably. Some areas for improvement emerge: admission processes need to be standardized to avoid urgent admissions and to improve pre-operative care; hospital discharges should be streamlined and discharge reports improved by including all procedures and complications. Some units have parameters which deviate excessively from the norm, and these sites need to review their processes in depth. Coding of diagnoses and comorbidities is another area where improvement is needed. Copyright © 2015 SEPAR. Published by Elsevier Espana. All rights reserved.

  3. Preliminary comparison of the conventional and quasi-snowflake divertor configurations with the 2D code EDGE2D/EIRENE in the FAST tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Viola, B.; Maddaluno, G.; Pericoli Ridolfini, V. [EURATOM-ENEA Association, C.R. Frascati, Via E. Fermi 45, 00044 Frascati (Rome) (Italy); Corrigan, G.; Harting, D. [Culham Centre of Fusion Energy, EURATOM-Association, Abingdon (United Kingdom); Mattia, M. [Dipartimento di Informatica, Sistemi e Produzione, Universita di Roma, Tor Vergata, Via del Politecnico, 00133 Roma (Italy); Zagorski, R. [Institute of Plasma Physics and Laser Microfusion-EURATOM Association, 01-497 Warsaw (Poland)

    2014-06-15

    The new magnetic configurations for tokamak divertors, snowflake and super-X, proposed to mitigate the problem of power exhaust in reactors, have clearly evidenced the need for accurate and reliable modeling of the physics governing the interaction with the plates. The initial effort undertaken jointly by ENEA and IPPLM focused on exploiting a simple and versatile modeling tool, namely the 2D TECXY code, to obtain a preliminary comparison between the conventional and snowflake configurations for the proposed new device FAST, which should realize an edge plasma with properties quite close to those of a reactor. The very interesting features found for the snowflake, namely a power load mitigation much larger than expected directly from the change of the magnetic topology, have further pushed us to check these results with the more sophisticated computational tool EDGE2D coupled with the neutral code module EIRENE. After preparatory work carried out to adapt this code combination to deal with non-conventional, single-null equilibria, and in particular with second-order nulls in the poloidal field generated in the snowflake configuration, in this paper we describe the first activity to compare these codes and discuss the first results obtained for FAST. The outcome of these EDGE2D runs is in qualitative agreement with that of TECXY, confirming the potential benefit obtainable from a snowflake configuration. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  4. Comparison of thoracic kyphosis degree, trunk muscle strength and joint position sense among healthy and osteoporotic elderly women: a cross-sectional preliminary study.

    Science.gov (United States)

    Granito, Renata Neves; Aveiro, Mariana Chaves; Renno, Ana Claudia Muniz; Oishi, Jorge; Driusso, Patricia

    2012-01-01

    Increased thoracic kyphosis is one of the most disfiguring consequences of osteoporotic spine fractures in the elderly. However, the mechanisms involved in the increase of the kyphosis degree among osteoporotic women are not completely understood. Therefore, the aims of this cross-sectional preliminary study were to compare thoracic kyphosis degree, trunk muscle peak torque and joint position sense between healthy and osteoporotic elderly women, and to investigate possible factors affecting the kyphosis degree. Twenty women were selected for 2 groups: healthy (n=10) and osteoporotic (n=10) elderly women. Bone mineral density (BMD), thoracic kyphosis degree, trunk muscle peak torque and joint position sense were measured. Differences between groups were analyzed by Student's t-test for unpaired data. Correlations between variables were assessed by Pearson's correlation coefficient. The level of significance used for all comparisons was 5% (p≤0.05). We observed that the osteoporotic women demonstrated a significantly higher degree of kyphosis and lower trunk extensor muscle peak torque. Moreover, BMD had a negative correlation with the thoracic kyphosis degree. Kyphosis degree showed a negative correlation with extensor muscle strength and with the joint position sense index. This study suggests that lower BMD may be associated with a higher kyphosis degree, lower trunk extensor muscle strength and impaired joint position sense. It is also suggested that lower extensor muscle strength may be a factor contributing to the increase in thoracic kyphosis degree. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
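    The correlations reported in the study above are Pearson coefficients. As a self-contained sketch of the standard formula (a generic stdlib implementation, not the authors' analysis code):

    ```python
    import math

    def pearson_r(xs, ys):
        """Pearson correlation coefficient between two equal-length samples:
        covariance of the centered data divided by the product of their
        standard deviations; returns a value in [-1, 1]."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        dx = [x - mx for x in xs]
        dy = [y - my for y in ys]
        num = sum(a * b for a, b in zip(dx, dy))
        den = math.sqrt(sum(a * a for a in dx) * sum(b * b for b in dy))
        return num / den
    ```

    A negative r, as between BMD and kyphosis degree here, means that higher values of one variable tend to accompany lower values of the other.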

  5. Preliminary Analysis of IR Cloudtop Temperatures of Sprite Producing Storms over Argentina Observed from Brazil, and Comparison with US Case Study

    Science.gov (United States)

    Sao Sabbas, F. T.; Pautet, P.; Taylor, M. J.; Pinto, O.; Thomas, J.; Solorzano, N.; Holzworth, R.; Bailey, M.; Schuch, N.; Michels, M.

    2006-12-01

    We will present the preliminary results of the cloudtop temperature characteristics of two very active sprite-producing Mesoscale Convective Systems (MCSs) which occurred over Argentina on the evenings of February 22 and March 04, 2006. These were prolific storms; e.g., the first one produced more than 400 TLEs, including sprites, halos and possibly elves. The events were observed from the INPE Observatorio Espacial Sul-OES (Southern Space Observatory), located in the center of Rio Grande do Sul, the southernmost state of Brazil. Except for the lack of triangulated locations for the sprites and halos recorded, the methodology used for this study is the same as in Sao Sabbas and Sentman [2003], where a sprite-producing storm over the central U.S. was observed during the night of July 22, 1996. We analyzed the IR satellite data provided by GOES-12 and the lightning information from the Brazilian Lightning Detection Network in combination with data from the World Wide Lightning Location Network (WWLLN). We will also show a comparison between the obtained results and the results presented in the Sao Sabbas and Sentman [2003] paper. Sao Sabbas, F.T. and D. D. Sentman, Dynamical Relationship of IR Cloudtop Temperatures With Occurrence Rates of Cloud-to-Ground Lightning and Sprites, Geophys. Res. Lett., 30 (5), 40-1-40-4, 2003.

  6. Quality benchmarking methodology: Case study of finance and culture industries in Latvia

    Directory of Open Access Journals (Sweden)

    Ieva Zemīte

    2011-01-01

    Political, socio-economic and cultural changes that have taken place in the world during the last years have influenced all spheres. Constant improvements are necessary to survive in competitive and shrinking markets. This sets high quality standards for the service industries. It is therefore important to compare quality criteria to ascertain which practices achieve superior performance levels. At present, companies in Latvia do not carry out mutual benchmarking; as a result, they do not know how they rank against their peers in terms of quality, and they do not see the benefits of sharing information and of benchmarking. The purpose of this paper is to determine the criteria of qualitative benchmarking and to investigate the use of benchmarking for quality in service industries, particularly the finance and culture sectors in Latvia, in order to determine the key driving factors of quality, to explore internal and foreign benchmarks, and to reveal the full potential of input reduction and efficiency growth for the aforementioned industries. Case study and other tools are used to assess the readiness of the companies for benchmarking. Certain key factors are examined for their impact on quality criteria. The results are based on research conducted in professional associations in the defined fields (insurance and theatre). Originality/value: this is the first study that adopts benchmarking models for measuring quality criteria and readiness for mutual comparison in the insurance and theatre industries in Latvia.

  7. VENUS-2 Benchmark Problem Analysis with HELIOS-1.9

    International Nuclear Information System (INIS)

    Jeong, Hyeon-Jun; Choe, Jiwon; Lee, Deokjung

    2014-01-01

    Since reliable benchmark data are available from the OECD/NEA report on the VENUS-2 MOX benchmark problem, users can assess the credibility of a code by comparing against the benchmark results. In this paper, the solution of the VENUS-2 benchmark problem from HELIOS 1.9 using the ENDF/B-VI library (NJOY91.13) is compared with the result from HELIOS 1.7, taking the MCNP-4B result as reference data. The comparison covers pin cell, assembly, and core calculations. The eigenvalues are assessed by comparison with results from other codes. In the case of the UOX and MOX assemblies, the differences from the MCNP-4B results are about 10 pcm. However, there is some inaccuracy in the baffle-reflector condition, and relatively large differences were found in the MOX-reflector assembly and in the core calculation. Although HELIOS 1.9 utilizes an inflow transport correction, this appears to have a limited effect on the error in the baffle-reflector condition
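    Eigenvalue agreement in such code-to-code comparisons is conventionally quoted in pcm (10^-5 in reactivity). A minimal sketch of the unit conversion, assuming the common reactivity definition ρ = 1 - 1/k (an illustrative helper, not from the paper):

    ```python
    def keff_diff_pcm(k_ref, k_test):
        """Reactivity difference between two multiplication factors in pcm,
        using rho = 1 - 1/k, so delta-rho = 1/k_ref - 1/k_test."""
        return 1.0e5 * (1.0 / k_ref - 1.0 / k_test)
    ```

    For k-effective values near unity this is numerically close to the simpler 1e5·(k_test - k_ref), which is why ~10 pcm corresponds to agreement in the fifth decimal place of k.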

  8. A biosegmentation benchmark for evaluation of bioimage analysis methods

    Directory of Open Access Journals (Sweden)

    Kvilekval Kristian

    2009-11-01

    Background: We present a biosegmentation benchmark that includes infrastructure, datasets with associated ground truth, and validation methods for biological image analysis. The primary motivation for creating this resource comes from the fact that it is very difficult, if not impossible, for an end-user to choose from the wide range of segmentation methods available in the literature for a particular bioimaging problem. No single algorithm is likely to be equally effective on a diverse set of images, and each method has its own strengths and limitations. We hope that our benchmark resource will be of considerable help both to bioimaging researchers looking for novel image processing methods and to image processing researchers exploring application of their methods to biology. Results: Our benchmark consists of different classes of images and ground truth data, ranging in scale from subcellular and cellular to tissue level, each of which poses its own set of challenges to image analysis. The associated ground truth data can be used to evaluate the effectiveness of different methods, to improve methods and to compare results. Standard evaluation methods and some analysis tools are integrated into a database framework that is available online at http://bioimage.ucsb.edu/biosegmentation/. Conclusion: This online benchmark will facilitate integration and comparison of image analysis methods for bioimages. While the primary focus is on biological images, we believe that the dataset and infrastructure will be of interest to researchers and developers working with biological image analysis, image segmentation and object tracking in general.
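    Scoring a segmentation against ground truth, as the benchmark above enables, usually reduces to an overlap measure between predicted and true masks. A generic sketch of the Dice coefficient (a widely used overlap metric, not necessarily the benchmark's own scoring tool):

    ```python
    def dice_coefficient(pred, truth):
        """Dice overlap between two binary masks given as collections of
        pixel coordinates: 1.0 means perfect agreement, 0.0 no overlap."""
        pred, truth = set(pred), set(truth)
        if not pred and not truth:
            return 1.0  # both empty: trivially identical
        return 2.0 * len(pred & truth) / (len(pred) + len(truth))
    ```

    Dice weights the intersection twice, so it is more forgiving of small boundary disagreements than the Jaccard index computed on the same masks.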

  9. Compilation of benchmark results for fusion related Nuclear Data

    International Nuclear Information System (INIS)

    Maekawa, Fujio; Wada, Masayuki; Oyama, Yukio; Ichihara, Chihiro; Makita, Yo; Takahashi, Akito

    1998-11-01

    This report compiles results of benchmark tests for validation of evaluated nuclear data to be used in nuclear designs of fusion reactors. Part of the results was obtained under activities of the Fusion Neutronics Integral Test Working Group organized by members of both the Japan Nuclear Data Committee and the Reactor Physics Committee. The following three benchmark experiments were employed for the tests: (i) the leakage neutron spectrum measurement experiments from slab assemblies at the D-T neutron source at FNS/JAERI, (ii) in-situ neutron and gamma-ray measurement experiments (so-called clean benchmark experiments) also at FNS, and (iii) the pulsed sphere experiments for leakage neutron and gamma-ray spectra at the D-T neutron source facility of Osaka University, OKTAVIAN. The evaluated nuclear data tested were JENDL-3.2, JENDL Fusion File, FENDL/E-1.0 and newly selected data for FENDL/E-2.0. Comparisons of benchmark calculations with the experiments for twenty-one elements, i.e., Li, Be, C, N, O, F, Al, Si, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zr, Nb, Mo, W and Pb, are summarized. (author). 65 refs

  10. Benchmarking of radiological departments. Starting point for successful process optimization

    International Nuclear Information System (INIS)

    Busch, Hans-Peter

    2010-01-01

    Continuous optimization of the process of organization and medical treatment is part of the successful management of radiological departments. The focus of this optimization can be cost units such as CT and MRI or the radiological parts of total patient treatment. Key performance indicators for process optimization are cost-effectiveness, service quality and quality of medical treatment. The potential for improvements can be seen by comparison (benchmark) with other hospitals and radiological departments. Clear definitions of key data and criteria are absolutely necessary for comparability. There is currently little information in the literature regarding the methodology and application of benchmarks especially from the perspective of radiological departments and case-based lump sums, even though benchmarking has frequently been applied to radiological departments by hospital management. The aim of this article is to describe and discuss systematic benchmarking as an effective starting point for successful process optimization. This includes the description of the methodology, recommendation of key parameters and discussion of the potential for cost-effectiveness analysis. The main focus of this article is cost-effectiveness (efficiency and effectiveness) with respect to cost units and treatment processes. (orig.)

  11. Blood flow in cerebral aneurysms: comparison of phase contrast magnetic resonance and computational fluid dynamics - preliminary experience

    Energy Technology Data Exchange (ETDEWEB)

    Karmonik, C.; Benndorf, G. [The Methodist Hospital Research Inst., Houston (United States). Radiology; Klucznik, R. [The Methodist Hospital, Houston (United States). Radiology

    2008-03-15

    Purpose: computational fluid dynamics (CFD) simulations are increasingly used to model cerebral aneurysm hemodynamics. We investigated the capability of phase contrast magnetic resonance imaging (pcMRI), guided by specialized software for optimal slice definition (NOVA, Vassol Inc.) as a non-invasive method to measure intra-aneurysmal blood flow patterns in-vivo. In a novel approach, these blood flow patterns measured with pcMRI were qualitatively compared to the ones calculated with CFD. Materials and methods: the volumetric inflow rates into three unruptured cerebral aneurysms and the temporal variations of the intra-aneurysmal blood flow patterns were recorded with pcMRI. Transient CFD simulations were performed on geometric models of these aneurysms derived from 3D digital subtraction angiograms. Calculated intra-aneurysmal blood flow patterns were compared at the times of maximum and minimum arterial inflow to the ones measured with pcMRI and the temporal variations of these patterns during the cardiac cycle were investigated. Results: in all three aneurysms, the main features of intra-aneurysmal flow patterns obtained with pcMRI consisted of areas with positive velocity components and areas with negative velocity components. The measured velocities ranged from approx. ±60 to ±100 cm/sec. Comparison with calculated CFD simulations showed good correlation with regard to the spatial distribution of these areas, while differences in calculated magnitudes of velocities were found. (orig.)

  12. Blood flow in cerebral aneurysms: comparison of phase contrast magnetic resonance and computational fluid dynamics - preliminary experience

    International Nuclear Information System (INIS)

    Karmonik, C.; Benndorf, G.; Klucznik, R.

    2008-01-01

    Purpose: computational fluid dynamics (CFD) simulations are increasingly used to model cerebral aneurysm hemodynamics. We investigated the capability of phase contrast magnetic resonance imaging (pcMRI), guided by specialized software for optimal slice definition (NOVA, Vassol Inc.) as a non-invasive method to measure intra-aneurysmal blood flow patterns in-vivo. In a novel approach, these blood flow patterns measured with pcMRI were qualitatively compared to the ones calculated with CFD. Materials and methods: the volumetric inflow rates into three unruptured cerebral aneurysms and the temporal variations of the intra-aneurysmal blood flow patterns were recorded with pcMRI. Transient CFD simulations were performed on geometric models of these aneurysms derived from 3D digital subtraction angiograms. Calculated intra-aneurysmal blood flow patterns were compared at the times of maximum and minimum arterial inflow to the ones measured with pcMRI and the temporal variations of these patterns during the cardiac cycle were investigated. Results: in all three aneurysms, the main features of intra-aneurysmal flow patterns obtained with pcMRI consisted of areas with positive velocity components and areas with negative velocity components. The measured velocities ranged from approx. ±60 to ±100 cm/sec. Comparison with calculated CFD simulations showed good correlation with regard to the spatial distribution of these areas, while differences in calculated magnitudes of velocities were found. (orig.)

  13. A comparison of intellectual assessments over video conferencing and in-person for individuals with ID: preliminary data.

    Science.gov (United States)

    Temple, V; Drummond, C; Valiquette, S; Jozsvai, E

    2010-06-01

    Video conferencing (VC) technology has great potential to increase accessibility to healthcare services for those living in rural or underserved communities. Previous studies have had some success in validating a small number of psychological tests for VC administration; however, VC has not been investigated for use with persons with intellectual disabilities (ID). A comparison of test results for two well known and widely used assessment instruments was undertaken to establish if scores for VC administration would differ significantly from in-person assessments. Nineteen individuals with ID aged 23-63 were assessed once in-person and once over VC using the Wechsler Abbreviated Scale of Intelligence (WASI) and the Beery-Buktenica Test of Visual-Motor Integration (VMI). Highly similar results were found for test scores. Full-scale IQ on the WASI and standard scores for the VMI were found to be very stable across the two administration conditions, with a mean difference of less than one IQ point/standard score. Video conferencing administration does not appear to alter test results significantly for overall score on a brief intelligence test or a test of visual-motor integration.

  14. Value of micro-CT as an Investigative Tool for Osteochondritis Dissecans. A preliminary study with comparison to histology

    International Nuclear Information System (INIS)

    Mohr, A.; Bergmann, I.; Muhle, C.; Heller, M.; Heiss, C.; Schrader, C.; Roemer, F.W.; Lynch, J.A.; Genant, H.K.

    2003-01-01

    Purpose: To evaluate micro computed tomography (micro-CT) for the assessment of osteochondritis dissecans in comparison with histology. Material and Methods: Osteochondritis dissecans lesions of 3 patients were evaluated using micro-CT (0.125 mA, 40 keV, 60 μm slice thickness, 60 μm isotropic resolution, entire sample) and light microscopy (toluidine blue, 3-5 μm slice thickness). The methods were compared regarding preparation time, detectability of tissue types and morphologic features of bone and cartilage. Results: Non-destructive micro-CT imaging of the entire sample was faster than histologic preparation of a single slice for light microscopy. Morphologic features of bone and cartilage could be imaged in a way comparable to histology. It was not possible to image cells or different tissue types of bone and cartilage with micro-CT. Conclusion: Micro-CT is a fast, non-destructive tool that may be a supplement or, if detailed histologic information is not necessary, an alternative to light microscopy for the investigation of osteochondritis dissecans. Keywords: osteochondritis dissecans; micro-CT; histology; comparative investigation

  15. Stationary PWR-calculations by means of LWRSIM at the NEACRP 3D-LWRCT benchmark

    International Nuclear Information System (INIS)

    Van de Wetering, T.F.H.

    1993-01-01

    Within the framework of participation in an international benchmark, calculations were executed with an adjusted version of the computer code Light Water Reactor SIMulation (LWRSIM) for three-dimensional reactor core calculations of pressurized water reactors. The 3-D LWR Core Transient Benchmark was set up to compare 3-D computer codes for transient calculations in LWRs. Participation in the benchmark provided more insight into the accuracy of the code when applied to pressurized water reactors other than the Borssele nuclear power plant in the Netherlands, for which the code was originally developed and used

  16. Gamma ray benchmark on the spent fuel shipping cask TN 12

    International Nuclear Information System (INIS)

    Blum, P.; Cagnon, R.; Cladel, C.; Ermont, G.; Nimal, J.C.

    1983-05-01

    The purpose of this benchmark is to compare measurements and calculations of gamma-ray dose rates around a shipping cask loaded with 12 spent fuel elements of the FESSENHEIM PWR type. The benchmark provides a means to verify gamma-ray sources and gamma-ray transport calculation methods in shipping cask configurations. The comparison between measurements and calculations shows good agreement except near the fuel element top, where the discrepancy reaches a factor of 2.

  17. Lesson learned from the SARNET wall condensation benchmarks

    International Nuclear Information System (INIS)

    Ambrosini, W.; Forgione, N.; Merli, F.; Oriolo, F.; Paci, S.; Kljenak, I.; Kostka, P.; Vyskocil, L.; Travis, J.R.; Lehmkuhl, J.; Kelm, S.; Chin, Y.-S.; Bucci, M.

    2014-01-01

    Highlights: • The results of the benchmarking activity on wall condensation are reported. • The work was performed in the frame of SARNET. • General modelling techniques for condensation are discussed. • Results of the University of Pisa and of other benchmark participants are discussed. • The lessons learned are drawn. - Abstract: The prediction of condensation in the presence of noncondensable gases has received continuing attention in the frame of the Severe Accident Research Network of Excellence, both in the first (2004–2008) and in the second (2009–2013) EC integrated projects. Among the reasons why this basic phenomenon, long addressed by classical treatments dating from the first decades of the last century, remains relevant is the interest in developing updated CFD models for reactor containment analysis, which requires validating the available modelling techniques at a different level. In the frame of SARNET, benchmarking activities were undertaken, taking advantage of the work performed at different institutions in setting up and developing models for steam condensation in conditions of interest for nuclear reactor containment. Four steps were performed in the activity, involving: (1) an idealized problem freely inspired by the actual conditions occurring in an experimental facility, CONAN, installed at the University of Pisa; (2) a first comparison with experimental data purposely collected with the CONAN facility; (3) a second comparison with data available from experimental campaigns performed in the same apparatus before the inclusion of the activities in SARNET; (4) a third exercise involving data obtained at lower mixture velocity than in previous campaigns, aimed at providing conditions closer to those addressed in reactor containment analyses. The last step of the benchmarking activity required changing the configuration of the experimental apparatus to achieve the lower flow rates called for by the new test specifications.

  18. Treating posttraumatic stress disorder with MDMA-assisted psychotherapy: A preliminary meta-analysis and comparison to prolonged exposure therapy.

    Science.gov (United States)

    Amoroso, Timothy; Workman, Michael

    2016-07-01

    Since the wars in Iraq and Afghanistan, posttraumatic stress disorder (PTSD) has become a major area of research and development. The most widely accepted treatment for PTSD is prolonged exposure (PE) therapy, but for many patients it is intolerable or ineffective. ±3,4-methylenedioxymethamphetamine (MDMA)-assisted psychotherapy (MDMA-AP) has recently re-emerged as a new treatment option, with two clinical trials having been published, both producing promising results. However, these results have yet to be compared to existing treatments. The present paper seeks to bridge this gap in the literature. Often the statistical significance of clinical trials is overemphasized while the magnitude of the treatment effects is overlooked. The current meta-analysis aims to provide a comparison of the cumulative effect size of the MDMA-AP studies with that of PE. Effect sizes were calculated for primary and secondary outcome measures in the MDMA-AP clinical trials and compared to those of a meta-analysis including several PE clinical trials. MDMA-AP was found to have larger effect sizes than PE in both clinician-observed outcomes (Hedges' g=1.17 vs. g=1.08, respectively) and patient self-report outcomes (Hedges' g=0.87 vs. g=0.77, respectively). The dropout rates of PE and MDMA-AP were also compared, revealing that MDMA-AP had a considerably lower percentage of patients dropping out than PE did. These results suggest that MDMA-AP offers a promising treatment for PTSD. © The Author(s) 2016.
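    The Hedges' g values quoted above follow the standard effect-size construction: Cohen's d (mean difference over pooled standard deviation) with a small-sample bias correction. A minimal sketch of that formula follows; the function name and inputs are illustrative, not the meta-analysis code used in the paper.

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Hedges' g: Cohen's d with the usual small-sample bias correction."""
    # Pooled standard deviation across the two groups
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                         / (n1 + n2 - 2))
    d = (mean1 - mean2) / s_pooled
    # Approximate correction factor J; shrinks d slightly for small samples
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return j * d
```

For large samples the correction factor approaches 1 and g converges to Cohen's d, which is why the two statistics are often close in practice.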

  19. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we saw potential solutions to some of our "top 10" issues; (2) we obtained an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes; (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) they desire to participate in our training and to provide feedback on procedures; (2) they welcomed the opportunity to provide feedback on working with NASA.

  20. NEACRP thermal fission product benchmark

    International Nuclear Information System (INIS)

    Halsall, M.J.; Taubman, C.J.

    1989-09-01

    The objective of the thermal fission product benchmark was to compare the range of fission product data in use at the present time. A simple homogeneous problem was set with 200 atoms H/1 atom U235, to be burnt up to 1000 days and then allowed to decay for 1000 days. The problem was repeated with 200 atoms H/1 atom Pu239, 20 atoms H/1 atom U235, and 20 atoms H/1 atom Pu239. There were ten participants, and the submissions received are detailed in this report. (author)

  1. OECD/NEA benchmark for time-dependent neutron transport calculations without spatial homogenization

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Jason, E-mail: jason.hou@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Ivanov, Kostadin N. [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Boyarinov, Victor F.; Fomichenko, Peter A. [National Research Centre “Kurchatov Institute”, Kurchatov Sq. 1, Moscow (Russian Federation)

    2017-06-15

    Highlights: • A time-dependent homogenization-free neutron transport benchmark was created. • The first phase, known as the kinetics phase, is described in this work. • Preliminary results for selected 2-D transient exercises are presented. - Abstract: A Nuclear Energy Agency (NEA), Organization for Economic Co-operation and Development (OECD) benchmark for time-dependent neutron transport calculations without spatial homogenization has been established in order to facilitate the development and assessment of numerical methods for solving the space-time neutron kinetics equations. The benchmark has been named the OECD/NEA C5G7-TD benchmark and was later extended with three consecutive phases, each corresponding to one modelling stage of the multi-physics transient analysis of a nuclear reactor core. This paper provides a detailed introduction to the benchmark specification of Phase I, known as the "kinetics phase", including the geometry description, supporting neutron transport data, transient scenarios in both two-dimensional (2-D) and three-dimensional (3-D) configurations, and the expected output parameters from the participants. Also presented are preliminary results for the initial-state 2-D core and selected transient exercises, obtained using the Monte Carlo method and the Surface Harmonic Method (SHM), respectively.

  2. Benchmarking the cost efficiency of community care in Australian child and adolescent mental health services: implications for future benchmarking.

    Science.gov (United States)

    Furber, Gareth; Brann, Peter; Skene, Clive; Allison, Stephen

    2011-06-01

    The purpose of this study was to benchmark the cost efficiency of community care across six child and adolescent mental health services (CAMHS) drawn from different Australian states. Organizational, contact and outcome data from the National Mental Health Benchmarking Project (NMHBP) data-sets were used to calculate cost per "treatment hour" and cost per episode for the six participating organizations. We also explored the relationship between intake severity as measured by the Health of the Nations Outcome Scales for Children and Adolescents (HoNOSCA) and cost per episode. The average cost per treatment hour was $223, with cost differences across the six services ranging from a mean of $156 to $273 per treatment hour. The average cost per episode was $3349 (median $1577) and there were significant differences in the CAMHS organizational medians ranging from $388 to $7076 per episode. HoNOSCA scores explained at best 6% of the cost variance per episode. These large cost differences indicate that community CAMHS have the potential to make substantial gains in cost efficiency through collaborative benchmarking. Benchmarking forums need considerable financial and business expertise for detailed comparison of business models for service provision.

  3. Preliminary comparison of the therapeutic efficacy of accelerated relative to conventional fractionation radiotherapy by treatment of spontaneous canine malignancies

    International Nuclear Information System (INIS)

    Denman, David L.; Levin, Rebecca; Buncher, C. Ralph; Aron, Bernard S.

    1996-01-01

    and time frames of tumor control, recurrence, survival and degenerative normal tissue changes. Results: Of the AF canine patients currently evaluable for initial tumor response and control, all (n=8) achieved a CR regardless of the tumor's basic anatomic site and histology. Recurrence was more prevalent in the head and neck than at other sites, and for sarcomas compared to carcinomas. In addition, survival of AF patients currently evaluable (n=15) was higher than observed for those receiving CF+/-HT. The maximum acute skin response levels for AF were greater than for CF alone, but less than obtained with CF+HT. Because AF was given in roughly half the elapsed time of the CF+/-HT protocols, its acute skin responses occurred sooner. However, the response durations and recovery times were shorter, and responses did not begin until the last few days of RT, therefore not interfering with AF administration. The acute oral mucosal response to AF was similar to that caused by CF+HT, being about twice the reaction scoring level of CF alone. However, the oral tissue response and recovery durations were comparable to CF alone, being less than CF+HT. Conclusions: The AF regimen used was well tolerated in all evaluable canines. Though the mean skin and oral tissue response levels were greater with AF compared to CF, the response durations were shorter or comparable, respectively. In addition, the late tissue responses we previously quantitated for the comparable CF regimen were minimal and should be relatively unaffected by this direct method of RT acceleration. This study's preliminary results also indicated that the use of a tumor bed CB should be readily tolerated. The AFCB regimen would deliver the same AF dose/fx to a full field, including the tumor bed and surrounding tissues at risk for disease, during the first two weeks. In the third (last) week the full field dose would be reduced to 2.4 Gy each morning, followed 4 hours later by a boost dose of 2 Gy to a field encompassing

  4. Reevaluation of the Jezebel Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-03-10

    Every nuclear engineering student is familiar with Jezebel, the homogeneous bare sphere of plutonium first assembled at Los Alamos in 1954-1955. The actual Jezebel assembly was neither homogeneous, nor bare, nor spherical; nor was it singular: hundreds of Jezebel configurations were assembled. The Jezebel benchmark has been reevaluated for the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Logbooks, original drawings, mass accountability statements, internal reports, and published reports have been used to model four actual three-dimensional Jezebel assemblies with high fidelity. Because the documentation available today is often inconsistent, three major assumptions were made regarding plutonium part masses and dimensions. The first was that the assembly masses given in Los Alamos report LA-4208 (1969) were correct; the second was that the original drawing dimension for the polar height of a certain major part was correct; and the third was that a change notice indicated on the original drawing was not actually implemented. This talk will describe these assumptions, the alternatives, and the implications. Since the publication of the 2013 ICSBEP Handbook, the actual masses of the major components have turned up. Our assumption regarding the assembly masses was proven correct, but we had the mass distribution incorrect. Work to incorporate the new information is ongoing, and this talk will describe the latest assessment.

  5. SCWEB, Scientific Workstation Evaluation Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Raffenetti, R C [Computing Services-Support Services Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439 (United States)

    1988-06-16

    1 - Description of program or function: The SCWEB (Scientific Workstation Evaluation Benchmark) software includes 16 programs which are executed in a well-defined scenario to measure the following performance capabilities of a scientific workstation: implementation of FORTRAN77, processor speed, memory management, disk I/O, monitor (or display) output, scheduling of processing (multiprocessing), and scheduling of print tasks (spooling). 2 - Method of solution: The benchmark programs are: DK1, DK2, and DK3, which do Fourier series fitting based on spline techniques; JC1, which checks the FORTRAN function routines which produce numerical results; JD1 and JD2, which solve dense systems of linear equations in double- and single-precision, respectively; JD3 and JD4, which perform matrix multiplication in single- and double-precision, respectively; RB1, RB2, and RB3, which perform substantial amounts of I/O processing on files other than the input and output files; RR1, which does intense single-precision floating-point multiplication in a tight loop; RR2, which initializes a 512x512 integer matrix in a manner which skips around in the address space rather than initializing each consecutive memory cell in turn; RR3, which writes alternating text buffers to the output file; RR4, which evaluates the timer routines and demonstrates that they conform to the specification; and RR5, which determines whether the workstation is capable of executing a 4-megabyte program.
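    The kind of kernel timed by the JD3/JD4 programs can be sketched as a naive matrix-multiply micro-benchmark. This is a hypothetical illustration in Python (the actual SCWEB programs are FORTRAN77, and the function names here are invented):

```python
import time

def matmul(a, b):
    """Naive dense matrix multiply: the kind of kernel the JD3/JD4
    benchmark programs time (illustrative only)."""
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def time_matmul(n=64):
    """Time one n x n multiply; return (seconds, work done in MFLOP)."""
    a = [[float(i + j) for j in range(n)] for i in range(n)]
    b = [[float(i - j) for j in range(n)] for i in range(n)]
    t0 = time.perf_counter()
    matmul(a, b)
    elapsed = time.perf_counter() - t0
    # A naive n x n multiply performs about 2*n**3 floating-point operations
    return elapsed, 2.0 * n**3 / 1e6
```

Dividing the operation count by the elapsed time gives the MFLOP/s figure a workstation benchmark like this would report.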

  6. Pynamic: the Python Dynamic Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large-scale MPI-based applications can create significant file I/O and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide range of the DLL usage of Python-based applications for large-scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we highlight some of the issues discovered in our large-scale system software and tools using Pynamic.
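    The spirit of Pynamic's configurable emulation can be sketched by generating many trivial modules and timing how long the loader takes to import them all. This is a loose illustration only; Pynamic itself builds and links actual shared libraries, and the function below is hypothetical.

```python
import importlib.util
import tempfile
import time
from pathlib import Path

def emulate_dynamic_loading(n_modules=50):
    """Generate and import n trivial modules, timing the loader,
    loosely in the spirit of Pynamic's DLL-usage emulation."""
    tmp = Path(tempfile.mkdtemp())
    for i in range(n_modules):
        # Each generated module exposes one function returning its index
        (tmp / f"mod_{i}.py").write_text(f"def f():\n    return {i}\n")
    t0 = time.perf_counter()
    results = []
    for i in range(n_modules):
        spec = importlib.util.spec_from_file_location(
            f"mod_{i}", tmp / f"mod_{i}.py")
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        results.append(mod.f())
    return time.perf_counter() - t0, results
```

Scaling `n_modules` up makes the loader, rather than the module bodies, the dominant cost, which is exactly the regime such a benchmark is meant to probe.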

  7. Benchmarking in external financial reporting and auditing

    DEFF Research Database (Denmark)

    Thinggaard, Frank; Kiertzner, Lars

    2001-01-01

    ... continuously in a benchmarking process. This chapter broadly examines the extent to which the benchmarking concept can reasonably be linked to external financial reporting and auditing. Section 7.1 deals with the external annual report, while Section 7.2 addresses the area of auditing. The final section of the chapter summarizes the considerations on benchmarking in connection with both areas.

  8. Benchmark for evaluation and validation of reactor simulations (BEAVRS)

    Energy Technology Data Exchange (ETDEWEB)

    Horelik, N.; Herman, B.; Forget, B.; Smith, K. [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Avenue, Cambridge, MA 02139 (United States)

    2013-07-01

    Advances in parallel computing have made possible the development of high-fidelity tools for the design and analysis of nuclear reactor cores, and such tools require extensive verification and validation. This paper introduces BEAVRS, a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading patterns, and numerous in-vessel components. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from fifty-eight instrumented assemblies. Initial comparisons between calculations performed with MIT's OpenMC Monte Carlo neutron transport code and measured cycle 1 HZP test data are presented, and these results display an average deviation of approximately 100 pcm for the various critical configurations and control rod worth measurements. Computed HZP radial fission detector flux maps also agree reasonably well with the available measured data. All results indicate that this benchmark will be extremely useful in validation of coupled-physics codes and uncertainty quantification of in-core physics computational predictions. The detailed BEAVRS specification and its associated data package is hosted online at the MIT Computational Reactor Physics Group web site (http://crpg.mit.edu/), where future revisions and refinements to the benchmark specification will be made publicly available. (authors)
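    The deviations quoted above are expressed in pcm (per cent mille), the standard unit for small reactivity differences between a calculated and a measured multiplication factor. A minimal sketch of the conversion (the function names are illustrative, not part of BEAVRS or OpenMC):

```python
def reactivity_pcm(k_eff):
    """Reactivity rho = (k - 1) / k, expressed in pcm (1 pcm = 1e-5)."""
    return (k_eff - 1.0) / k_eff * 1e5

def deviation_pcm(k_calc, k_meas):
    """Calculated-minus-measured reactivity difference, in pcm."""
    return reactivity_pcm(k_calc) - reactivity_pcm(k_meas)
```

On this scale, a code that predicts k = 1.001 for a configuration measured critical (k = 1.000) is off by roughly 100 pcm, the average deviation level reported above.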

  9. Developing and modeling of the 'Laguna Verde' BWR CRDA benchmark

    International Nuclear Information System (INIS)

    Solis-Rodarte, J.; Fu, H.; Ivanov, K.N.; Matsui, Y.; Hotta, A.

    2002-01-01

    Reactivity initiated accidents (RIA) and design basis transients are among the most important aspects of nuclear power reactor safety. These events are re-evaluated whenever core alterations (modifications) are made, as part of the nuclear safety analysis performed for a new design. These modifications usually include, but are not limited to, power upgrades, longer cycles, new fuel assembly and control rod designs, etc. The results obtained are compared with pre-established bounding analysis values to see if the new core design fulfills the safety constraints imposed on the design. The control rod drop accident (CRDA) is the design basis transient for reactivity events in BWR technology. The CRDA is a highly localized event, depending on the insertion position of the dropped control rod and on the fuel assemblies surrounding it. A numerical benchmark was developed based on the CRDA RIA design basis accident to further assess the performance of coupled 3D neutron kinetics/thermal-hydraulics codes. The CRDA in a BWR is a mostly neutronics-driven event. This benchmark is based on a real operating nuclear power plant: unit 1 of the Laguna Verde (LV1) nuclear power plant (NPP). The definition of the benchmark is presented briefly together with the benchmark specifications. Some of the cross-sections were modified in order to make the maximum control rod worth greater than one dollar. The transient is initiated at steady state by dropping the control rod with maximum worth at full speed. The 'Laguna Verde' (LV1) BWR CRDA transient benchmark is calculated using two coupled codes: TRAC-BF1/NEM and TRAC-BF1/ENTREE. Neutron kinetics and thermal-hydraulics models were developed for both codes. A comparison of the obtained results is presented, along with some discussion of the sensitivity of the results to modeling assumptions.

  10. HPC Benchmark Suite NMx, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  11. Establishing benchmarks and metrics for utilization management.

    Science.gov (United States)

    Melanson, Stacy E F

    2014-01-01

    The changing environment of healthcare reimbursement is rapidly leading to a renewed appreciation of the importance of utilization management in the clinical laboratory. The process of benchmarking laboratory operations is well established for comparing organizational performance to other hospitals (peers) and for trending data over time through internal benchmarks. However, there are relatively few resources available to assist organizations in benchmarking for laboratory utilization management. This article reviews the topic of laboratory benchmarking with a focus on the available literature and services to assist in managing physician requests for laboratory testing. © 2013.

  12. CEA-IPSN Participation in the MSLB Benchmark

    International Nuclear Information System (INIS)

    Royer, E.; Raimond, E.; Caruge, D.

    2001-01-01

    The OECD/NEA Main Steam Line Break (MSLB) Benchmark allows the comparison of state-of-the-art and best-estimate models used to compute reactivity accidents. The three exercises of the MSLB benchmark are defined with the aim of analyzing the space and time effects in the core and their modeling with computational tools. Point kinetics (exercise 1) simulation results in a return to power (RTP) after scram, whereas 3-D kinetics (exercises 2 and 3) does not display any RTP. The objective is to understand the reasons for the conservative solution of point kinetics and to assess the benefits of best-estimate models. First, the core vessel mixing model is analyzed; second, sensitivity studies on point kinetics are compared to 3-D kinetics; third, the core thermal hydraulics model and coupling with neutronics is presented; finally, RTP and a suitable model for MSLB are discussed

  13. Benchmarking of nuclear economics tools

    International Nuclear Information System (INIS)

    Moore, Megan; Korinny, Andriy; Shropshire, David; Sadhankar, Ramesh

    2017-01-01

    Highlights: • INPRO and GIF economic tools exhibited good alignment in total capital cost estimation. • Subtle discrepancies in the cost result from differences in financing and the fuel cycle assumptions. • A common set of assumptions was found to reduce the discrepancies to 1% or less. • Opportunities for harmonisation of economic tools exists. - Abstract: Benchmarking of the economics methodologies developed by the Generation IV International Forum (GIF) and the International Atomic Energy Agency’s International Project on Innovative Nuclear Reactors and Fuel Cycles (INPRO), was performed for three Generation IV nuclear energy systems. The Economic Modeling Working Group of GIF developed an Excel based spreadsheet package, G4ECONS (Generation 4 Excel-based Calculation Of Nuclear Systems), to calculate the total capital investment cost (TCIC) and the levelised unit energy cost (LUEC). G4ECONS is sufficiently generic in the sense that it can accept the types of projected input, performance and cost data that are expected to become available for Generation IV systems through various development phases and that it can model both open and closed fuel cycles. The Nuclear Energy System Assessment (NESA) Economic Support Tool (NEST) was developed to enable an economic analysis using the INPRO methodology to easily calculate outputs including the TCIC, LUEC and other financial figures of merit including internal rate of return, return of investment and net present value. NEST is also Excel based and can be used to evaluate nuclear reactor systems using the open fuel cycle, MOX (mixed oxide) fuel recycling and closed cycles. A Super Critical Water-cooled Reactor system with an open fuel cycle and two Fast Reactor systems, one with a break-even fuel cycle and another with a burner fuel cycle, were selected for the benchmarking exercise. Published data on capital and operating costs were used for economics analyses using G4ECONS and NEST tools. Both G4ECONS and
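    A levelised unit energy cost (LUEC) of the kind computed by G4ECONS and NEST divides discounted lifetime costs by discounted lifetime energy output. The sketch below is a deliberately simplified illustration, not either tool's actual methodology: it assumes overnight capital spent entirely at t=0 and constant annual costs and output, and all names are hypothetical.

```python
def levelised_unit_energy_cost(capital, annual_om, annual_fuel,
                               annual_energy_mwh, lifetime_years,
                               discount_rate):
    """Simplified LUEC: discounted lifetime costs / discounted output.
    Assumes overnight capital at t=0 and constant annual figures."""
    costs = capital  # simplification: all capital spent up front
    energy = 0.0
    for t in range(1, lifetime_years + 1):
        df = (1 + discount_rate) ** -t  # discount factor for year t
        costs += (annual_om + annual_fuel) * df
        energy += annual_energy_mwh * df
    return costs / energy  # cost per MWh
```

The benchmark's finding that a common set of financing and fuel-cycle assumptions reduced discrepancies to 1% or less reflects how sensitive this ratio is to the discount rate and cost inputs rather than to the formula itself.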

  14. Routine measurements of left and right ventricular output by gated blood pool emission tomography in comparison with thermodilution measurements: a preliminary study

    International Nuclear Information System (INIS)

    Mariano-Goulart, D.; Boudousq, V.; Comte, F.; Eberle, M.C.; Zanca, M.; Kotzki, P.O.; Rossi, M.; Piot, C.; Raczka, F.; Davy, J.M.

    2001-01-01

    The aim of this preliminary study was to evaluate the accuracy of left and right ventricular output computed from a semi-automatic processing of tomographic radionuclide ventriculography data (TRVG) in comparison with the conventional thermodilution method. Twenty patients with various heart diseases were prospectively included in the study. Thermodilution and TRVG acquisitions were carried out on the same day for all patients. Analysis of gated blood pool slices was performed using a watershed-based segmentation algorithm. Right and left ventricular output measured by TRVG correlated well with the measurements obtained with thermodilution (r=0.94 and 0.91, with SEE=0.38 and 0.46 l/min, respectively; P<0.001). The limits of agreement for TRVG and thermodilution measurements were -0.78 to 1.20 l/min for the left ventricle and -0.34 to 1.16 l/min for the right ventricle. No significant difference was found between the results of TRVG and thermodilution with respect to left ventricular output (P=0.09). A small but significant difference was found between right ventricular output measured by TRVG and both left ventricular output measured by TRVG (mean difference=0.17 l/min, P=0.04) and thermodilution-derived cardiac output (mean difference=0.41 l/min, P=0.0001). It is concluded that the watershed-based semi-automatic segmentation of TRVG slices provides non-invasive measurements of right and left ventricular output and stroke volumes at equilibrium in routine clinical settings. Further studies are necessary to check whether the accuracy of these measurements is good enough to permit correct assessment of intracardiac shunts. (orig.)
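    The limits of agreement quoted above follow the standard Bland-Altman construction: mean of the paired differences plus or minus 1.96 standard deviations. A minimal sketch of that calculation (illustrative only, not the study's analysis code):

```python
import math

def limits_of_agreement(method_a, method_b):
    """Bland-Altman 95% limits of agreement between two measurement
    methods: mean difference +/- 1.96 * SD of the paired differences."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample standard deviation of the differences (n - 1 denominator)
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d
```

Unlike a correlation coefficient, these limits express how far an individual TRVG measurement may plausibly sit from its thermodilution counterpart, which is why the study reports both.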

  15. BENCHMARKING ORTEC ISOTOPIC MEASUREMENTS AND CALCULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Dewberry, R; Raymond Sigg, R; Vito Casella, V; Nitin Bhatt, N

    2008-09-29

    these cases the ISOTOPIC analysis program is especially valuable because it allows a rapid, defensible, reproducible analysis of radioactive content without tedious and repetitive experimental measurement of γ-ray transmission through the sample and container at multiple photon energies. The ISOTOPIC analysis technique is also especially valuable in facility holdup measurements, where the acquisition configuration does not fit the accepted generalized geometries for which detector efficiencies have been solved exactly in closed form. Generally, in facility passive γ-ray holdup measurements the acquisition geometry is only approximately reproducible, and the sample (object) is an extensive glovebox or HEPA filter component. In these cases high accuracy is rarely achievable; however, demonstrating fissile Pu and U content within criticality safety guidelines yields valuable operating information. Demonstrating such content can be performed with broad assumptions and within broad factors (e.g. 2-8) of conservatism. The ISOTOPIC analysis program yields rapid, defensible analyses of content within acceptable uncertainty and conservatism without extensive repetitive experimental measurements. In addition to transmission correction determinations based on the mass and composition of objects, the ISOTOPIC program performs finite geometry corrections based on object shape and dimensions. These geometry corrections are based upon finite element summation that approximates exact closed-form calculus. In this report we provide several benchmark comparisons to the same technique provided by the Canberra In Situ Object Counting System (ISOCS) and to the finite thickness calculations described by Russo in reference 10. This report describes the benchmark comparisons we have performed to demonstrate and to document that the ISOTOPIC analysis program yields the results we claim to our customers.

  16. FENDL neutronics benchmark: Specifications for the calculational neutronics and shielding benchmark

    International Nuclear Information System (INIS)

    Sawan, M.E.

    1994-12-01

    During the IAEA Advisory Group Meeting on ''Improved Evaluations and Integral Data Testing for FENDL'' held in Garching near Munich, Germany in the period 12-16 September 1994, the Working Group II on ''Experimental and Calculational Benchmarks on Fusion Neutronics for ITER'' recommended that a calculational benchmark representative of the ITER design should be developed. This report describes the neutronics and shielding calculational benchmark available for scientists interested in performing analysis for this benchmark. (author)

  17. OECD/DOE/CEA VVER-1000 Coolant Transient Benchmark. Summary Record of the First Workshop (V1000-CT1)

    International Nuclear Information System (INIS)

    2003-01-01

    The first workshop for the VVER-1000 Coolant Transient (V1000CT) Benchmark was hosted by the Commissariat a l'Energie Atomique, Centre d'Etudes de Saclay, France. The V1000CT benchmark defines standard problems for validation of coupled three-dimensional (3-D) neutron-kinetics/system thermal-hydraulics codes for application to Soviet-designed VVER-1000 reactors, using actual plant data without any scaling. The overall objective is to assess computer codes used in the safety analysis of VVER power plants, specifically for their use in reactivity transient simulations in a VVER-1000. The V1000CT benchmark consists of two phases: V1000CT-1, simulation of the switching on of one main coolant pump (MCP) while the other three MCPs are in operation; and V1000CT-2, calculation of coolant mixing tests and a Main Steam Line Break (MSLB) scenario. Further background information on this benchmark can be found at the OECD/NEA benchmark web site. The purpose of the first workshop was to review the benchmark activities after the Starter Meeting held last year in Dresden, Germany: to discuss the participants' feedback and modifications introduced in the Benchmark Specifications for Phase 1; to present and discuss modelling issues and preliminary results from the three exercises of Phase 1; to discuss the modelling issues of Exercise 1 of Phase 2; and to define the work plan and schedule for completing the two phases.

  18. Human factors reliability Benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-06-01

    The Joint Research Centre of the European Commission has organized a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organized around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report contains the final summary reports produced by the participants in the exercise

  19. Experimental and computational benchmark tests

    International Nuclear Information System (INIS)

    Gilliam, D.M.; Briesmeister, J.F.

    1994-01-01

    A program involving principally NIST, LANL, and ORNL has been in progress for about four years now to establish a series of benchmark measurements and calculations related to the moderation and leakage of 252Cf neutrons from a source surrounded by spherical aqueous moderators of various thicknesses and compositions. The motivation for these studies comes from problems in criticality calculations concerning arrays of multiplying components, where the leakage from one component acts as a source for the other components. This talk compares experimental and calculated values for the fission rates of four nuclides - 235U, 239Pu, 238U, and 237Np - in the leakage spectrum from moderator spheres of diameters 76.2 mm, 101.6 mm, and 127.0 mm, with either pure water or enriched B-10 solutions as the moderator. Very detailed Monte Carlo calculations were done with the MCNP code, using a ''light water'' S(α,β) scattering kernel

  20. ENVIRONMENTAL BENCHMARKING FOR LOCAL AUTHORITIES

    Directory of Open Access Journals (Sweden)

    Marinela GHEREŞ

    2010-01-01

    Full Text Available This paper is an attempt to clarify and present the many definitions of benchmarking. It also attempts to explain the basic steps of benchmarking, to show how this tool can be applied by local authorities, and to discuss its potential benefits and limitations. It is our strong belief that if cities use indicators and progressively introduce targets to improve management and related urban life quality, and to measure progress towards more sustainable development, we will also create a new type of competition among cities and foster innovation. This is seen to be important because local authorities' actions play a vital role in responding to the challenges of enhancing the state of the environment, not only in policy-making but also in the provision of services and in the planning process. Local communities therefore need to be aware of their own sustainability performance levels and should be able to engage in an exchange of best practices to respond effectively to the eco-economical challenges of the century.

  1. Benchmark results in radiative transfer

    International Nuclear Information System (INIS)

    Garcia, R.D.M.; Siewert, C.E.

    1986-02-01

    Several aspects of the F_N method are reported, and the method is used to solve accurately some benchmark problems in radiative transfer in the field of atmospheric physics. The method was modified to solve cases of pure scattering, and an improved process was developed for computing the radiation intensity. An algorithm for computing several quantities used in the F_N method was developed, an improved scheme for evaluating certain integrals relevant to the method is described, and a two-term recursion relation that has proved useful for the numerical evaluation of matrix elements basic to the method is given. The methods used to solve the resulting linear algebraic equations are discussed, and the numerical results are evaluated. (M.C.K.) [pt

  2. Key performance indicators to benchmark hospital information systems - a delphi study.

    Science.gov (United States)

    Hübner-Bloder, G; Ammenwerth, E

    2009-01-01

    To identify the key performance indicators for hospital information systems (HIS) that can be used for HIS benchmarking. A Delphi survey with one qualitative and two quantitative rounds. Forty-four HIS experts from health care IT practice and academia participated in all three rounds. Seventy-seven performance indicators were identified and organized into eight categories: technical quality, software quality, architecture and interface quality, IT vendor quality, IT support and IT department quality, workflow support quality, IT outcome quality, and IT costs. The highest ranked indicators are related to clinical workflow support and user satisfaction. Isolated technical indicators or cost indicators were not seen as useful. The experts favored an interdisciplinary group of all the stakeholders, led by hospital management, to conduct the HIS benchmarking. They proposed benchmarking activities both at regular (annual) intervals and at defined events (for example, after IT introduction). Most of the experts stated that in their institutions no HIS benchmarking activities are being performed at the moment. In the context of IT governance, IT benchmarking is gaining importance in the healthcare area. The identified indicators reflect the view of health care IT professionals and researchers. Research is needed to further validate and operationalize key performance indicators, to provide an IT benchmarking framework, and to provide open repositories for a comparison of the HIS benchmarks of different hospitals.

  3. Using chemical benchmarking to determine the persistence of chemicals in a Swedish lake.

    Science.gov (United States)

    Zou, Hongyan; Radke, Michael; Kierkegaard, Amelie; MacLeod, Matthew; McLachlan, Michael S

    2015-02-03

    It is challenging to measure the persistence of chemicals under field conditions. In this work, two approaches for measuring persistence in the field were compared: the chemical mass balance approach and a novel chemical benchmarking approach. Ten pharmaceuticals, an X-ray contrast agent, and an artificial sweetener were studied in a Swedish lake. Acesulfame K was selected as a benchmark to quantify persistence using the chemical benchmarking approach. The 95% confidence interval of the half-life for transformation in the lake system was 780-5700 days for carbamazepine. Half-lives determined with the benchmarking approach agreed well with those from the mass balance approach (1-21% difference), indicating that chemical benchmarking can be a valid and useful method to measure the persistence of chemicals under field conditions. Compared to the mass balance approach, the benchmarking approach partially or completely eliminates the need to quantify the mass flow of chemicals, so it is particularly advantageous when the quantification of mass flow is difficult. Furthermore, the benchmarking approach allows for ready comparison and ranking of the persistence of different chemicals.
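
    The benchmarking idea in this abstract can be illustrated with a minimal sketch (not the authors' actual calculation): assuming first-order transformation and a fully persistent benchmark chemical, a test chemical's rate constant follows from the decline of its concentration ratio to the benchmark across the lake, combined with the water residence time. All numbers below are invented for illustration.

```python
import math

def benchmark_half_life(ratio_in, ratio_out, residence_time_days):
    """Half-life of a test chemical via chemical benchmarking.

    ratio_in / ratio_out: concentration ratios (test chemical divided by
    benchmark chemical) at the lake inflow and outflow. The benchmark is
    assumed persistent, so any decline in the ratio reflects transformation
    of the test chemical. First-order kinetics are assumed.
    """
    k = math.log(ratio_in / ratio_out) / residence_time_days  # 1/day
    return math.log(2) / k  # days

# Invented example: the ratio drops by 10% over a 100-day residence time.
print(round(benchmark_half_life(1.0, 0.9, 100.0), 1))
```

Because only ratios enter the calculation, absolute mass flows cancel out, which is the advantage over the mass balance approach that the abstract highlights.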

  4. The role of benchmarking for yardstick competition

    International Nuclear Information System (INIS)

    Burns, Phil; Jenkins, Cloda; Riechmann, Christoph

    2005-01-01

    With the increasing interest in yardstick regulation, there is a need to understand the most appropriate method for realigning tariffs at the outset. Benchmarking is the tool used for such realignment and is therefore a necessary first step in the implementation of yardstick competition. A number of concerns have been raised about the application of benchmarking, making some practitioners reluctant to move towards yardstick-based regimes. We assess five of the key concerns often discussed and find that, in general, these are not as great as perceived. The assessment is based on economic principles and experiences with applying benchmarking to regulated sectors, e.g. in the electricity and water industries in the UK, The Netherlands, Austria and Germany in recent years. The aim is to demonstrate that clarity on the role of benchmarking reduces the concern about its application in different regulatory regimes. We find that benchmarking can be used in regulatory settlements, although the range of possible benchmarking approaches that are appropriate will be small for any individual regulatory question. Benchmarking is feasible as total cost measures and environmental factors are better defined in practice than is commonly appreciated, and collusion is unlikely to occur in environments with more than 2 or 3 firms (where shareholders have a role in monitoring and rewarding performance). Furthermore, any concern about companies under-recovering costs is a matter to be determined through the regulatory settlement and does not affect the case for using benchmarking as part of that settlement. (author)

  5. Benchmarking set for domestic smart grid management

    NARCIS (Netherlands)

    Bosman, M.G.C.; Bakker, Vincent; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    In this paper we propose a benchmark for domestic smart grid management. It consists of an in-depth description of a domestic smart grid, in which local energy consumers, producers and buffers can be controlled. First, from this description a general benchmark framework is derived, which can be used

  6. Medical school benchmarking - from tools to programmes.

    Science.gov (United States)

    Wilkinson, Tim J; Hudson, Judith N; Mccoll, Geoffrey J; Hu, Wendy C Y; Jolly, Brian C; Schuwirth, Lambert W T

    2015-02-01

    Benchmarking among medical schools is essential, but may result in unwanted effects. To apply a conceptual framework to selected benchmarking activities of medical schools. We present an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. The analogy generated a conceptual framework that tested five questions to be considered in relation to benchmarking: what is the purpose? what are the attributes of value? what are the best tools to assess the attributes of value? what happens to the results? and, what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.

  7. Benchmarking in digital circuit design automation

    NARCIS (Netherlands)

    Jozwiak, L.; Gawlowski, D.M.; Slusarczyk, A.S.

    2008-01-01

    This paper focuses on benchmarking, which is the main experimental approach to the design method and EDA-tool analysis, characterization and evaluation. We discuss the importance and difficulties of benchmarking, as well as the recent research effort related to it. To resolve several serious

  8. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own-price elasticity.

  9. Repeated Results Analysis for Middleware Regression Benchmarking

    Czech Academy of Sciences Publication Activity Database

    Bulej, Lubomír; Kalibera, T.; Tůma, P.

    2005-01-01

    Roč. 60, - (2005), s. 345-358 ISSN 0166-5316 R&D Projects: GA ČR GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords : middleware benchmarking * regression benchmarking * regression testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.756, year: 2005

  10. Benchmarking the energy efficiency of commercial buildings

    International Nuclear Information System (INIS)

    Chung, William; Hui, Y.V.; Lam, Y. Miu

    2006-01-01

    Benchmarking energy efficiency is an important tool to promote the efficient use of energy in commercial buildings. Benchmarking models are mostly constructed as a simple benchmark table (percentile table) of energy use, normalized with floor area and temperature. This paper describes a benchmarking process for energy efficiency by means of multiple regression analysis, in which the relationship between energy-use intensities (EUIs) and the explanatory factors (e.g., operating hours) is developed. Using the resulting regression model, these EUIs are then normalized by removing the effect of deviations in the significant explanatory factors. The empirical cumulative distribution of the normalized EUI gives a benchmark table (or percentile table of EUI) for benchmarking an observed EUI. The advantage of this approach is that the benchmark table represents a normalized distribution of EUI, taking into account all the significant explanatory factors that affect energy consumption. An application to supermarkets is presented to illustrate the development and use of the benchmarking method
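
    The regression-based normalization described above can be sketched in a few lines. This is a generic illustration with a single explanatory factor (operating hours) and invented data, not the paper's model or dataset:

```python
import statistics

# Toy data (invented): weekly operating hours and energy-use intensity
# (EUI, kWh/m^2/yr) for a set of supermarkets.
hours = [80, 90, 100, 110, 120, 130, 140, 150]
eui   = [400, 430, 455, 490, 510, 540, 560, 600]

# Ordinary least squares for EUI = a + b * hours (closed form).
mx, my = statistics.mean(hours), statistics.mean(eui)
b = sum((x - mx) * (y - my) for x, y in zip(hours, eui)) / \
    sum((x - mx) ** 2 for x in hours)
a = my - b * mx

# Normalize: remove the effect of each building's deviation from the
# mean operating hours, leaving a distribution of normalized EUIs.
norm = [y - b * (x - mx) for x, y in zip(hours, eui)]

# Benchmark an observed building: its percentile in the normalized
# empirical distribution (a lower EUI percentile means more efficient).
def percentile_of(value, sample):
    return 100.0 * sum(1 for v in sample if v <= value) / len(sample)

observed_eui, observed_hours = 500.0, 115.0
observed_norm = observed_eui - b * (observed_hours - mx)
print(round(percentile_of(observed_norm, norm)))
```

With several explanatory factors, the same idea applies with a multiple-regression fit; the percentile lookup against the normalized distribution is unchanged.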

  11. Benchmarking, Total Quality Management, and Libraries.

    Science.gov (United States)

    Shaughnessy, Thomas W.

    1993-01-01

    Discussion of the use of Total Quality Management (TQM) in higher education and academic libraries focuses on the identification, collection, and use of reliable data. Methods for measuring quality, including benchmarking, are described; performance measures are considered; and benchmarking techniques are examined. (11 references) (MES)

  12. A Seafloor Benchmark for 3-dimensional Geodesy

    Science.gov (United States)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. 
More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  13. DRAGON solutions to the 3D transport benchmark over a range in parameter space

    International Nuclear Information System (INIS)

    Martin, Nicolas; Hebert, Alain; Marleau, Guy

    2010-01-01

    DRAGON solutions to the 'NEA suite of benchmarks for 3D transport methods and codes over a range in parameter space' are discussed in this paper. A description of the benchmark is first provided, followed by a detailed review of the different computational models used in the lattice code DRAGON. Two numerical methods were selected for generating the required quantities for the 729 configurations of this benchmark. First, S_N calculations were performed using fully symmetric angular quadratures and high-order diamond differencing for spatial discretization. To compare S_N results with those of another deterministic method, the method of characteristics (MoC) was also considered for this benchmark. Comparisons between reference solutions, S_N, and MoC results illustrate the advantages and drawbacks of each method for this 3-D transport problem.
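
    The S_N ingredients named in the abstract (discrete ordinates, diamond differencing, source iteration) can be illustrated with a generic one-group, one-dimensional slab solver. This is a textbook sketch with invented cross sections, unrelated to DRAGON's actual 3-D implementation:

```python
import math

# One-group, 1-D slab S2 transport with diamond differencing and
# source iteration. Homogeneous slab, vacuum boundaries, flat
# external source. All data are illustrative.
N, width = 50, 10.0                    # cells, slab thickness (cm)
dx = width / N
sig_t, sig_s, src = 1.0, 0.5, 1.0      # total/scattering xs (1/cm), source
mus = (-1 / math.sqrt(3), 1 / math.sqrt(3))   # S2 Gauss quadrature
wts = (1.0, 1.0)                               # weights sum to 2

phi = [0.0] * N
for it in range(200):
    q = [(sig_s * p + src) / 2.0 for p in phi]     # isotropic emission
    phi_new = [0.0] * N
    for mu, w in zip(mus, wts):
        cells = range(N) if mu > 0 else range(N - 1, -1, -1)
        psi_in = 0.0                               # vacuum boundary
        for i in cells:
            # Diamond difference: mu*(out-in)/dx + sig_t*(out+in)/2 = q
            aa = abs(mu) / dx
            psi_out = (q[i] + (aa - sig_t / 2) * psi_in) / (aa + sig_t / 2)
            phi_new[i] += w * (psi_in + psi_out) / 2   # cell-average flux
            psi_in = psi_out
    if max(abs(x - y) for x, y in zip(phi_new, phi)) < 1e-8:
        phi = phi_new
        break
    phi = phi_new

print(it, round(phi[N // 2], 4))   # iterations, mid-slab scalar flux
```

Negative-flux fixups, higher-order quadratures, and the acceleration schemes that production lattice codes rely on are deliberately omitted here.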

  14. BUGLE-93 (ENDF/B-VI) cross-section library data testing using shielding benchmarks

    International Nuclear Information System (INIS)

    Hunter, H.T.; Slater, C.O.; White, J.E.

    1994-01-01

    Several integral shielding benchmarks were selected to perform data testing for new multigroup cross-section libraries compiled from the ENDF/B-VI data for light water reactor (LWR) shielding and dosimetry. The new multigroup libraries, BUGLE-93 and VITAMIN-B6, were studied to establish their reliability and response to the benchmark measurements by use of radiation transport codes, ANISN and DORT. Also, direct comparisons of BUGLE-93 and VITAMIN-B6 to BUGLE-80 (ENDF/B-IV) and VITAMIN-E (ENDF/B-V) were performed. Some benchmarks involved the nuclides used in LWR shielding and dosimetry applications, and some were sensitive specific nuclear data, i.e. iron due to its dominant use in nuclear reactor systems and complex set of cross-section resonances. Five shielding benchmarks (four experimental and one calculational) are described and results are presented

  15. Vver-1000 Mox core computational benchmark

    International Nuclear Information System (INIS)

    2006-01-01

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities includes benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  16. Comparing ourselves : using benchmarking techniques to measure performance between academic libraries. Report of the LIRG seminar: The Effective academic library held on Tuesday 12th June 2001 at Staffordshire University

    OpenAIRE

    Hart, Liz

    2001-01-01

    We can learn a lot from others. Benchmarking provides a structural framework for making comparisons with other organisations. The techniques enable us to learn from one another by looking at why there are differences in performance outcomes between organisations undertaking similar functions. This seminar concentrated on: Importance of benchmarking / benchmarking techniques, Establishment of benchmarking consortia, Utilising statistics and performance indicators and Practical examples of how ...

  17. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Xuhui [National Renewable Energy Laboratory (NREL), Golden, CO (United States). Transportation and Hydrogen Systems Center

    2017-10-19

    In FY16, the thermal performance of the 2014 Honda Accord Hybrid power electronics thermal management system was benchmarked. Both experiments and numerical simulation were utilized to thoroughly study the thermal resistances and temperature distribution in the power module. Experimental results obtained from the water-ethylene glycol tests provided the junction-to-liquid thermal resistance. The finite element analysis (FEA) and computational fluid dynamics (CFD) models were found to yield a good match with the experimental results. Both the experimental and modeling results demonstrate that the passive stack is the dominant thermal resistance for both the motor and power electronics systems. The 2014 Accord power electronics system yields steady-state thermal resistance values around 42-50 mm²·K/W, depending on the flow rate. At a typical flow rate of 10 liters per minute, the thermal resistance of the Accord system was found to be about 44 percent lower than that of the 2012 Nissan LEAF system that was benchmarked in FY15. The main reason for the difference is that the Accord power module used a metalized-ceramic substrate and eliminated the thermal interface material layers. FEA models were developed to study the transient performance of the 2012 Nissan LEAF, 2014 Accord, and two other systems that feature conventional power module designs. The simulation results indicate that the 2012 LEAF power module has the lowest thermal impedance at time scales of less than one second. This is probably due to moving low-thermal-conductivity materials further away from the heat source and enhancing the heat-spreading effect of the copper-molybdenum plate close to the insulated gate bipolar transistors. When approaching steady state, the Honda system shows lower thermal impedance. Measurement results for the thermal resistance of the 2015 BMW i3 power electronics system indicate that the i3 insulated gate bipolar transistor module has significantly lower junction
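
    The area-normalized thermal resistance figure quoted above (mm²·K/W) is, in essence, junction temperature rise divided by heat flux. A minimal sketch with invented numbers (not NREL's measured data):

```python
def specific_thermal_resistance(t_junction, t_liquid, power_w, area_mm2):
    """Area-normalized junction-to-liquid thermal resistance (mm^2*K/W).

    R''_th = (T_j - T_liquid) / (P / A): temperature rise per unit heat
    flux, the figure of merit quoted for the benchmarked power modules.
    """
    return (t_junction - t_liquid) * area_mm2 / power_w

# Invented example: a 60 K rise at 200 W over a 150 mm^2 heated area.
print(specific_thermal_resistance(125.0, 65.0, 200.0, 150.0))  # → 45.0
```

Normalizing by area lets modules of different sizes be compared on an equal footing, which is why the report quotes mm²·K/W rather than K/W.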

  18. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres

    Science.gov (United States)

    2010-01-01

    Background Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the benchmarking process and the success factors of benchmarking in international specialized cancer centres. Methods Three independent international benchmarking studies on operations management in cancer centres were conducted. The first study included three comprehensive cancer centres (CCC), three chemotherapy day units (CDU) were involved in the second study, and four radiotherapy departments were included in the final study. For each multiple case study a research protocol was used to structure the benchmarking process. After reviewing the multiple case studies, the resulting description was used to study the research objectives. Results We adapted and evaluated existing benchmarking processes by formalizing stakeholder involvement and verifying the comparability of the partners. We also devised a framework to structure the indicators, to produce a coherent indicator set and better improvement suggestions. Evaluating the feasibility of benchmarking as a tool to improve hospital processes led to mixed results. Case study 1 resulted in general recommendations for the organizations involved. In case study 2, the combination of benchmarking and lean management led in one CDU to a 24% increase in bed utilization and a 12% increase in productivity. The three radiotherapy departments of case study 3 were considering implementing the recommendations. Additionally, success factors were found, such as a well-defined and small project scope, partner selection based on clear criteria, stakeholder involvement, simple and well-structured indicators, analysis of both the process and its results, and adaptation of the identified better working methods to one's own setting.
Conclusions The improved

  19. Benchmarking NNWSI flow and transport codes: COVE 1 results

    International Nuclear Information System (INIS)

    Hayden, N.K.

    1985-06-01

    The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs

  20. Verification of the code DYN3D/R with the help of international benchmarks

    International Nuclear Information System (INIS)

    Grundmann, U.; Rohde, U.

    1997-10-01

    Different benchmarks for reactors with quadratic fuel assemblies were calculated with the code DYN3D/R. In this report comparisons with the results of the reference solutions are carried out. The results of DYN3D/R and the reference calculation for the eigenvalue k_eff and the power distribution are shown for the steady-state 3-dimensional IAEA benchmark. The results of the NEACRP benchmarks on control rod ejections in a standard PWR were compared with the reference solutions published by the NEA Data Bank. For assessing the accuracy of DYN3D/R results in comparison to other codes, the deviations from the reference solutions are considered. Detailed comparisons with the published reference solutions of the NEA-NSC benchmarks on uncontrolled withdrawal of control rods are made. The influence of the axial nodalization is also investigated. All in all, good agreement of the DYN3D/R results with the reference solutions can be seen for the considered benchmark problems. (orig.) [de
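
    Deviation-from-reference comparisons of the kind mentioned here typically reduce to simple summary statistics over a power distribution. A generic sketch with invented values, not DYN3D/R results:

```python
import math

# Invented nodal power values: a code result and a reference solution.
result    = [1.02, 0.98, 1.05, 0.95, 1.00]
reference = [1.00, 1.00, 1.04, 0.96, 1.00]

# Percent deviations, plus the maximum and RMS summary figures commonly
# used when assessing agreement with a published reference solution.
dev = [100.0 * (r - ref) / ref for r, ref in zip(result, reference)]
max_dev = max(abs(d) for d in dev)
rms_dev = math.sqrt(sum(d * d for d in dev) / len(dev))
print(round(max_dev, 2), round(rms_dev, 2))  # → 2.0 1.41
```

The same two figures, reported per benchmark case, make codes directly comparable even when their nodal solutions differ in detail.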

  1. Assessment of non-typical worsening of myocardial perfusion in rest in comparison to stress in 99mTc-MIBI SPECT studies. Preliminary report

    International Nuclear Information System (INIS)

    Dabrowski, A.; Szumilak, B.; Wnuk, J.; Konieczna, S.; Teresinska, A.

    2002-01-01

    Worsening of regional rest perfusion in comparison to stress perfusion, observed in a few percent of myocardial perfusion 99mTc-MIBI SPECT studies, does not have an easy clinical interpretation. Also, no reports evaluating the relationship between this worsening and technical SPECT study conditions are available. The goals of our study are: 1) to assess the reproducibility of this non-typical effect by repeating the rest study on a separate day after a new MIBI injection; 2) to assess the reproducibility of this effect in rest perfusion images performed at different time points after one MIBI injection; 3) to propose the most probable clinical explanation for this effect. Up to now, 20 patients (100 predicted altogether) with rest perfusion worsening in routine stress-rest 99mTc-MIBI SPECT perfusion imaging have been studied. The group was clinically inhomogeneous (7 patients with suspected coronary artery disease (CAD), 4 patients with CAD and no myocardial infarction (MI), 8 patients after MI, and 1 patient with a developmental anomaly). Within 14 days, the rest study was repeated, with data acquisition performed at 1 h and 3 h after MIBI injection. Regional myocardial perfusion was evaluated qualitatively in 17 segments of the LV and compared among the stress and all three rest (BAD-I, BAD-II, BAD-III) studies. In 175 segments there was perfusion worsening in at least one of the three rest studies. In the highest percentage of these segments (n=53, 30%), worsening was present in all rest studies. Among stress defects with perfusion worsening in BAD-I, the highest percentage (55%) presented worsening also in BAD-II (performed after a separate injection of MIBI but, as in BAD-I, 1 h after injection), a significantly lower percentage a persistent defect in BAD-II (25%), and a smaller percentage a transient defect in BAD-II (20%). In segments with perfusion worsening present in one of the rest studies, our preliminary results show: 1) the highest probability of

  2. TU Electric reactor physics model verification: Power reactor benchmark

    International Nuclear Information System (INIS)

    Willingham, C.E.; Killgore, M.R.

    1988-01-01

    Power reactor benchmark calculations using the advanced code package CASMO-3/SIMULATE-3 have been performed for six cycles of Prairie Island Unit 1. The reload fuel designs for the selected cycles included gadolinia as a burnable absorber, natural uranium axial blankets and increased water-to-fuel ratio. The calculated results for both startup reactor physics tests (boron endpoints, control rod worths, and isothermal temperature coefficients) and full power depletion results were compared to measured plant data. These comparisons show that the TU Electric reactor physics models accurately predict important measured parameters for power reactors

  3. Benchmark assemblies of the Los Alamos Critical Assemblies Facility

    International Nuclear Information System (INIS)

    Dowdy, E.J.

    1985-01-01

    Several critical assemblies of precisely known materials composition and easily calculated and reproducible geometries have been constructed at the Los Alamos National Laboratory. Some of these machines, notably Jezebel, Flattop, Big Ten, and Godiva, have been used as benchmark assemblies for the comparison of the results of experimental measurements and computation of certain nuclear reaction parameters. These experiments are used to validate both the input nuclear data and the computational methods. The machines and the applications of these machines for integral nuclear data checks are described

  4. Benchmark assemblies of the Los Alamos critical assemblies facility

    International Nuclear Information System (INIS)

    Dowdy, E.J.

    1986-01-01

    Several critical assemblies of precisely known materials composition and easily calculated and reproducible geometries have been constructed at the Los Alamos National Laboratory. Some of these machines, notably Jezebel, Flattop, Big Ten, and Godiva, have been used as benchmark assemblies for the comparison of the results of experimental measurements and computation of certain nuclear reaction parameters. These experiments are used to validate both the input nuclear data and the computational methods. The machines and the applications of these machines for integral nuclear data checks are described. (author)

  6. What Randomized Benchmarking Actually Measures

    International Nuclear Information System (INIS)

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; Sarovar, Mohan; Blume-Kohout, Robin

    2017-01-01

    Randomized benchmarking (RB) is widely used to measure the error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
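The decay fit described in this abstract can be sketched in a few lines. Assuming the standard single-exponential form P(m) = A·p^m + B with A = B = 1/2 for one qubit, and the conventional relation r = (d − 1)(1 − p)/d, a log-linear least squares recovers p; the data below are synthetic and noiseless:

```python
import math

# Hedged sketch: extract the RB decay rate from survival probabilities.
# Model (standard RB convention, not this paper's new theory):
#   P(m) = A * p**m + B, with A = B = 1/2 for a single qubit,
#   and r = (d - 1) * (1 - p) / d with d = 2.
def rb_error_rate(lengths, probs, A=0.5, B=0.5, d=2):
    # Linearize: log((P - B) / A) = m * log(p), then least squares for log(p).
    xs, ys = [], []
    for m, P in zip(lengths, probs):
        xs.append(m)
        ys.append(math.log((P - B) / A))
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    p = math.exp(slope)
    return (d - 1) * (1.0 - p) / d

lengths = [1, 2, 4, 8, 16, 32]
true_p = 0.99
probs = [0.5 * true_p**m + 0.5 for m in lengths]  # noiseless synthetic data
r = rb_error_rate(lengths, probs)  # recovers r = 0.005 for p = 0.99
```

With finite sampling one would instead fit A, B and p jointly (e.g. nonlinear least squares), since the survival probabilities are noisy estimates.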

  7. Benchmarking Commercial Conformer Ensemble Generators.

    Science.gov (United States)

    Friedrich, Nils-Ole; de Bruyn Kops, Christina; Flachsenberg, Florian; Sommer, Kai; Rarey, Matthias; Kirchmair, Johannes

    2017-11-27

    We assess and compare the performance of eight commercial conformer ensemble generators (ConfGen, ConfGenX, cxcalc, iCon, MOE LowModeMD, MOE Stochastic, MOE Conformation Import, and OMEGA) and one leading free algorithm, the distance geometry algorithm implemented in RDKit. The comparative study is based on a new version of the Platinum Diverse Dataset, a high-quality benchmarking dataset of 2859 protein-bound ligand conformations extracted from the PDB. Differences in the performance of commercial algorithms are much smaller than those observed for free algorithms in our previous study (J. Chem. Inf. Model. 2017, 57, 529-539). For commercial algorithms, the median minimum root-mean-square deviations measured between protein-bound ligand conformations and ensembles of a maximum of 250 conformers are between 0.46 and 0.61 Å. Commercial conformer ensemble generators are characterized by their high robustness, with at least 99% of all input molecules successfully processed and few or even no substantial geometrical errors detectable in their output conformations. The RDKit distance geometry algorithm (with minimization enabled) appears to be a good free alternative since its performance is comparable to that of the midranked commercial algorithms. Based on a statistical analysis, we elaborate on which algorithms to use and how to parametrize them for best performance in different application scenarios.
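The core metric of this kind of benchmark, the minimum RMSD between a protein-bound reference conformation and the members of a generated ensemble, can be sketched as follows. A real evaluation would first superimpose each conformer on the reference (e.g. via the Kabsch algorithm); this toy assumes pre-aligned coordinates, and all atom positions are invented:

```python
import math

# Hedged sketch of the benchmark metric: minimum heavy-atom RMSD between a
# reference (protein-bound) conformation and each member of an ensemble.
# Coordinates are assumed pre-aligned; real studies superimpose first.
def rmsd(a, b):
    n = len(a)
    return math.sqrt(sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                         for (ax, ay, az), (bx, by, bz) in zip(a, b)) / n)

def min_rmsd(reference, ensemble):
    return min(rmsd(reference, conf) for conf in ensemble)

# Invented 3-atom toy geometry (x, y, z in Å):
ref = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (1.5, 1.5, 0.0)]
ensemble = [
    [(0.0, 0.0, 0.0), (1.4, 0.0, 0.0), (1.4, 1.6, 0.0)],  # close match
    [(0.0, 0.0, 0.0), (0.0, 1.5, 0.0), (1.5, 1.5, 0.0)],  # different shape
]
best = min_rmsd(ref, ensemble)  # 0.1 Å here, from the first conformer
```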

  8. Benchmark tests of JENDL-1

    International Nuclear Information System (INIS)

    Kikuchi, Yasuyuki; Hasegawa, Akira; Takano, Hideki; Kamei, Takanobu; Hojuyama, Takeshi; Sasaki, Makoto; Seki, Yuji; Zukeran, Atsushi; Otake, Iwao.

    1982-02-01

    Various benchmark tests were made on JENDL-1. At the first stage, various core center characteristics were tested for many critical assemblies with a one-dimensional model. At the second stage, the applicability of JENDL-1 was further tested on more sophisticated problems for the MOZART and ZPPR-3 assemblies with a two-dimensional model. It was proved that JENDL-1 predicted various quantities of fast reactors satisfactorily as a whole. However, the following problems were pointed out: 1) There is a discrepancy of 0.9% in the k-eff values between the Pu and U cores. 2) The fission rate ratio of 239 Pu to 235 U is underestimated by 3%. 3) The Doppler reactivity coefficients are overestimated by about 10%. 4) The control rod worths are underestimated by 4%. 5) The fission rates of 235 U and 239 Pu are underestimated considerably in the outer core and radial blanket regions. 6) The negative sodium void reactivities are overestimated when the sodium is removed from the outer core. As a whole, most of the problems of JENDL-1 seem to be related to the neutron leakage and the neutron spectrum. It was found through further study that most of these problems came from too-small diffusion coefficients and too-large elastic removal cross sections above 100 keV, which are probably caused by overestimation of the total and elastic scattering cross sections for structural materials in the unresolved resonance region up to several MeV. (author)

  9. Human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-08-01

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional Test and Maintenance (TPM) procedures, with the aim of assessing the probability of test-induced failures, the probability of failures remaining unrevealed, and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient, with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used; to obtain insight into the causes and magnitude of the variability observed in the results; to try to identify preferred human reliability assessment approaches; and to get an understanding of the current state of the art in the field, identifying the limitations that are still inherent to the different approaches

  10. Revaluering benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    and questioning the concept objectively. This paper addresses the underlying nature of benchmarking, and accounts for the importance of focusing attention on the sociological impacts benchmarking has in organizations. To understand these sociological impacts, benchmarking research needs to transcend...... the perception of benchmarking systems as secondary and derivative and instead studying benchmarking as constitutive of social relations and as irredeemably social phenomena. I have attempted to do so in this paper by treating benchmarking using a calculative practice perspective, and describing how...

  11. Benchmarking of surgical complications in gynaecological oncology: prospective multicentre study.

    Science.gov (United States)

    Burnell, M; Iyer, R; Gentry-Maharaj, A; Nordin, A; Liston, R; Manchanda, R; Das, N; Gornall, R; Beardmore-Gray, A; Hillaby, K; Leeson, S; Linder, A; Lopes, A; Meechan, D; Mould, T; Nevin, J; Olaitan, A; Rufford, B; Shanbhag, S; Thackeray, A; Wood, N; Reynolds, K; Ryan, A; Menon, U

    2016-12-01

    To explore the impact of risk-adjustment on surgical complication rates (CRs) for benchmarking gynaecological oncology centres. Prospective cohort study. Ten UK accredited gynaecological oncology centres. Women undergoing major surgery on a gynaecological oncology operating list. Patient co-morbidity, surgical procedures and intra-operative (IntraOp) complications were recorded contemporaneously by surgeons for 2948 major surgical procedures. Postoperative (PostOp) complications were collected from hospitals and patients. Risk-prediction models for IntraOp and PostOp complications were created using penalised (lasso) logistic regression using over 30 potential patient/surgical risk factors. Observed and risk-adjusted IntraOp and PostOp CRs for individual hospitals were calculated. Benchmarking using colour-coded funnel plots and observed-to-expected ratios was undertaken. Overall, IntraOp CR was 4.7% (95% CI 4.0-5.6) and PostOp CR was 25.7% (95% CI 23.7-28.2). The observed CRs for all hospitals were under the upper 95% control limit for both IntraOp and PostOp funnel plots. Risk-adjustment and use of observed-to-expected ratio resulted in one hospital moving to the >95-98% CI (red) band for IntraOp CRs. Use of only hospital-reported data for PostOp CRs would have resulted in one hospital being unfairly allocated to the red band. There was little concordance between IntraOp and PostOp CRs. The funnel plots and overall IntraOp (≈5%) and PostOp (≈26%) CRs could be used for benchmarking gynaecological oncology centres. Hospital benchmarking using risk-adjusted CRs allows fairer institutional comparison. IntraOp and PostOp CRs are best assessed separately. As hospital under-reporting is common for postoperative complications, use of patient-reported outcomes is important. Risk-adjusted benchmarking of surgical complications for ten UK gynaecological oncology centres allows fairer comparison. © 2016 Royal College of Obstetricians and Gynaecologists.
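The observed-to-expected benchmarking described above can be illustrated with a minimal sketch. In practice the expected per-patient complication risks would come from the penalised (lasso) regression model; here they are invented numbers, and the control limits use a simple normal approximation rather than exact binomial limits:

```python
import math

# Hedged sketch of risk-adjusted benchmarking via an observed-to-expected
# (O/E) complication ratio, in the spirit of the funnel plots above.
# Expected per-patient risks are illustrative, not model outputs.
def oe_ratio_band(observed, expected_probs, z95=1.96, z998=3.09):
    expected = sum(expected_probs)               # expected complication count
    var = sum(p * (1 - p) for p in expected_probs)
    oe = observed / expected
    se = math.sqrt(var) / expected               # approx. SE of O/E
    if abs(oe - 1.0) <= z95 * se:
        return oe, "in control (within 95% limits)"
    elif abs(oe - 1.0) <= z998 * se:
        return oe, "warning (between 95% and 99.8% limits)"
    return oe, "out of control (beyond 99.8% limits)"

# A hospital with 40 observed PostOp complications against model-predicted
# per-patient risks of ~26% (matching the overall PostOp CR quoted above):
risks = [0.26] * 150
oe, band = oe_ratio_band(40, risks)
```

The same O/E statistic is what moves a hospital between colour bands once risk-adjustment replaces raw complication rates.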

  12. Energy efficiency benchmarking of energy-intensive industries in Taiwan

    International Nuclear Information System (INIS)

    Chan, David Yih-Liang; Huang, Chi-Feng; Lin, Wei-Chun; Hong, Gui-Bing

    2014-01-01

    Highlights: • Analytical tool was applied to estimate the energy efficiency indicator of energy intensive industries in Taiwan. • The carbon dioxide emission intensity in selected energy-intensive industries is also evaluated in this study. • The obtained energy efficiency indicator can serve as a base case for comparison to the other regions in the world. • This analysis results can serve as a benchmark for selected energy-intensive industries. - Abstract: Taiwan imports approximately 97.9% of its primary energy as rapid economic development has significantly increased energy and electricity demands. Increased energy efficiency is necessary for industry to comply with energy-efficiency indicators and benchmarking. Benchmarking is applied in this work as an analytical tool to estimate the energy-efficiency indicators of major energy-intensive industries in Taiwan and then compare them to other regions of the world. In addition, the carbon dioxide emission intensity in the iron and steel, chemical, cement, textile and pulp and paper industries are evaluated in this study. In the iron and steel industry, the energy improvement potential of blast furnace–basic oxygen furnace (BF–BOF) based on BPT (best practice technology) is about 28%. Between 2007 and 2011, the average specific energy consumption (SEC) of styrene monomer (SM), purified terephthalic acid (PTA) and low-density polyethylene (LDPE) was 9.6 GJ/ton, 5.3 GJ/ton and 9.1 GJ/ton, respectively. The energy efficiency of pulping would be improved by 33% if BAT (best available technology) were applied. The analysis results can serve as a benchmark for these industries and as a base case for stimulating changes aimed at more efficient energy utilization
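The improvement potentials quoted above follow from comparing actual specific energy consumption (SEC) with a best-practice reference, i.e. potential = 1 − SEC_best / SEC_actual. A minimal sketch, where the SEC for styrene monomer (9.6 GJ/ton) comes from the abstract but the best-practice value is an assumed figure:

```python
# Hedged sketch of the energy-efficiency indicator: the improvement
# potential of a process relative to best practice technology (BPT).
def improvement_potential(sec_actual, sec_best):
    # Fraction of current energy use that best practice would avoid.
    return 1.0 - sec_best / sec_actual

# SEC of styrene monomer from the abstract; the BPT value of 7.2 GJ/ton
# is a hypothetical assumption for illustration only.
sm_potential = improvement_potential(9.6, 7.2)  # 0.25, i.e. 25%
```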

  13. International benchmark on the natural convection test in Phenix reactor

    International Nuclear Information System (INIS)

    Tenchine, D.; Pialla, D.; Fanning, T.H.; Thomas, J.W.; Chellapandi, P.; Shvetsov, Y.; Maas, L.; Jeong, H.-Y.; Mikityuk, K.; Chenu, A.; Mochizuki, H.; Monti, S.

    2013-01-01

    Highlights: ► Phenix main characteristics, instrumentation and natural convection test are described. ► “Blind” calculations and post-test calculations from all the participants to the benchmark are compared to reactor data. ► Lessons learned from the natural convection test and the associated calculations are discussed. -- Abstract: The French Phenix sodium cooled fast reactor (SFR) started operation in 1973 and was stopped in 2009. Before the reactor was definitively shutdown, several final tests were planned and performed, including a natural convection test in the primary circuit. During this natural convection test, the heat rejection provided by the steam generators was disabled, followed several minutes later by reactor scram and coast-down of the primary pumps. The International Atomic Energy Agency (IAEA) launched a Coordinated Research Project (CRP) named “control rod withdrawal and sodium natural circulation tests performed during the Phenix end-of-life experiments”. The overall purpose of the CRP was to improve the Member States’ analytical capabilities in the field of SFR safety. An international benchmark on the natural convection test was organized with “blind” calculations in a first step, then “post-test” calculations and sensitivity studies compared with reactor measurements. Eight organizations from seven Member States took part in the benchmark: ANL (USA), CEA (France), IGCAR (India), IPPE (Russian Federation), IRSN (France), KAERI (Korea), PSI (Switzerland) and University of Fukui (Japan). Each organization performed computations and contributed to the analysis and global recommendations. This paper summarizes the findings of the CRP benchmark exercise associated with the Phenix natural convection test, including blind calculations, post-test calculations and comparisons with measured data. General comments and recommendations are pointed out to improve future simulations of natural convection in SFRs

  14. IAEA sodium void reactivity benchmark calculations

    International Nuclear Information System (INIS)

    Hill, R.N.; Finck, P.J.

    1992-01-01

    In this paper, the IAEA 1992 "Benchmark Calculation of Sodium Void Reactivity Effect in Fast Reactor Core" problem is evaluated. The proposed design is a large axially heterogeneous oxide-fueled fast reactor as described in Section 2; the core utilizes a sodium plenum above the core to enhance leakage effects. The calculation methods used in this benchmark evaluation are described in Section 3. In Section 4, the calculated core performance results for the benchmark reactor model are presented; and in Section 5, the influence of steel and interstitial sodium heterogeneity effects is estimated

  15. Benchmarking gate-based quantum computers

    Science.gov (United States)

    Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans

    2017-11-01

    With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
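The identity-circuit idea above can be illustrated with a toy simulation: apply a slightly over-rotated X gate an even number of times (ideally the identity) and track the survival probability of |0⟩. The error model and all numbers are illustrative, not those of the paper:

```python
import math

# Hedged toy of an identity-circuit benchmark: a gate intended as X is
# over-rotated by eps, so pairs of gates no longer compose to the identity.
def x_gate(eps):
    th = (math.pi + eps) / 2
    c, s = math.cos(th), -1j * math.sin(th)
    return [[c, s], [s, c]]  # the rotation exp(-i*(pi+eps)/2 * X)

def apply(gate, state):
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

def survival(n_pairs, eps):
    # Start in |0>, apply 2*n_pairs gates, return probability of |0>.
    state = [1.0 + 0j, 0.0 + 0j]
    for _ in range(2 * n_pairs):
        state = apply(x_gate(eps), state)
    return abs(state[0]) ** 2

perfect = survival(10, 0.0)   # identity circuit: survival stays at 1
noisy = survival(10, 0.05)    # coherent over-rotation degrades survival
```

Because the circuit is ideally the identity, any drop in survival probability is directly attributable to gate error, which is what makes such circuits simple, scalable benchmarks.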

  16. Benchmark Imagery FY11 Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pope, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-06-14

    This report details the work performed in FY11 under project LL11-GS-PD06, “Benchmark Imagery for Assessing Geospatial Semantic Extraction Algorithms.” The original LCP for the Benchmark Imagery project called for creating a set of benchmark imagery for verifying and validating algorithms that extract semantic content from imagery. More specifically, the first year was slated to deliver real imagery that had been annotated, the second year to deliver real imagery that had composited features, and the final year was to deliver synthetic imagery modeled after the real imagery.

  17. Comparison of VNEM to measured data from Ringhals unit 3. (Phase 3)

    International Nuclear Information System (INIS)

    Tsuiki, M.; Mullet, S.

    2011-01-01

    1. PWR. Comparisons have been made of the PWR core simulator CYGNUS, with the VNEM neutronics module, to the measured data obtained from the Ringhals unit 3 NPP through cycle 1A (core average burnup = 0 through 10,507 MWD/MT). The results can be summarized as: core eigenvalue = 0.99937 +/- 0.00086 before the intermediate 5-month shutdown; core eigenvalue = 0.99647 +/- 0.00029 after the intermediate 5-month shutdown. The reason for the core eigenvalue drop after the intermediate shutdown is estimated to be the build-up of fissile elements during the long shutdown. A calculation model to track some important isotopes in addition to Xe135 and Sm149 (the isotopes tracked in the present version of CYGNUS) has to be implemented. As for the comparison of the neutron detector readings, the agreement was excellent throughout cycle 1A, as observed in Phases 1 and 2 (2008, 2009). The burnup tilt effect was not observed during cycle 1A. The verification of the burnup tilt model of CYGNUS will be performed in the next phase of the project. 2. BWR. A preliminary 2D numerical benchmarking was performed for BWR cores. The problems were generated by imitating the NEACRP MOX PWR 2D benchmark problems. The results of comparisons of VNEM to a reference transport code (FCM2D), based on the method of characteristics, were as good as those obtained for PWR cores in similar benchmarking. (Author)

  18. LWR containment thermal hydraulic codes benchmark demona B3 exercise

    International Nuclear Information System (INIS)

    Della Loggia, E.; Gauvain, J.

    1988-01-01

    Recent discussions about the aerosol codes currently used for the analysis of containment retention capabilities have revealed a number of questions concerning the reliability and verification of the thermal-hydraulic modules of these codes, with respect to the validity of the implemented physical models and the stability and effectiveness of the numerical schemes. Since these codes are used for the calculation of the source term in the assessment of radiological consequences of severe accidents, they are an important part of reactor safety evaluation. For this reason the Commission of the European Communities (CEC), following the recommendation made by experts from Member States, is promoting research in this field, with the aim also of establishing and increasing collaboration among research organisations of member countries. In view of the results of the studies, the CEC has decided to carry out a benchmark exercise for severe accident containment thermal-hydraulics codes. This exercise is based on experiment B3 in the DEMONA programme. The main objective of the benchmark exercise has been to assess the ability of the participating codes to predict atmosphere saturation levels and bulk condensation rates under conditions similar to those predicted to follow a severe accident in a PWR. This exercise follows logically from the LA-4 exercise, which is related to an experiment with a simpler internal geometry. We present here the results obtained so far, and from them draw preliminary conclusions concerning condensation temperature, pressure and flow rates in the reactor containment

  19. Shielding benchmark tests of JENDL-3

    International Nuclear Information System (INIS)

    Kawai, Masayoshi; Hasegawa, Akira; Ueki, Kohtaro; Yamano, Naoki; Sasaki, Kenji; Matsumoto, Yoshihiro; Takemura, Morio; Ohtani, Nobuo; Sakurai, Kiyoshi.

    1994-03-01

    The integral test of neutron cross sections for major shielding materials in JENDL-3 has been performed by analyzing various shielding benchmark experiments. For the fission-like neutron source problem, the following experiments are analyzed: (1) ORNL Broomstick experiments for oxygen, iron and sodium, (2) ASPIS deep penetration experiments for iron, (3) ORNL neutron transmission experiments for iron, stainless steel, sodium and graphite, (4) KfK leakage spectrum measurements from iron spheres, (5) RPI angular neutron spectrum measurements in a graphite block. For the D-T neutron source problem, the following two experiments are analyzed: (6) LLNL leakage spectrum measurements from spheres of iron and graphite, and (7) JAERI-FNS angular neutron spectrum measurements on beryllium and graphite slabs. Analyses have been performed using the radiation transport codes ANISN (1D Sn), DIAC (1D Sn), DOT3.5 (2D Sn) and MCNP (3D point Monte Carlo). The group cross sections for Sn transport calculations are generated with the code systems PROF-GROUCH-G/B and RADHEAT-V4. The point-wise cross sections for MCNP are produced with NJOY. For comparison, analyses with JENDL-2 and ENDF/B-IV have also been carried out. The calculations using JENDL-3 show overall agreement with the experimental data, as do those with ENDF/B-IV. Particularly, JENDL-3 gives better results than JENDL-2 and ENDF/B-IV for sodium. It has been concluded that JENDL-3 is highly applicable for fission and fusion reactor shielding analyses. (author)

  20. BN-600 hybrid core benchmark analyses

    International Nuclear Information System (INIS)

    Kim, Y.I.; Stanculescu, A.; Finck, P.; Hill, R.N.; Grimm, K.N.

    2003-01-01

    Benchmark analyses for the hybrid BN-600 reactor, which contains three uranium enrichment zones and one plutonium zone in the core, have been performed within the frame of an IAEA-sponsored Coordinated Research Project. The results for several relevant reactivity parameters obtained by the participants with their own state-of-the-art basic data and codes were compared in terms of calculational uncertainty, and their effects on the ULOF transient behavior of the hybrid BN-600 core were evaluated. The comparison of the diffusion and transport results obtained for the homogeneous representation generally shows good agreement for most parameters between the RZ and HEX-Z models. The burnup effect and the heterogeneity effect on most reactivity parameters also show good agreement for the HEX-Z diffusion and transport theory results. A large difference noticed for the sodium and steel density coefficients is mainly due to differences in the spatial coefficient predictions for non-fuelled regions. The burnup reactivity loss was evaluated to be 0.025 (4.3 $) within a ∼ 5.0% standard deviation. The heterogeneity effect on most reactivity coefficients was estimated to be small. The heterogeneity treatment reduced the control rod worth by 2.3%. The heterogeneity effect on the k-eff and control rod worth appeared to differ strongly depending on the heterogeneity treatment method. A substantial spread noticed for several reactivity coefficients did not have a significant impact on the transient behavior prediction. This result is attributable to compensating effects between several reactivity effects and the specific design of the partially MOX-fuelled hybrid core. (author)

  1. How benchmarking can improve patient nutrition.

    Science.gov (United States)

    Ellis, Jane

    Benchmarking is a tool that originated in business to enable organisations to compare their services with industry-wide best practice. Early last year the Department of Health published The Essence of Care, a benchmarking toolkit adapted for use in health care. It focuses on eight elements of care that are crucial to patients' experiences. Nurses and other health care professionals at a London NHS trust have begun a trust-wide benchmarking project. The aim is to improve patients' experiences of health care by sharing and comparing information, and by identifying examples of good practice and areas for improvement. The project began with two of the eight elements of The Essence of Care, with the intention of covering the rest later. This article describes the benchmarking process for nutrition and some of the consequent improvements in care.

  2. Measuring Distribution Performance? Benchmarking Warrants Your Attention

    Energy Technology Data Exchange (ETDEWEB)

    Ericson, Sean J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Alvarez, Paul [The Wired Group

    2018-04-13

    Identifying, designing, and measuring performance metrics is critical to securing customer value, but can be a difficult task. This article examines the use of benchmarks based on publicly available performance data to set challenging, yet fair, metrics and targets.

  3. Numerical methods: Analytical benchmarking in transport theory

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1988-01-01

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered

  4. Benchmarking Linked Open Data Management Systems

    NARCIS (Netherlands)

    R. Angles Rojas (Renzo); M.-D. Pham (Minh-Duc); P.A. Boncz (Peter)

    2014-01-01

    With inherent support for storing and analysing highly interconnected data, graph and RDF databases appear as natural solutions for developing Linked Open Data applications. However, current benchmarks for these database technologies do not fully attain the desirable characteristics

  5. Benchmarking ENDF/B-VII.0

    International Nuclear Information System (INIS)

    Marck, Steven C. van der

    2006-01-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding, many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), the Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6 Li, 7 Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D 2 O, H 2 O, concrete, polyethylene and teflon). For testing delayed neutron data, more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA, Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium and compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8 and JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257

  6. Benchmarks for dynamic multi-objective optimisation

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    Full Text Available When algorithms solve dynamic multi-objective optimisation problems (DMOOPs), benchmark functions should be used to determine whether the algorithm can overcome specific difficulties that can occur in real-world problems. However, for dynamic multi...

  7. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based...

  8. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure the feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse, based on a unified reference model for XML warehouses and featuring XML-specific structures, and its associated XQuery decision-support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  9. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes...... attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes....

  10. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Full Text Available Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
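The scoring system in point (2) can be sketched as a weighted combination of normalized data–model mismatches across processes. Everything below (process names, weights, numbers) is an illustrative assumption, not part of the paper's framework:

```python
# Sketch: combine per-process model-vs-benchmark mismatches into one score.
# Process names, weights and values are invented for illustration.

def nrmse(model, obs):
    """Root-mean-square error normalized by the observed mean."""
    n = len(obs)
    rmse = (sum((m - o) ** 2 for m, o in zip(model, obs)) / n) ** 0.5
    return rmse / (sum(obs) / n)

def benchmark_score(runs, weights):
    """Weighted combination of per-process mismatches; lower is better."""
    total_w = sum(weights.values())
    return sum(weights[k] * nrmse(m, o) for k, (m, o) in runs.items()) / total_w

runs = {  # (model series, observed benchmark series), illustrative numbers
    "gpp":    ([10.0, 12.0, 11.0], [11.0, 12.0, 10.0]),
    "runoff": ([3.0, 4.0, 5.0],    [3.5, 4.0, 4.5]),
}
score = benchmark_score(runs, {"gpp": 2.0, "runoff": 1.0})
```

Normalizing each mismatch before weighting keeps processes with different units (carbon flux, water flux) comparable, which is the point of such a scoring system.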

  11. EPRI depletion benchmark calculations using PARAGON

    International Nuclear Information System (INIS)

    Kucukboyaci, Vefa N.

    2015-01-01

    Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • Benchmarks cover a wide range of enrichments, burnups, cooling times, and burnable absorbers, and different depletion and storage conditions. • Results from the PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII based data reduce the excess conservatism and bring the predictions closer to the benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse’s burnup credit methodology uses PARAGON™ (Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivity has been used to create 11 benchmark cases for 10, 20, 30, 40, 50, and 60 GWd/MTU and three cooling times (100 h, 5 years, and 15 years). These benchmark cases are analyzed with PARAGON and the SCALE package, and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to confirm that the 5% decrement approach is conservative for determining depletion uncertainty.
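The "5% decrement approach" mentioned above takes the depletion uncertainty as a fixed fraction of the reactivity decrement from depletion. A minimal sketch under that reading; the k values are invented for illustration and do not come from the abstract:

```python
# Sketch (illustrative numbers): uncertainty taken as 5% of the
# reactivity decrement between fresh and depleted fuel.

def depletion_uncertainty(k_fresh, k_depleted, fraction=0.05):
    """Return the assumed uncertainty as a fraction of the depletion decrement."""
    decrement = k_fresh - k_depleted   # reactivity decrement, in delta-k
    return fraction * decrement

dk = depletion_uncertainty(1.10, 0.95)   # 5% of a 0.15 delta-k decrement
```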

  12. A review on the benchmarking concept in Malaysian construction safety performance

    Science.gov (United States)

    Ishak, Nurfadzillah; Azizan, Muhammad Azizi

    2018-02-01

    The construction industry is one of the major industries propelling Malaysia's economy and contributes substantially to the nation's GDP growth, yet the high fatality rates on construction sites have caused concern among safety practitioners and stakeholders. Hence, there is a need for benchmarking the performance of Malaysia's construction industry, especially in terms of safety. This concept can create a fertile ground for ideas, but only in a receptive environment; organizations that share good practices and compare their safety performance against others benefit most in establishing an improved safety culture. This research was conducted to study awareness of the importance of benchmarking, to evaluate current practice and improvement, and to identify the constraints on implementing benchmarking of safety performance in the industry. Additionally, interviews with construction professionals brought out different views on this concept. A comparison has been made to show the different understandings of the benchmarking approach and of how safety performance can be benchmarked; these are viewed as one mission, namely to evaluate the objectives identified through benchmarking that will improve an organization's safety performance. Finally, the expected result of this research is to help Malaysia's construction industry implement best practice in safety performance management through the concept of benchmarking.

  13. Toxicological benchmarks for screening contaminants of potential concern for effects on freshwater biota

    International Nuclear Information System (INIS)

    Suter, G.W. II

    1996-01-01

    An important early step in the assessment of ecological risks at contaminated sites is the screening of chemicals detected on the site to identify those that constitute a potential risk. Part of this screening process is the comparison of measured ambient concentrations to concentrations that are believed to be nonhazardous, termed benchmarks. This article discusses 13 methods by which benchmarks may be derived for aquatic biota and presents benchmarks for 105 chemicals. It then compares them with respect to their sensitivity, availability, magnitude relative to background concentrations, and conceptual bases. This compilation is limited to chemicals that have been detected on the US Department of Energy's Oak Ridge Reservation (ORR) and to benchmarks derived from studies of toxic effects on freshwater organisms. The list of chemicals includes 45 metals and 56 industrial organic chemicals but only four pesticides. Although some individual values can be shown to be too high to be protective and others are too low to be useful for screening, none of the approaches to benchmark derivation can be rejected without further definition of what constitutes adequate protection. The most appropriate screening strategy is to use multiple benchmark values along with background concentrations, knowledge of waste composition, and physicochemical properties to identify contaminants of potential concern.
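The screening step described above amounts to retaining a chemical when its measured ambient concentration exceeds both a toxicological benchmark and the background concentration. A minimal sketch; the chemicals, units, and concentration values are invented for illustration:

```python
# Sketch: retain a chemical as a contaminant of potential concern (COPC)
# when the measured concentration exceeds both benchmark and background.
# All numbers below are illustrative, not the article's values.

def screen(measured, benchmark, background):
    """Return True if the chemical should be retained for further assessment."""
    return measured > benchmark and measured > background

chemicals = {
    # name: (measured, benchmark, background), all in ug/L, illustrative
    "zinc":   (120.0, 110.0, 30.0),
    "copper": (5.0,   9.0,   2.0),
}
copcs = [name for name, args in chemicals.items() if screen(*args)]
```

With several benchmark values per chemical, as the article recommends, the same test would be repeated against each value and the results weighed together.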

  14. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres

    NARCIS (Netherlands)

    van Lent, W.A.M.; de Beer, Relinde; van Harten, Willem H.

    2010-01-01

    Background Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the

  15. Benchmark analyses for EFF-1, -3 and FENDL-1, -2 beryllium data

    International Nuclear Information System (INIS)

    Fischer, U.; Wu, Y.

    1999-01-01

    The present article is part of the summary report on the Consultants' Meeting on the transport sublibrary of the Fusion Evaluated Data Library version 2.0. It reports on the comparison between beryllium benchmark experiments and Monte Carlo calculations, using different versions of the FENDL and EFF libraries

  16. Ad hoc committee on reactor physics benchmarks

    International Nuclear Information System (INIS)

    Diamond, D.J.; Mosteller, R.D.; Gehin, J.C.

    1996-01-01

    In the spring of 1994, an ad hoc committee on reactor physics benchmarks was formed under the leadership of two American Nuclear Society (ANS) organizations. The ANS-19 Standards Subcommittee of the Reactor Physics Division and the Computational Benchmark Problem Committee of the Mathematics and Computation Division had both seen a need for additional benchmarks to help validate computer codes used for light water reactor (LWR) neutronics calculations. Although individual organizations had employed various means to validate the reactor physics methods that they used for fuel management, operations, and safety, additional work in code development and refinement is under way, and to increase accuracy, there is a need for a corresponding increase in validation. Both organizations thought that there was a need to promulgate benchmarks based on measured data to supplement the LWR computational benchmarks that have been published in the past. By having an organized benchmark activity, the participants also gain by being able to discuss their problems and achievements with others traveling the same route

  17. Benchmarking for controllere: metoder, teknikker og muligheder

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels Erik; Dietrichson, Lars Grubbe

    2008-01-01

    Benchmarking figures in many ways in the management practice of both private and public enterprises. In management accounting, benchmark-based indicators (or key ratios) are used, for example when setting targets in performance contracts or to specify the desired level of certain key ratios in a Balanced...... Scorecard or similar performance management models. The article explains the concept of benchmarking by presenting and discussing its different facets, and describes four different applications of benchmarking to show the breadth of the concept and the importance of clarifying the purpose of a...... benchmarking project. It then treats the difference between results benchmarking and process benchmarking, followed by the use of internal versus external benchmarking and the use of benchmarking in budgeting and budget follow-up....

  18. Pool critical assembly pressure vessel facility benchmark

    International Nuclear Information System (INIS)

    Remec, I.; Kam, F.B.K.

    1997-07-01

    This pool critical assembly (PCA) pressure vessel wall facility benchmark (PCA benchmark) is described and analyzed in this report. Analysis of the PCA benchmark can be used for partial fulfillment of the requirements for the qualification of the methodology for pressure vessel neutron fluence calculations, as required by the US Nuclear Regulatory Commission regulatory guide DG-1053. Section 1 of this report describes the PCA benchmark and provides all data necessary for the benchmark analysis. The measured quantities, to be compared with the calculated values, are the equivalent fission fluxes. In Section 2 the analysis of the PCA benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed for three ENDF/B-VI-based multigroup libraries: BUGLE-93, SAILOR-95, and BUGLE-96. An excellent agreement of the calculated (C) and measured (M) equivalent fission fluxes was obtained. The arithmetic average C/M for all the dosimeters (total of 31) was 0.93 ± 0.03 and 0.92 ± 0.03 for the SAILOR-95 and BUGLE-96 libraries, respectively. The average C/M ratio, obtained with the BUGLE-93 library, for the 28 measurements was 0.93 ± 0.03 (the neptunium measurements in the water and air regions were overpredicted and excluded from the average). No systematic decrease in the C/M ratios with increasing distance from the core was observed for any of the libraries used.
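The C/M averaging reported above, including the exclusion of overpredicted dosimeters, can be sketched as follows; the dosimeter names and ratio values are illustrative, not the report's data:

```python
# Sketch (invented data): average calculated-to-measured (C/M) ratio with
# its spread, optionally excluding flagged dosimeters such as the
# overpredicted neptunium measurements mentioned in the abstract.

def average_cm(ratios, exclude=()):
    """Mean and population standard deviation of the retained C/M ratios."""
    kept = [r for name, r in ratios.items() if name not in exclude]
    mean = sum(kept) / len(kept)
    spread = (sum((r - mean) ** 2 for r in kept) / len(kept)) ** 0.5
    return mean, spread

ratios = {"rh103": 0.95, "in115": 0.93, "ni58": 0.91, "np237": 1.20}
mean_all, _ = average_cm(ratios)
mean_excl, _ = average_cm(ratios, exclude={"np237"})
```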

  19. A comparison of gallium-67 citrate scintigraphy and indium-111 labelled leukocyte imaging for the diagnosis of prosthetic joint infection. Preliminary results

    International Nuclear Information System (INIS)

    McKillop, J.H.; Cuthbert, G.F.; Gray, H.W.; McKay, Iain; Sturrock, R.D.

    1982-01-01

    Preliminary experience in comparing Gallium-67 imaging in patients with a painful prosthetic joint to the findings on Indium-111 labelled leukocyte imaging is reported. In the small series of patients so far studied, no clear advantage has emerged for either Gallium-67 or Indium-111 leukocyte imaging in terms of sensitivity or specificity for joint prosthesis infection. Should a larger group confirm the preliminary findings, Gallium-67 imaging may be preferable to Indium-111 leukocyte imaging in the patient with the painful joint prosthesis, in view of the greater simplicity of the former technique

  20. A Proposed Benchmark Problem for Scatter Calculations in Radiographic Modelling

    Science.gov (United States)

    Jaenisch, G.-R.; Bellon, C.; Schumm, A.; Tabary, J.; Duvauchelle, Ph.

    2009-03-01

    Code validation is a permanent concern in computer modelling, and has been addressed repeatedly in eddy current and ultrasonic modeling. A good benchmark problem is sufficiently simple to be taken into account by various codes without strong requirements on geometry representation capabilities, focuses on a few or even a single aspect of the problem at hand to facilitate interpretation and to avoid compound errors compensating one another, yields a quantitative result, and is experimentally accessible. In this paper we attempt to address code validation for one aspect of radiographic modeling, the scattered radiation prediction. Many NDT applications cannot neglect scattered radiation, and the scatter calculation is thus important to faithfully simulate the inspection situation. Our benchmark problem covers the wall thickness range of 10 to 50 mm for single wall inspections, with energies ranging from 100 to 500 keV in the first stage, and up to 1 MeV with wall thicknesses up to 70 mm in the extended stage. A simple plate geometry is sufficient for this purpose, and the scatter data are compared on a photon level, without a film model, which allows for comparisons with reference codes like MCNP. We compare results of three Monte Carlo codes (McRay, Sindbad and Moderato) as well as an analytical first order scattering code (VXI), and compare them with results obtained with MCNP. The comparison with an analytical scatter model provides insights into the application domain where this kind of approach can successfully replace Monte-Carlo calculations.

  1. Performance analysis of fusion nuclear-data benchmark experiments for light to heavy materials in MeV energy region with a neutron spectrum shifter

    International Nuclear Information System (INIS)

    Murata, Isao; Ohta, Masayuki; Miyamaru, Hiroyuki; Kondo, Keitaro; Yoshida, Shigeo; Iida, Toshiyuki; Ochiai, Kentaro; Konno, Chikara

    2011-01-01

    Nuclear data are indispensable for the development of fusion reactor candidate materials. However, benchmarking of the nuclear data in the MeV energy region is not yet adequate. In the present study, benchmark performance in the MeV energy region was investigated theoretically for experiments using a 14 MeV neutron source. We carried out a systematic analysis for light to heavy materials. As a result, the benchmark performance for the neutron spectrum was confirmed to be acceptable, while for gamma-rays it was not sufficiently accurate. Consequently, a spectrum shifter has to be applied. Beryllium had the best performance as a shifter. Moreover, a preliminary examination was made of whether it is really acceptable that only the spectrum before the last collision is considered in the benchmark performance analysis. It was pointed out that not only the last collision but also earlier collisions should be considered equally in the benchmark performance analysis.

  2. Peculiarity by Modeling of the Control Rod Movement by the Kalinin-3 Benchmark

    International Nuclear Information System (INIS)

    Nikonov, S. P.; Velkov, K.; Pautz, A.

    2010-01-01

    The paper presents an important part of the results of the OECD/NEA benchmark transient 'Switching off one main circulation pump at nominal power', analyzed as a boundary condition problem by the coupled system code ATHLET-BIPR-VVER. Some observations and comparisons with measured data for integral reactor parameters are discussed. Special attention is paid to the modeling of, and the comparisons performed for, the control rod movement and the reactor power history. (Authors)

  3. A Comparison between the Occurrence of Pauses, Repetitions and Recasts under Conditions of Face-to-Face and Computer-Mediated Communication: A Preliminary Study

    Science.gov (United States)

    Cabaroglu, Nese; Basaran, Suleyman; Roberts, Jon

    2010-01-01

    This study compares pauses, repetitions and recasts in matched task interactions under face-to-face and computer-mediated conditions. Six first-year English undergraduates at a Turkish University took part in Skype-based voice chat with a native speaker and face-to-face with their instructor. Preliminary quantitative analysis of transcripts showed…

  4. Raising Quality and Achievement. A College Guide to Benchmarking.

    Science.gov (United States)

    Owen, Jane

    This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…

  5. Clean Energy Manufacturing Analysis Center Benchmark Report: Framework and Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-05-23

    This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.

  6. Benchmarks: The Development of a New Approach to Student Evaluation.

    Science.gov (United States)

    Larter, Sylvia

    The Toronto Board of Education Benchmarks are libraries of reference materials that demonstrate student achievement at various levels. Each library contains video benchmarks, print benchmarks, a staff handbook, and summary and introductory documents. This book is about the development and the history of the benchmark program. It has taken over 3…

  7. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared software-only performance to GPU-accelerated performance. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40 GB NAND Flash parallel disk array, the Fusion-io. The Fusion system specs are as follows
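The out-of-core graph benchmark in item (2) performs level-set expansion, which is essentially breadth-first frontier expansion from a source vertex. A minimal in-memory sketch (the real benchmark runs out-of-core on scale-free graphs; the adjacency list here is illustrative):

```python
# Sketch: level-set (BFS frontier) expansion on a small directed graph.
# The real SISC benchmark streams a scale-free graph from storage.

def level_sets(adj, source):
    """Return the list of BFS levels (lists of vertices) reachable from source."""
    seen = {source}
    frontier = [source]
    levels = [frontier]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    nxt.append(v)
        if nxt:
            levels.append(nxt)
        frontier = nxt
    return levels

adj = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
print(level_sets(adj, 0))  # -> [[0], [1, 2], [3], [4]]
```

Out-of-core variants keep only the current frontier and visited set in memory and stream the edge lists from disk, which is what makes the access pattern storage-intensive.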

  8. Application of different measures of bioavailability to support the derivation of risk-based remedial benchmarks for PHC-contaminated sites

    Energy Technology Data Exchange (ETDEWEB)

    Stephenson, G. [Stantec Consulting Ltd., Surrey, BC (Canada)

    2009-07-01

    Risk estimates and exposure scenarios hardly ever take into consideration site-specific bioavailability of contaminants. Risk assessors frequently adopt the assumption that a contaminant in soils is 100 percent bioavailable, resulting in an overestimation of the risks associated with contamination. Remedial targets or benchmarks derived in light of this assumption are needlessly low and might be technically unattainable or prohibitive in terms of cost. This presentation discussed a research project whose goal was to develop a tool kit to measure or determine site-specific bioavailability of contaminants (PHCs) in soils to ecological receptors. Tools that were discussed included: biological measures such as toxicity tests, contaminant residues in tissues, and bioaccumulation tests. Chemical measures such as bioaccessibility tests and other biomimetic devices (SPMDs), biotic ligand modeling, and chemical extractions were also presented. Preliminary investigation results were provided. Other topics that were discussed included: single-species toxicity tests; preliminary comparisons; the site; bioaccumulation; and toxicity to earthworms. It was concluded that total soil and water-extractable concentrations did not correlate well with toxicity. tabs., figs.

  9. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    Change in construction is high on the agenda for the Danish government, and a comprehensive effort is being made to improve quality and efficiency. This has led to a governmental initiative to bring benchmarking into the Danish construction sector. This paper is an appraisal of benchmarking...... as it is presently carried out in the Danish construction sector. Many different perceptions of benchmarking, and of the nature of the construction sector, lead to uncertainty in how to perceive and use benchmarking, hence generating uncertainty in understanding the effects of benchmarking. This paper addresses...... perceptions of benchmarking will be presented: public benchmarking and best practice benchmarking. These two types of benchmarking are used to characterize and discuss the Danish benchmarking system and to examine which effects, possibilities and challenges follow in the wake of using this kind......

  10. Communication: A benchmark-quality, full-dimensional ab initio potential energy surface for Ar-HOCO

    International Nuclear Information System (INIS)

    Conte, Riccardo; Bowman, Joel M.; Houston, Paul L.

    2014-01-01

    A full-dimensional, global ab initio potential energy surface (PES) for the Ar-HOCO system is presented. The PES consists of a previous intramolecular ab initio PES for HOCO [J. Li, C. Xie, J. Ma, Y. Wang, R. Dawes, D. Xie, J. M. Bowman, and H. Guo, J. Phys. Chem. A 116, 5057 (2012)], plus a new permutationally invariant interaction potential based on fitting 12 432 UCCSD(T)-F12a/aVDZ counterpoise-corrected energies. The latter has a total rms fitting error of about 25 cm⁻¹ for fitted interaction energies up to roughly 12 000 cm⁻¹. Two additional fits are presented. One is a novel very compact permutational invariant representation, which contains terms only involving the Ar-atom distances. The rms fitting error for this fit is 193 cm⁻¹. The other fit is the widely used pairwise one. The pairwise fit to the entire data set has an rms fitting error of 427 cm⁻¹. All of these potentials are used in preliminary classical trajectory calculations of energy transfer with a focus on comparisons with the results using the benchmark potential
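The rms fitting errors quoted above (about 25, 193, and 427 cm⁻¹) are root-mean-square deviations between fitted and ab initio interaction energies. A sketch of that statistic, with invented energies:

```python
# Sketch: rms deviation between fitted and reference (ab initio) energies.
# The energy values below are illustrative, not from the Ar-HOCO data set.

def rms_error(fitted, reference):
    """Root-mean-square deviation between two equal-length energy lists."""
    n = len(reference)
    return (sum((f - r) ** 2 for f, r in zip(fitted, reference)) / n) ** 0.5

ab_initio = [0.0, 150.0, 420.0, 980.0]   # cm^-1, illustrative
fit       = [5.0, 140.0, 430.0, 975.0]
err = rms_error(fit, ab_initio)
```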

  11. Communication: A benchmark-quality, full-dimensional ab initio potential energy surface for Ar-HOCO

    Energy Technology Data Exchange (ETDEWEB)

    Conte, Riccardo, E-mail: riccardo.conte@emory.edu, E-mail: jmbowma@emory.edu; Bowman, Joel M., E-mail: riccardo.conte@emory.edu, E-mail: jmbowma@emory.edu [Department of Chemistry and Cherry L. Emerson Center for Scientific Calculation, Emory University, Atlanta, Georgia 30322 (United States); Houston, Paul L., E-mail: paul.houston@cos.gatech.edu [School of Chemistry and Biochemistry, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States)

    2014-04-21

    A full-dimensional, global ab initio potential energy surface (PES) for the Ar-HOCO system is presented. The PES consists of a previous intramolecular ab initio PES for HOCO [J. Li, C. Xie, J. Ma, Y. Wang, R. Dawes, D. Xie, J. M. Bowman, and H. Guo, J. Phys. Chem. A 116, 5057 (2012)], plus a new permutationally invariant interaction potential based on fitting 12 432 UCCSD(T)-F12a/aVDZ counterpoise-corrected energies. The latter has a total rms fitting error of about 25 cm{sup −1} for fitted interaction energies up to roughly 12 000 cm{sup −1}. Two additional fits are presented. One is a novel very compact permutational invariant representation, which contains terms only involving the Ar-atom distances. The rms fitting error for this fit is 193 cm{sup −1}. The other fit is the widely used pairwise one. The pairwise fit to the entire data set has an rms fitting error of 427 cm{sup −1}. All of these potentials are used in preliminary classical trajectory calculations of energy transfer with a focus on comparisons with the results using the benchmark potential.

  12. Synthesis of the OECD/NEA-PSI CFD benchmark exercise

    Energy Technology Data Exchange (ETDEWEB)

    Andreani, Michele, E-mail: Michele.andreani@psi.ch; Badillo, Arnoldo; Kapulla, Ralf

    2016-04-01

    Highlights: • A benchmark exercise on stratification erosion in containment was conducted using a test in the PANDA facility. • Blind calculations were provided by nineteen participants. • Results were compared with experimental data. • A ranking was made. • A large spread of results was observed, with very few simulations providing accurate results for the most important variables, though not for velocities. - Abstract: The third International Benchmark Exercise (IBE-3) conducted under the auspices of OECD/NEA is based on the comparison of blind CFD simulations with experimental data addressing the erosion of a stratified layer by an off-axis buoyant jet in a large vessel. The numerical benchmark exercise is based on a dedicated experiment in the PANDA facility conducted at the Paul Scherrer Institut (PSI) in Switzerland, using only one vessel. The use of non-prototypical fluids (i.e. helium as simulant for hydrogen, and air as simulant for steam), and the consequent absence of the complex physical effects produced by steam condensation enhanced the suitability of the data for CFD validation purposes. The test started with a helium–air layer at the top of the vessel and air in the lower part. The helium-rich layer was gradually eroded by a low-momentum air/helium jet emerging at a lower elevation. Blind calculation results were submitted by nineteen participants, and the calculation results have been compared with the PANDA data. This report, adopting the format of the reports for the two previous exercises, includes a ranking of the contributions, where the largest weight is given to the time progression of the erosion of the helium-rich layer. In accordance with the limited scope of the benchmark exercise, this report is more a collection of comparisons between calculated results and data than a synthesis. Therefore, the few conclusions are based on the mere observation of the agreement of the various submissions with the test result, and do not

  13. The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example

    Science.gov (United States)

    Steyn, H. J.

    2015-01-01

    Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…

  14. Criteria of benchmark selection for efficient flexible multibody system formalisms

    Directory of Open Access Journals (Sweden)

    Valášek M.

    2007-10-01

    Full Text Available The paper deals with the selection process of benchmarks for testing and comparing efficient flexible multibody formalisms. The existing benchmarks are briefly summarized. The purposes of benchmark selection are investigated. The result of this analysis is the formulation of the criteria of benchmark selection for flexible multibody formalisms. Based on them, the initial set of suitable benchmarks is described. Besides that, the evaluation measures are revised and extended.

  15. VisGraB: A Benchmark for Vision-Based Grasping. Paladyn Journal of Behavioral Robotics

    DEFF Research Database (Denmark)

    Kootstra, Gert; Popovic, Mila; Jørgensen, Jimmy Alison

    2012-01-01

    We present a database and a software tool, VisGraB, for benchmarking of methods for vision-based grasping of unknown objects with no prior object knowledge. The benchmark is a combined real-world and simulated experimental setup. Stereo images of real scenes containing several objects in different...... that a large number of grasps can be executed and evaluated while dealing with dynamics and the noise and uncertainty present in the real-world images. VisGraB enables a fair comparison among different grasping methods. The user furthermore does not need to deal with robot hardware, focusing on the vision......

  16. Inelastic finite element analysis of a pipe-elbow assembly (benchmark problem 2)

    Energy Technology Data Exchange (ETDEWEB)

    Knapp, H P [Internationale Atomreaktorbau GmbH (INTERATOM) Bergisch Gladbach (Germany); Prij, J [Netherlands Energy Research Foundation (ECN) Petten (Netherlands)

    1979-06-01

Within the scope of the international benchmark problem effort on piping systems, benchmark problem 2, consisting of a pipe-elbow assembly subjected to a time-dependent in-plane bending moment, was analysed using the finite element program MARC. Numerical results are presented and a comparison with experimental results is made. It is concluded that the main reason for the deviation between the calculated and measured values is that creep-plasticity interaction is not taken into account in the analysis. (author)

  17. A Comparative Study of Differential Evolution, Particle Swarm Optimization, and Evolutionary Algorithms on Numerical Benchmark Problems

    DEFF Research Database (Denmark)

    Vesterstrøm, Jacob Svaneborg; Thomsen, Rene

    2004-01-01

Several extensions to evolutionary algorithms (EAs) and particle swarm optimization (PSO) have been suggested during the last decades offering improved performance on selected benchmark problems. Recently, another search heuristic termed differential evolution (DE) has shown superior performance in several real-world applications. In this paper, we evaluate the performance of DE, PSO, and EAs regarding their general applicability as numerical optimization techniques. The comparison is performed on a suite of 34 widely used benchmark problems. The results from our study show that DE generally outperforms the other algorithms. However, on two noisy functions, both DE and PSO were outperformed by the EA.
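The DE scheme evaluated in comparisons like this one is commonly the classic DE/rand/1/bin variant. A minimal sketch on the sphere test function follows; the population size, F, CR, generation budget and the choice of test function are illustrative assumptions here, not the paper's actual experimental setup:

```python
import random

def sphere(x):
    """Sphere benchmark function: global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def differential_evolution(f, dim, bounds, pop_size=30, F=0.5, CR=0.9,
                           generations=200, seed=1):
    """DE/rand/1/bin on a box-constrained problem (illustrative settings)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fitness = [f(ind) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct individuals, all different from target i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))  # clip to the box
                else:
                    trial.append(pop[i][j])
            ft = f(trial)
            if ft <= fitness[i]:  # greedy one-to-one selection
                pop[i], fitness[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fitness[i])
    return pop[best], fitness[best]

best_x, best_f = differential_evolution(sphere, dim=10, bounds=(-5.0, 5.0))
print(best_f)
```

On a unimodal function such as the sphere, this sketch converges rapidly; the paper's finding is that such behaviour generalizes across most, but not all, of its 34-problem suite.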

  18. A proposal of a benchmark for calculation of the power distribution next to the absorber

    International Nuclear Information System (INIS)

    Temesvari, E.; Hordosy, G.; Maraczy, Cs.; Hegyi, Gy.; Kereszturi, A.

    1999-01-01

A proposal for a new benchmark problem was formulated to consider the characteristics of the VVER-440 fuel assembly with enrichment zoning, i.e., to study the space dependence of the power distribution near a control assembly. A quite detailed geometry and the material composition of the fuel and control assemblies were modelled with the help of MCNP calculations at AEKI. The results of the MCNP calculations were built into the KARATE code system as new albedo matrices. The comparison of the KARATE calculation results with the MCNP calculations for this benchmark is presented. (Authors)

  19. Toxicological benchmarks for wildlife: 1994 Revision

    International Nuclear Information System (INIS)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II.

    1994-09-01

The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report.

  20. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report.
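The first-tier screening described above reduces to a ratio test: divide the measured media concentration by the corresponding benchmark, and carry any chemical whose quotient exceeds one forward to the baseline assessment. A minimal sketch follows; the chemical names and every numeric value are illustrative placeholders, not figures from the report:

```python
def hazard_quotient(concentration, benchmark):
    """Tier-1 screening ratio: exposure concentration over the
    presumed-nonhazardous benchmark for the same medium and receptor."""
    if benchmark <= 0:
        raise ValueError("benchmark must be positive")
    return concentration / benchmark

# Hypothetical screening table: (measured concentration, benchmark),
# both in the same units for a given medium. Values are made up.
observations = {
    "cadmium": (4.0, 1.0),
    "zinc": (50.0, 120.0),
    "mercury": (0.5, 0.1),
}

# Chemicals with a hazard quotient above 1 are retained for the
# second-tier (baseline) ecological risk assessment.
flagged = sorted(
    chem for chem, (conc, bench) in observations.items()
    if hazard_quotient(conc, bench) > 1.0
)
print(flagged)
```

A quotient below one does not prove absence of risk; in the two-tier scheme it only means the chemical drops out of further screening for that medium and receptor.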

  1. A simplified 2D HTTR benchmark problem

    International Nuclear Information System (INIS)

    Zhang, Z.; Rahnema, F.; Pounders, J. M.; Zhang, D.; Ougouag, A.

    2009-01-01

To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of relevant whole-core configurations. In this paper we have created a numerical benchmark problem in a 2D configuration typical of a high temperature gas cooled prismatic core. This problem was derived from the HTTR start-up experiment. For code-to-code verification, complex details of geometry and material specification of the physical experiments are not necessary. To this end, the benchmark problem presented here is derived by simplifications that remove the unnecessary details while retaining the heterogeneity and major physics properties from the neutronics viewpoint. Also included here is a six-group material (macroscopic) cross section library for the benchmark problem. This library was generated using the lattice depletion code HELIOS. Using this library, benchmark-quality Monte Carlo solutions are provided for three different configurations (all-rods-in, partially-controlled and all-rods-out). The reference solutions include the core eigenvalue, block (assembly) averaged fuel pin fission density distributions, and absorption rate in absorbers (burnable poison and control rods). (authors)

  2. Lessons Learned on Benchmarking from the International Human Reliability Analysis Empirical Study

    International Nuclear Information System (INIS)

    Boring, Ronald L.; Forester, John A.; Bye, Andreas; Dang, Vinh N.; Lois, Erasmia

    2010-01-01

The International Human Reliability Analysis (HRA) Empirical Study is a comparative benchmark of the predictions of HRA methods against the performance of nuclear power plant crews in a control room simulator. There are a number of unique aspects to the present study that distinguish it from previous HRA benchmarks, most notably the emphasis on a method-to-data comparison instead of a method-to-method comparison. This paper reviews seven lessons learned about HRA benchmarking from conducting the study: (1) the dual purposes of the study afforded by joining another HRA study; (2) the importance of comparing not only quantitative but also qualitative aspects of HRA; (3) consideration of both negative and positive drivers on crew performance; (4) a relatively large sample size of crews; (5) the use of multiple methods and scenarios to provide a well-rounded view of HRA performance; (6) the importance of clearly defined human failure events; and (7) the use of a common comparison language to 'translate' the results of different HRA methods. These seven lessons learned highlight how the present study can serve as a useful template for future benchmarking studies.

  3. Lessons Learned on Benchmarking from the International Human Reliability Analysis Empirical Study

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; John A. Forester; Andreas Bye; Vinh N. Dang; Erasmia Lois

    2010-06-01

The International Human Reliability Analysis (HRA) Empirical Study is a comparative benchmark of the predictions of HRA methods against the performance of nuclear power plant crews in a control room simulator. There are a number of unique aspects to the present study that distinguish it from previous HRA benchmarks, most notably the emphasis on a method-to-data comparison instead of a method-to-method comparison. This paper reviews seven lessons learned about HRA benchmarking from conducting the study: (1) the dual purposes of the study afforded by joining another HRA study; (2) the importance of comparing not only quantitative but also qualitative aspects of HRA; (3) consideration of both negative and positive drivers on crew performance; (4) a relatively large sample size of crews; (5) the use of multiple methods and scenarios to provide a well-rounded view of HRA performance; (6) the importance of clearly defined human failure events; and (7) the use of a common comparison language to “translate” the results of different HRA methods. These seven lessons learned highlight how the present study can serve as a useful template for future benchmarking studies.

  4. Comparison of preliminary herpetofaunas of the Sierras la Madera (Oposura) and Bacadehuachi with the mainland Sierra Madre Occidental in Sonora, Mexico

    Science.gov (United States)

    Thomas R. Van Devender; Erik F. Enderson; Dale S. Turner; Roberto A. Villa; Stephen F. Hale; George M. Ferguson; Charles. Hedgcock

    2013-01-01

Amphibians and reptiles were observed in the Sierra La Madera (59 species), an isolated Sky Island mountain range, and the Sierra Bacadéhuachi (30 species), the westernmost range of the Sierra Madre Occidental (SMO), in east-central Sonora. These preliminary herpetofaunas were compared with the herpetofauna of the Yécora area in eastern Sonora in the main…

  5. Benchmark calculations of power distribution within assemblies

    International Nuclear Information System (INIS)

    Cavarec, C.; Perron, J.F.; Verwaerde, D.; West, J.P.

    1994-09-01

The main objective of this benchmark is to compare different techniques for fine flux prediction based upon coarse-mesh diffusion or transport calculations. We proposed five "core" configurations including different assembly types (17 x 17 pins; "uranium", "absorber" or "MOX" assemblies), with different boundary conditions. The specification required results in terms of reactivity, pin-by-pin fluxes and production rate distributions. The proposal for these benchmark calculations was made by J.C. LEFEBVRE, J. MONDOT and J.P. WEST, and the specification (with nuclear data, assembly types, core configurations for 2D geometry and results presentation) was distributed to correspondents of the OECD Nuclear Energy Agency. 11 countries and 19 companies answered the exercise proposed by this benchmark. Both heterogeneous and homogeneous calculations were made. Various methods were used to produce the results: diffusion (finite differences, nodal, ...), transport (Pij, SN, Monte Carlo). This report presents an analysis and intercomparison of all the results received.

  6. ZZ WPPR, Pu Recycling Benchmark Results

    International Nuclear Information System (INIS)

    Lutz, D.; Mattes, M.; Delpech, Marc; Juanola, Marc

    2002-01-01

Description of program or function: The NEA NSC Working Party on Physics of Plutonium Recycling has commissioned a series of benchmarks covering: - Plutonium recycling in pressurized-water reactors; - Void reactivity effect in pressurized-water reactors; - Fast plutonium-burner reactors: beginning of life; - Plutonium recycling in fast reactors; - Multiple recycling in advanced pressurized-water reactors. The results have been published (see references). ZZ-WPPR-1-A/B contains graphs and tables for the PWR MOX pin-cell benchmark, representing typical fuel for plutonium recycling, one case corresponding to a first cycle and the second to a fifth cycle. These computer-readable files contain the complete set of results, while the printed report contains only a subset. ZZ-WPPR-2-CYC1 contains the results from cycle 1 of the multiple recycling benchmarks.

  7. Interior beam searchlight semi-analytical benchmark

    International Nuclear Information System (INIS)

    Ganapol, Barry D.; Kornreich, Drew E.

    2008-01-01

    Multidimensional semi-analytical benchmarks to provide highly accurate standards to assess routine numerical particle transport algorithms are few and far between. Because of the well-established 1D theory for the analytical solution of the transport equation, it is sometimes possible to 'bootstrap' a 1D solution to generate a more comprehensive solution representation. Here, we consider the searchlight problem (SLP) as a multidimensional benchmark. A variation of the usual SLP is the interior beam SLP (IBSLP) where a beam source lies beneath the surface of a half space and emits directly towards the free surface. We consider the establishment of a new semi-analytical benchmark based on a new FN formulation. This problem is important in radiative transfer experimental analysis to determine cloud absorption and scattering properties. (authors)

  8. The national hydrologic bench-mark network

    Science.gov (United States)

    Cobb, Ernest D.; Biesecker, J.E.

    1971-01-01

    The United States is undergoing a dramatic growth of population and demands on its natural resources. The effects are widespread and often produce significant alterations of the environment. The hydrologic bench-mark network was established to provide data on stream basins which are little affected by these changes. The network is made up of selected stream basins which are not expected to be significantly altered by man. Data obtained from these basins can be used to document natural changes in hydrologic characteristics with time, to provide a better understanding of the hydrologic structure of natural basins, and to provide a comparative base for studying the effects of man on the hydrologic environment. There are 57 bench-mark basins in 37 States. These basins are in areas having a wide variety of climate and topography. The bench-mark basins and the types of data collected in the basins are described.

  9. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt

We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stay confidential; the banks, as clients, learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much debt. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping…

  10. Energy benchmarking of South Australian WWTPs.

    Science.gov (United States)

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment.

  11. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and including "external" gardening demands is investigated. This includes the impacts (in isolation and in combination) of the following: occupancy rates (1 to 4); roof size (12.5 m² to 100 m²); garden size (25 m² to 100 m²); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are presented throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn on the robustness of the proposed system.
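A band-rating scheme of the kind described above amounts to a threshold lookup on per-capita daily consumption. The sketch below illustrates the mechanism only; the band letters and litres-per-person-per-day thresholds are hypothetical placeholders, not the paper's (or the CSH's) actual figures:

```python
# Hypothetical band thresholds in litres per person per day, best first.
# Any behavioural or technological change that lowers consumption can
# move a household up a band, which is the point of the scheme.
BANDS = [
    (80.0, "A"),
    (100.0, "B"),
    (120.0, "C"),
    (140.0, "D"),
]

def water_band(daily_use_litres, occupants):
    """Map a household's per-capita daily water use onto a benchmark band."""
    if occupants <= 0:
        raise ValueError("occupants must be positive")
    per_capita = daily_use_litres / occupants
    for threshold, band in BANDS:
        if per_capita <= threshold:
            return band
    return "E"  # worse than the lowest benchmark band

print(water_band(300.0, 4))  # 75 l/person/day
print(water_band(450.0, 3))  # 150 l/person/day
```

Normalising by occupancy is the step that distinguishes a user-behaviour benchmark from a fixed per-dwelling target, which is one of the sensitivities the paper investigates.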

  12. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1.1 with MCNP6

    International Nuclear Information System (INIS)

    Marck, Steven C. van der

    2012-01-01

Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous energy Monte Carlo neutronics code MCNP, i.e. MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), the Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for ⁶Li, ⁷Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D₂O, H₂O, concrete, polyethylene and teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory. Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series. Such…

  13. Verification of the shift Monte Carlo code with the C5G7 reactor benchmark

    International Nuclear Information System (INIS)

    Sly, N. C.; Mervin, B. T.; Mosher, S. W.; Evans, T. M.; Wagner, J. C.; Maldonado, G. I.

    2012-01-01

    Shift is a new hybrid Monte Carlo/deterministic radiation transport code being developed at Oak Ridge National Laboratory. At its current stage of development, Shift includes a parallel Monte Carlo capability for simulating eigenvalue and fixed-source multigroup transport problems. This paper focuses on recent efforts to verify Shift's Monte Carlo component using the two-dimensional and three-dimensional C5G7 NEA benchmark problems. Comparisons were made between the benchmark eigenvalues and those output by the Shift code. In addition, mesh-based scalar flux tally results generated by Shift were compared to those obtained using MCNP5 on an identical model and tally grid. The Shift-generated eigenvalues were within three standard deviations of the benchmark and MCNP5-1.60 values in all cases. The flux tallies generated by Shift were found to be in very good agreement with those from MCNP. (authors)
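The agreement criterion reported above, eigenvalues within three standard deviations, can be expressed as a small check that combines the statistical uncertainties of the two codes, treating them as independent. The k-eff and sigma values in the usage lines below are illustrative numbers, not the actual Shift, MCNP5 or C5G7 results:

```python
import math

def within_k_sigma(k_calc, sigma_calc, k_ref, sigma_ref, k=3.0):
    """Check whether two Monte Carlo eigenvalues agree within k combined
    standard deviations, assuming independent statistical uncertainties."""
    combined = math.sqrt(sigma_calc ** 2 + sigma_ref ** 2)
    return abs(k_calc - k_ref) <= k * combined

# Illustrative comparison of a calculated eigenvalue against a reference.
print(within_k_sigma(1.18605, 0.0005, 1.18655, 0.0003))  # agrees
print(within_k_sigma(1.19500, 0.0005, 1.18655, 0.0003))  # does not
```

For a published benchmark value quoted without statistical uncertainty, sigma_ref can be set to zero and the test degenerates to a plain k-sigma interval around the calculated value.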

  14. MCNP benchmark analyses of critical experiments for the Space Nuclear Thermal Propulsion program

    International Nuclear Information System (INIS)

    Selcow, E.C.; Cerbone, R.J.; Ludewig, H.; Mughabghab, S.F.; Schmidt, E.; Todosow, M.; Parma, E.J.; Ball, R.M.; Hoovler, G.S.

    1993-01-01

Benchmark analyses have been performed of Particle Bed Reactor (PBR) critical experiments (CX) using the MCNP radiation transport code. The experiments have been conducted at the Sandia National Laboratory reactor facility in support of the Space Nuclear Thermal Propulsion (SNTP) program. The test reactor is a nineteen-element, water-moderated and water-reflected thermal system. A series of integral experiments have been carried out to test the capabilities of the radiation transport codes to predict the performance of PBR systems. MCNP was selected as the preferred radiation analysis tool for the benchmark experiments. Comparison between experimental and calculational results indicates close agreement. This paper describes the analyses of benchmark experiments designed to quantify the accuracy of the MCNP radiation transport code for predicting the performance characteristics of PBR reactors.

  15. MCNP benchmark analyses of critical experiments for the Space Nuclear Thermal Propulsion program

    Science.gov (United States)

    Selcow, Elizabeth C.; Cerbone, Ralph J.; Ludewig, Hans; Mughabghab, Said F.; Schmidt, Eldon; Todosow, Michael; Parma, Edward J.; Ball, Russell M.; Hoovler, Gary S.

    1993-01-01

Benchmark analyses have been performed of Particle Bed Reactor (PBR) critical experiments (CX) using the MCNP radiation transport code. The experiments have been conducted at the Sandia National Laboratory reactor facility in support of the Space Nuclear Thermal Propulsion (SNTP) program. The test reactor is a nineteen-element, water-moderated and water-reflected thermal system. A series of integral experiments have been carried out to test the capabilities of the radiation transport codes to predict the performance of PBR systems. MCNP was selected as the preferred radiation analysis tool for the benchmark experiments. Comparison between experimental and calculational results indicates close agreement. This paper describes the analyses of benchmark experiments designed to quantify the accuracy of the MCNP radiation transport code for predicting the performance characteristics of PBR reactors.

  16. Analysis of the OECD main steam line break benchmark using ANC-K/MIDAC code

    International Nuclear Information System (INIS)

    Aoki, Shigeaki; Tahara, Yoshihisa; Suemura, Takayuki; Ogawa, Junto

    2004-01-01

A three-dimensional (3D) neutronics and thermal-hydraulics (T/H) coupling code, ANC-K/MIDAC, has been developed. It is the combination of the 3D nodal kinetics code ANC-K and the 3D drift-flux thermal hydraulic code MIDAC. In order to verify the adequacy of this code, we have performed several international benchmark problems. In this paper, we show the calculation results for the "OECD Main Steam Line Break Benchmark (MSLB benchmark)", which poses a typical local power peaking problem. We calculated the return-to-power scenario of the Phase II problem. The comparison of the results shows very good agreement of important core parameters between ANC-K/MIDAC and other participant codes. (author)

  17. Toxicological benchmarks for wildlife: 1996 Revision

    International Nuclear Information System (INIS)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II.

    1996-06-01

The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  18. Benchmarking af kommunernes førtidspensionspraksis

    DEFF Research Database (Denmark)

    Gregersen, Ole

Every year, the Danish National Social Appeals Board (Den Sociale Ankestyrelse) publishes statistics on decisions in disability pension cases. Together with the annual statistics, results are published from a benchmarking model in which the number of awards in each municipality is compared with the expected number of awards had the municipality followed the same decision practice as the "average municipality", after correcting for the social structure of the municipality. The benchmarking model used to date is documented in Ole Gregersen (1994): Kommunernes Pensionspraksis, Servicerapport, Socialforskningsinstituttet. This note documents a…

  19. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Sawan, M.E.; Cheng, E.T.

    1985-01-01

Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li₁₇Pb₈₃ and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the TBR to group structure and weighting spectrum increases as the thickness and Li enrichment decrease, with up to 20% discrepancies for thin natural Li₁₇Pb₈₃ blankets.

  20. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Sawan, M.L.; Cheng, E.T.

    1986-01-01

Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li₁₇Pb₈₃ and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the tritium breeding ratio to group structure and weighting spectrum increases as the thickness and Li enrichment decrease, with up to 20% discrepancies for thin natural Li₁₇Pb₈₃ blankets. (author)