WorldWideScience

Sample records for benchmark specification sheet

  1. AER benchmark specification sheet

    International Nuclear Information System (INIS)

    Aszodi, A.; Toth, S.

    2009-01-01

    In VVER-440/213 type reactors, the core outlet temperature field is monitored with in-core thermocouples installed above 210 fuel assemblies. These measured temperatures are used in determining the fuel assembly powers and play an important role in reactor power limitation. For these reasons, correct interpretation of the thermocouple signals is an important question. To interpret the signals correctly, knowledge of the coolant mixing in the assembly heads is necessary. Computational Fluid Dynamics (CFD) codes and experiments can help to better understand these mixing processes, and they can provide information supporting a more adequate interpretation of the thermocouple signals. This benchmark deals with the 3D CFD modeling of the coolant mixing in the heads of the profiled fuel assemblies with 12.2 mm rod pitch. Two assemblies of the 23rd cycle of the Paks NPP's Unit 3 are investigated. One of them has a symmetrical pin power profile and the other possesses an inclined profile. (authors)

  3. HPCG Benchmark Technical Specification

    Energy Technology Data Exchange (ETDEWEB)

    Heroux, Michael Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States)

    2013-10-01

    The High Performance Conjugate Gradient (HPCG) benchmark [cite SNL, UTK reports] is a tool for ranking computer systems based on a simple additive Schwarz, symmetric Gauss-Seidel preconditioned conjugate gradient solver. HPCG is similar to the High Performance Linpack (HPL), or Top 500, benchmark [1] in its purpose, but HPCG is intended to better represent how today’s applications perform. In this paper we describe the technical details of HPCG: how it is designed and implemented, what code transformations are permitted, and how to interpret and report results.
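
The solver family HPCG ranks can be illustrated with a minimal sketch: conjugate gradient preconditioned by one symmetric Gauss-Seidel sweep, here applied to a dense 1-D Laplacian for clarity. This is an illustrative stand-in, not HPCG itself: the real benchmark uses a distributed sparse 27-point stencil problem, adds the additive Schwarz decomposition across processes, and imposes strict run and reporting rules not shown here.

```python
import numpy as np

def sym_gauss_seidel(A, r):
    """One symmetric Gauss-Seidel sweep (forward then backward), starting
    from z = 0, used as the preconditioner application z = M^{-1} r."""
    n = len(r)
    z = np.zeros(n)
    for i in range(n):  # forward sweep (upper terms vanish since z starts at 0)
        z[i] = (r[i] - A[i, :i] @ z[:i]) / A[i, i]
    for i in range(n - 1, -1, -1):  # backward sweep over the updated iterate
        z[i] = (r[i] - A[i, :i] @ z[:i] - A[i, i + 1:] @ z[i + 1:]) / A[i, i]
    return z

def pcg(A, b, tol=1e-8, max_iter=500):
    """Preconditioned conjugate gradient for a symmetric positive definite A."""
    x = np.zeros(len(b))
    r = b - A @ x
    z = sym_gauss_seidel(A, r)
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k + 1
        z = sym_gauss_seidel(A, r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Demo on a small SPD system (1-D Laplacian stencil [-1, 2, -1]).
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = pcg(A, b)
```

HPCG deliberately measures this irregular, memory-bound kernel rather than the dense, compute-bound HPL factorization, which is why its rankings track application performance more closely.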

  4. Benchmarking

    OpenAIRE

    Meylianti S., Brigita

    1999-01-01

    Benchmarking has different meanings to different people. There are five types of benchmarking, namely internal benchmarking, competitive benchmarking, industry/functional benchmarking, process/generic benchmarking and collaborative benchmarking. Each type of benchmarking has its own advantages as well as disadvantages. Therefore it is important to know which kind of benchmarking is suitable for a specific application. This paper will discuss those five types of benchmarking in detail, includ...

  5. Benchmark Testing of the Largest Titanium Aluminide Sheet Subelement Conducted

    Science.gov (United States)

    Bartolotta, Paul A.; Krause, David L.

    2000-01-01

    To evaluate wrought titanium aluminide (gamma TiAl) as a viable candidate material for the High-Speed Civil Transport (HSCT) exhaust nozzle, an international team led by the NASA Glenn Research Center at Lewis Field successfully fabricated and tested the largest gamma TiAl sheet structure ever manufactured. The gamma TiAl sheet structure, a 56-percent subscale divergent flap subelement, was fabricated for benchmark testing in three-point bending. Overall, the subelement was 84-cm (33-in.) long by 13-cm (5-in.) wide by 8-cm (3-in.) deep. Incorporated into the subelement were features that might be used in the fabrication of a full-scale divergent flap. These features include the use of: (1) gamma TiAl shear clips to join together sections of corrugations, (2) multiple gamma TiAl face sheets, (3) double hot-formed gamma TiAl corrugations, and (4) brazed joints. The structural integrity of the gamma TiAl sheet subelement was evaluated by conducting a room-temperature three-point static bend test.

  6. Development of parallel benchmark code by sheet metal forming simulator 'ITAS'

    International Nuclear Information System (INIS)

    Watanabe, Hiroshi; Suzuki, Shintaro; Minami, Kazuo

    1999-03-01

    This report describes the development of a parallel benchmark code based on the sheet metal forming simulator 'ITAS'. ITAS is a nonlinear elasto-plastic analysis program using the finite element method for the simulation of sheet metal forming. ITAS adopts a dynamic analysis method that computes the displacement of the sheet metal at every time step and uses the implicit method with a direct linear equation solver. The simulator is therefore very robust; however, it requires a lot of computational time and memory capacity. In developing the parallel benchmark code, we designed the code with MPI programming to reduce the computational time. In numerical experiments on five kinds of parallel supercomputers at CCSE JAERI, i.e., SP2, SR2201, SX-4, T94 and VPP300, good performance was observed. The results will be made public through the WWW so that the benchmark results may become a guideline for research and development of parallel programs. (author)

  7. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

    The CASL neutronics simulator MPACT is under development for coupled neutronics and thermal-hydraulics simulation of pressurized water reactors. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. It is a challenge to validate the depletion capability because of insufficient measured data. One alternative method of validation is to perform code-to-code comparisons for benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.

  8. FENDL neutronics benchmark: Specifications for the calculational neutronics and shielding benchmark

    International Nuclear Information System (INIS)

    Sawan, M.E.

    1994-12-01

    During the IAEA Advisory Group Meeting on ''Improved Evaluations and Integral Data Testing for FENDL'' held in Garching near Munich, Germany in the period 12-16 September 1994, the Working Group II on ''Experimental and Calculational Benchmarks on Fusion Neutronics for ITER'' recommended that a calculational benchmark representative of the ITER design should be developed. This report describes the neutronics and shielding calculational benchmark available for scientists interested in performing analysis for this benchmark. (author)

  9. Benchmarking

    OpenAIRE

    Beretta Sergio; Dossi Andrea; Grove Hugh

    2000-01-01

    Due to their particular nature, the benchmarking methodologies tend to exceed the boundaries of management techniques, and to enter the territories of managerial culture. A culture that is also destined to break into the accounting area not only strongly supporting the possibility of fixing targets, and measuring and comparing the performance (an aspect that is already innovative and that is worthy of attention), but also questioning one of the principles (or taboos) of the accounting or...

  10. The level 1 and 2 specification for parallel benchmark and a benchmark test of scalar-parallel computer SP2 based on the specifications

    International Nuclear Information System (INIS)

    Orii, Shigeo

    1998-06-01

    A benchmark specification for the performance evaluation of parallel computers for numerical analysis is proposed. The Level 1 benchmark, a conventional benchmark based on processing time, measures the performance of a computer running a code. The Level 2 benchmark proposed in this report is intended to explain the reasons for that performance. As an example, the scalar-parallel computer SP2 is evaluated with this benchmark specification using a molecular dynamics code. As a result, the main causes suppressing parallel performance are the maximum bandwidth and the start-up time of communication between nodes. In particular, the start-up time is proportional not only to the number of processors but also to the number of particles. (author)
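
The start-up-time finding above is the classic latency term of the simple latency-bandwidth communication model often used in such analyses. The sketch below illustrates that model; the function name and all numbers are illustrative assumptions, not values from the report.

```python
def comm_time(n_messages, bytes_per_message, latency_s, bandwidth_bytes_per_s):
    """Latency-bandwidth (Hockney-style) model: every message pays a fixed
    start-up latency plus a size-proportional transfer cost."""
    return n_messages * (latency_s + bytes_per_message / bandwidth_bytes_per_s)

# Many small messages are dominated by start-up cost: if a molecular dynamics
# step sends one small message per particle per neighbor node, the total
# start-up time grows with both the node count and the particle count.
t_small = comm_time(n_messages=10_000, bytes_per_message=8,
                    latency_s=1e-5, bandwidth_bytes_per_s=1e8)
t_large = comm_time(n_messages=1, bytes_per_message=80_000,
                    latency_s=1e-5, bandwidth_bytes_per_s=1e8)
```

Sending the same 80 kB as one message instead of 10,000 tiny ones cuts the modeled time by over two orders of magnitude, which is why aggregating messages is the standard remedy when a Level 2 style analysis flags start-up time as the bottleneck.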

  11. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  12. Assessment of Usability Benchmarks: Combining Standardized Scales with Specific Questions

    Directory of Open Access Journals (Sweden)

    Stephanie Bettina Linek

    2011-12-01

    The usability of Web sites and online services is of rising importance. When creating a completely new Web site, qualitative data are adequate for identifying most usability problems. However, changes to an existing Web site should be evaluated by a quantitative benchmarking process. This paper describes the creation of a questionnaire that allows quantitative usability benchmarking, i.e. a direct comparison of the different versions of a Web site and an orientation on general standards of usability. The questionnaire is also open for qualitative data. The methodology is explained using the digital library services of the ZBW.

  13. Embedded Volttron specification - benchmarking small footprint compute device for Volttron

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Woodworth, Ken [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nutaro, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kuruganti, Teja [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-08-17

    An embedded system is a small footprint computing unit that typically serves a specific purpose closely associated with measurements and control of hardware devices. These units are designed for reasonable durability and operations in a wide range of operating conditions. Some embedded systems support real-time operations and can demonstrate high levels of reliability. Many have failsafe mechanisms built to handle graceful shutdown of the device in exception conditions. The available memory, processing power, and network connectivity of these devices are limited due to the nature of their specific-purpose design and intended application. Industry practice is to carefully design the software for the available hardware capability to suit desired deployment needs. Volttron is an open source agent development and deployment platform designed to enable researchers to interact with devices and appliances without having to write drivers themselves. Hosting Volttron on small footprint embeddable devices enables its demonstration for embedded use. This report details the steps required and the experience in setting up and running Volttron applications on three small footprint devices: the Intel Next Unit of Computing (NUC), the Raspberry Pi 2, and the BeagleBone Black. In addition, the report also details preliminary investigation of the execution performance of Volttron on these devices.

  14. The Monte Carlo performance benchmark test - AIMS, specifications and first results

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Faculty of Applied Sciences, Delft University of Technology (Netherlands); Martin, William R., E-mail: wrm@umich.edu [Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI (United States); Petrovic, Bojan, E-mail: Bojan.Petrovic@gatech.edu [Nuclear and Radiological Engineering, Georgia Institute of Technology, Atlanta, GA (United States)

    2011-07-01

    The Monte Carlo performance benchmark for detailed power density calculation in a full-size reactor core is organized under the auspices of the OECD NEA Data Bank. It aims at monitoring, over a range of years, the increase in performance, measured in terms of standard deviation and computer time, of Monte Carlo calculation of the power density in small volumes. A short description of the reactor geometry and composition is given. One of the unique features of the benchmark exercise is the possibility for participants to upload results to a web site of the NEA Data Bank, which enables online analysis of results and graphical display of how near we are to the goal of doing a detailed power distribution calculation with acceptable statistical uncertainty in an acceptable computing time. First results show that 10 to 100 billion histories must be simulated to reach a standard deviation of a few percent in the estimated power of most of the requested fuel zones. Even when using a large supercomputer, a considerable speedup is still needed to reach the target of 1 hour computer time. An outlook is given of what to expect from this benchmark exercise over the years. Possible extensions of the benchmark for specific issues relevant in current Monte Carlo calculations for nuclear reactors are also discussed. (author)
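
The history counts quoted above follow from the generic 1/√N convergence of Monte Carlo tallies, which can be made concrete with a back-of-the-envelope helper. This is a standard scaling estimate, not code from the benchmark, and the numbers below are illustrative.

```python
def histories_for_target(n_ref, sigma_ref, sigma_target):
    """Monte Carlo standard deviation scales as 1/sqrt(N): shrinking sigma
    by a factor f requires f**2 times as many histories."""
    return n_ref * (sigma_ref / sigma_target) ** 2

# If 1e9 histories give a 5% relative standard deviation in a small fuel
# zone, reaching 1% requires 25 times more histories.
n_needed = histories_for_target(1e9, 0.05, 0.01)
```

The quadratic cost of extra precision is exactly why the benchmark tracks both standard deviation and wall-clock time: halving the uncertainty quadruples the work unless the figure of merit (1/(σ²·t)) improves.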

  16. Guidelines for Home Energy Upgrade Professionals: Standard Work Specifications for Multifamily Energy Upgrades (Fact Sheet)

    Energy Technology Data Exchange (ETDEWEB)

    2011-08-01

    This fact sheet provides essential information about the 2011 publication of the Workforce Guidelines for Multifamily Home Energy Upgrades, including their origin, their development with the help of industry leaders to create the standard work specifications for retrofit work.

  17. Specifications for adjusted cross section and covariance libraries based upon CSEWG fast reactor and dosimetry benchmarks

    International Nuclear Information System (INIS)

    Weisbin, C.R.; Marable, J.H.; Collins, P.J.; Cowan, C.L.; Peelle, R.W.; Salvatores, M.

    1979-06-01

    The present work proposes a specific plan of cross section library adjustment for fast reactor core physics analysis using information from fast reactor and dosimetry integral experiments and from differential data evaluations. This detailed exposition of the proposed approach is intended mainly to elicit review and criticism from scientists and engineers in the research, development, and design fields. This major attempt to develop useful adjusted libraries is based on the established benchmark integral data, accurate and well documented analysis techniques, sensitivities, and quantified uncertainties for nuclear data, integral experiment measurements, and calculational methodology. The adjustments to be obtained using these specifications are intended to produce an overall improvement, in the least-squares sense, in the quality of the data libraries, so that calculations of other similar systems using the adjusted data base with any credible method will produce results without much data-related bias. The adjustments obtained should provide specific recommendations to the data evaluation program to be weighed in the light of newer measurements, and also a vehicle for observing how the evaluation process is converging. This report specifies the calculational methodology to be used, the integral experiments to be employed initially, and the methods and integral experiment biases and uncertainties to be used. The sources of sensitivity coefficients, as well as the cross sections to be adjusted, are detailed. The formulae for sensitivity coefficients for fission spectral parameters are developed. A mathematical formulation of the least-squares adjustment problem is given, including biases and uncertainties in methods.
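
Schematically, the adjustment described above is a generalized least-squares update of the cross-section library. In one common notation (the symbols here are illustrative, not the report's own):

```latex
% Prior cross sections \sigma with covariance M, sensitivity matrix S,
% measured integral values E with covariance V, calculated values C(\sigma):
\sigma' = \sigma + M S^{T} \left( S M S^{T} + V \right)^{-1} \left( E - C(\sigma) \right)
```

Here the adjusted library σ′ minimizes the usual χ² combining differential-data and integral-experiment uncertainties; the formula is the standard GLS result, given only to orient the reader on what such a specification must pin down (sensitivities S, covariances M and V, and the integral-experiment biases entering E and C).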

  18. HEP specific benchmarks of virtual machines on multi-core CPU architectures

    International Nuclear Information System (INIS)

    Alef, M; Gable, I

    2010-01-01

    Virtualization technologies such as Xen can be used in order to satisfy the disparate and often incompatible system requirements of different user groups in shared-use computing facilities. This capability is particularly important for HEP applications, which often have restrictive requirements. The use of virtualization adds flexibility, however, it is essential that the virtualization technology place little overhead on the HEP application. We present an evaluation of the practicality of running HEP applications in multiple Virtual Machines (VMs) on a single multi-core Linux system. We use the benchmark suite used by the HEPiX CPU Benchmarking Working Group to give a quantitative evaluation relevant to the HEP community. Benchmarks are packaged inside VMs and then the VMs are booted onto a single multi-core system. Benchmarks are then simultaneously executed on each VM to simulate highly loaded VMs running HEP applications. These techniques are applied to a variety of multi-core CPU architectures and VM configurations.

  19. Gestational age specific neonatal survival in the State of Qatar (2003-2008) - a comparative study with international benchmarks.

    Science.gov (United States)

    Rahman, Sajjad; Salameh, Khalil; Al-Rifai, Hilal; Masoud, Ahmed; Lutfi, Samawal; Salama, Husam; Abdoh, Ghassan; Omar, Fahmi; Bener, Abdulbari

    2011-09-01

    To analyze and compare the current gestational age specific neonatal survival rates between Qatar and international benchmarks. An analytical comparative study at the Women's Hospital, Hamad Medical Corporation, Doha, Qatar, from 2003-2008. Six years' (2003-2008) gestational age specific neonatal mortality data were stratified for each completed week of gestation at birth, from 24 weeks until term. The data from World Health Statistics by WHO (2010), Vermont Oxford Network (VON, 2007) and National Statistics United Kingdom (2006) were used as international benchmarks for comparative analysis. A total of 82,002 babies were born during the study period. Qatar's neonatal mortality rate (NMR) dropped from 6/1000 in 2003 to 4.3/1000 in 2008. Gestational age specific survival rates in Qatar were comparable with international benchmarks; survival was better in Qatar for some gestational age groups (p=0.01), while survival of babies born after 32 weeks was better in the UK (p=0.01). The relative risk (RR) of death decreased with increasing gestational age. The current total and gestational age specific neonatal survival rates in the State of Qatar are comparable with international benchmarks. In Qatar, persistently high rates of low birth weight and lethal chromosomal and congenital anomalies significantly contribute towards neonatal mortality.

  20. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques.

  2. Neutron Reference Benchmark Field Specification: ACRR Free-Field Environment (ACRR-FF-CC-32-CL).

    Energy Technology Data Exchange (ETDEWEB)

    Vega, Richard Manuel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Parma, Edward J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Griffin, Patrick J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vehar, David W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity free-field reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.

  3. The edge- and basal-plane-specific electrochemistry of a single-layer graphene sheet

    Science.gov (United States)

    Yuan, Wenjing; Zhou, Yu; Li, Yingru; Li, Chun; Peng, Hailin; Zhang, Jin; Liu, Zhongfan; Dai, Liming; Shi, Gaoquan

    2013-01-01

    Graphene has a unique atom-thick two-dimensional structure and excellent properties, making it attractive for a variety of electrochemical applications, including electrosynthesis, electrochemical sensors or electrocatalysis, and energy conversion and storage. However, the electrochemistry of single-layer graphene has not yet been well understood, possibly due to the technical difficulties in handling individual graphene sheets. Here, we report the electrochemical behavior at single-layer graphene-based electrodes, comparing the basal plane of graphene to its edge. The graphene edge showed 4 orders of magnitude higher specific capacitance, a much faster electron transfer rate and stronger electrocatalytic activity than those of the graphene basal plane. A convergent diffusion effect was observed at the sub-nanometer-thick graphene edge electrode, accelerating the electrochemical reactions. Coupled with the high conductivity of a high-quality graphene basal plane, the graphene edge is an ideal electrode for electrocatalysis and for the storage of capacitive charges. PMID:23896697

  4. Benchmark Studies of Induced Radioactivity Produced in LHC Materials, Pt II Specific Activities

    International Nuclear Information System (INIS)

    Brugger, M.; Mayer, S.; Roesler, S.; Ulrici, L.; Khater, H.; Prinz, A.; Vincke, H.

    2006-01-01

    A new method to estimate remanent dose rates, to be used with the Monte Carlo code FLUKA, was benchmarked against measurements from an experiment performed at the CERN-EU high-energy reference field facility. An extensive collection of samples of different materials was placed downstream of and laterally to a copper target intercepting a positively charged mixed hadron beam with a momentum of 120 GeV/c. Emphasis was put on the reduction of uncertainties, such as careful monitoring of the irradiation parameters, the use of different instruments to measure dose rates, detailed elemental analyses of the irradiated materials and detailed simulations of the irradiation experiment. Measured and calculated dose rates are in good agreement.

  5. Parameterization of disorder predictors for large-scale applications requiring high specificity by using an extended benchmark dataset

    Directory of Open Access Journals (Sweden)

    Eisenhaber Frank

    2010-02-01

    Background: Algorithms designed to predict protein disorder play an important role in structural and functional genomics, as disordered regions have been reported to participate in important cellular processes. Consequently, several methods with different underlying principles for disorder prediction have been independently developed by various groups. For assessing their usability in automated workflows, we are interested in identifying parameter settings and threshold selections under which the performance of these predictors becomes directly comparable. Results: First, we derived a new benchmark set that accounts for different flavours of disorder, complemented with a similar amount of order annotation derived for the same protein set. We show that, using the recommended default parameters, the programs tested produce a wide range of predictions at different levels of specificity and sensitivity. We identify settings in which the different predictors have the same false positive rate. We assess conditions when sets of predictors can be run together to derive consensus or complementary predictions. This is useful in the framework of proteome-wide applications where high specificity is required, such as in our in-house sequence analysis pipeline and the ANNIE webserver. Conclusions: This work identifies parameter settings and thresholds for a selection of disorder predictors to produce comparable results at a desired level of specificity over a newly derived benchmark dataset that accounts equally for ordered and disordered regions of different lengths.
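
The core calibration idea, matching predictors at a common false positive rate on the order-annotated half of a benchmark, can be sketched as follows. The helper names are hypothetical; a real evaluation would feed in each predictor's raw scores on the annotated benchmark residues.

```python
import numpy as np

def threshold_for_fpr(negative_scores, target_fpr):
    """Choose a score cutoff so that a fraction target_fpr of the ordered
    (negative) benchmark residues is called disordered (false positives)."""
    return float(np.quantile(np.asarray(negative_scores), 1.0 - target_fpr))

def fpr_at(negative_scores, threshold):
    """Observed false positive rate: negatives scoring at or above the cutoff."""
    return float(np.mean(np.asarray(negative_scores) >= threshold))

# Two predictors with different score scales become comparable once each is
# calibrated to the same false positive rate (here 5%) on the negative set.
neg_a = np.linspace(0.0, 0.99, 100)   # hypothetical predictor A scores on ordered residues
thr_a = threshold_for_fpr(neg_a, 0.05)
```

At matched false positive rates, the predictors' sensitivities on the disordered half of the benchmark can then be compared directly, which is what makes consensus or complementary combinations meaningful.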

  6. Specification of phase 3 benchmark (Hex-Z heterogeneous and burnup calculation)

    International Nuclear Information System (INIS)

    Kim, Y.I.

    2002-01-01

    During the second RCM of the IAEA Co-ordinated Research Project 'Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects', the following items were identified as important. Heterogeneity will affect absolute core reactivity. Rod worths could be considerably reduced by heterogeneity effects, depending on their detailed design. Heterogeneity effects will affect the resonance self-shielding in the treatment of fuel Doppler, steel Doppler and sodium density effects. However, it was considered more important to concentrate on the sodium density effect in order to reduce the calculational effort required. It was also recognized that burnup effects will have an influence on fuel Doppler and sodium worths. A benchmark for the assessment of heterogeneity effects for Phase 3 was defined. It is to be performed for the Hex-Z model of the reactor only; no calculations will be performed for the R-Z model. For comparison with heterogeneous evaluations, the control rod worth will be calculated at the beginning of the equilibrium cycle, based on the homogeneous model. The definitions of rod raised and rod inserted for SHR are given, using the composition numbers.

  7. OECD/NRC Benchmark Based on NUPEC PWR Sub-channel and Bundle Test (PSBT). Volume I: Experimental Database and Final Problem Specifications

    International Nuclear Information System (INIS)

    Rubin, A.; Schoedel, A.; Avramova, M.; Utsuno, H.; Bajorek, S.; Velazquez-Lozada, A.

    2012-01-01

    The need to refine models for best-estimate calculations, based on good-quality experimental data, has been expressed in many recent meetings in the field of nuclear applications. The needs arising in this respect should not be limited to the currently available macroscopic methods but should be extended to next-generation analysis techniques that focus on more microscopic processes. One of the most valuable databases identified for the thermal-hydraulics modelling was developed by the Nuclear Power Engineering Corporation (NUPEC), Japan, which includes sub-channel void fraction and departure from nucleate boiling (DNB) measurements in a representative Pressurised Water Reactor (PWR) fuel assembly. Part of this database has been made available for this international benchmark activity entitled 'NUPEC PWR Sub-channel and Bundle Tests (PSBT) benchmark'. This international project has been officially approved by the Japanese Ministry of Economy, Trade, and Industry (METI), the US Nuclear Regulatory Commission (NRC) and endorsed by the OECD/NEA. The benchmark team has been organised based on the collaboration between Japan and the USA. A large number of international experts have agreed to participate in this programme. The fine-mesh high-quality sub-channel void fraction and departure from nucleate boiling data encourages advancement in understanding and modelling complex flow behaviour in real bundles. Considering that the present theoretical approach is relatively immature, the benchmark specification is designed so that it will systematically assess and compare the participants' analytical models on the prediction of detailed void distributions and DNB. The development of truly mechanistic models for DNB prediction is currently underway. The benchmark problem includes both macroscopic and microscopic measurement data. In this context, the sub-channel grade void fraction data are regarded as the macroscopic data and the digitised computer graphic images are the

  8. Benchmark studies of induced radioactivity produced in LHC materials, Part I: Specific activities.

    Science.gov (United States)

    Brugger, M; Khater, H; Mayer, S; Prinz, A; Roesler, S; Ulrici, L; Vincke, H

    2005-01-01

    Samples of materials which will be used in the LHC machine for shielding and construction components were irradiated in the stray radiation field of the CERN-EU high-energy reference field facility. After irradiation, the specific activities induced in the various samples were analysed with a high-precision gamma spectrometer at various cooling times, allowing identification of isotopes with a wide range of half-lives. Furthermore, the irradiation experiment was simulated in detail with the FLUKA Monte Carlo code. A comparison of measured and calculated specific activities shows good agreement, supporting the use of FLUKA for estimating the level of induced activity in the LHC.

  9. Benchmark studies of induced radioactivity produced in LHC materials, part I: Specific activities

    International Nuclear Information System (INIS)

    Brugger, M.; Khater, H.; Mayer, S.; Prinz, A.; Roesler, S.; Ulrici, L.; Vincke, H.

    2005-01-01

    Samples of materials which will be used in the LHC machine for shielding and construction components were irradiated in the stray radiation field of the CERN-EU high-energy reference field facility. After irradiation, the specific activities induced in the various samples were analysed with a high-precision gamma spectrometer at various cooling times, allowing identification of isotopes with a wide range of half-lives. Furthermore, the irradiation experiment was simulated in detail with the FLUKA Monte Carlo code. A comparison of measured and calculated specific activities shows good agreement, supporting the use of FLUKA for estimating the level of induced activity in the LHC. (authors)

  10. Comprehensive benchmarking reveals H2BK20 acetylation as a distinctive signature of cell-state-specific enhancers and promoters.

    Science.gov (United States)

    Kumar, Vibhor; Rayan, Nirmala Arul; Muratani, Masafumi; Lim, Stefan; Elanggovan, Bavani; Xin, Lixia; Lu, Tess; Makhija, Harshyaa; Poschmann, Jeremie; Lufkin, Thomas; Ng, Huck Hui; Prabhakar, Shyam

    2016-05-01

    Although over 35 different histone acetylation marks have been described, the overwhelming majority of regulatory genomics studies focus exclusively on H3K27ac and H3K9ac. In order to identify novel epigenomic traits of regulatory elements, we constructed a benchmark set of validated enhancers by performing 140 enhancer assays in human T cells. We tested 40 chromatin signatures on this unbiased enhancer set and identified H2BK20ac, a little-studied histone modification, as the most predictive mark of active enhancers. Notably, we detected a novel class of functionally distinct enhancers enriched in H2BK20ac but lacking H3K27ac, which was present in all examined cell lines and also in embryonic forebrain tissue. H2BK20ac was also unique in highlighting cell-type-specific promoters. In contrast, other acetylation marks were present in all active promoters, regardless of cell-type specificity. In stimulated microglial cells, H2BK20ac was more correlated with cell-state-specific expression changes than H3K27ac, with TGF-beta signaling decoupling the two acetylation marks at a subset of regulatory elements. In summary, our study reveals a previously unknown connection between histone acetylation and cell-type-specific gene regulation and indicates that H2BK20ac profiling can be used to uncover new dimensions of gene regulation. © 2016 Kumar et al.; Published by Cold Spring Harbor Laboratory Press.

  11. Pan-specific MHC class I predictors: A benchmark of HLA class I pan-specific prediction methods

    DEFF Research Database (Denmark)

    Zhang, Hao; Lundegaard, Claus; Nielsen, Morten

    2009-01-01

    not previously been compared using independent evaluation sets. Results: A diverse set of quantitative peptide binding affinity measurements was collected from IEDB, together with a large set of HLA class I ligands from the SYFPEITHI database. Based on these data sets, three different pan-specific HLA web...

  12. Benchmark Specification for an HTR Fuelled with Reactor-grade Plutonium (or Reactor-grade Pu/Th and U/Th). Proposal version 2

    International Nuclear Information System (INIS)

    Hosking, J.G.; Newton, T.D.; Morris, P.

    2007-01-01

    This benchmark proposal builds upon that specified in the NEA/NSC/DOC(2003)22 report. In addition to the three phases described in that report, another two phases have now been defined, and additional items for calculation have been added to the existing phases. It is intended that further items may be added to the benchmark after consultation with its participants. Although the benchmark is specifically designed to provide inter-comparisons for plutonium- and thorium-containing fuels, it is proposed that phases considering simple calculations for a uranium fuel cell and a uranium core be included. The purpose of these is to identify any increased uncertainties, relative to uranium fuel, associated with the lesser-known fuels to be investigated in the different phases of this benchmark. The first phase considers an infinite array of pebbles with uranium fuel. Phase 2 considers a similar array of pebbles but with plutonium fuel. Phase 3 continues the plutonium fuel inter-comparisons within the context of whole-core calculations. Calculations for Phase 4 are for a uranium-fuelled core. Phase 5 considers an infinite array of pebbles containing thorium. In setting up the benchmark, the requirements in the definition of the LEUPRO-12 PROTEUS benchmark have been considered. Participants were invited to submit deterministic results as well as, where appropriate, results from Monte Carlo calculations. Fundamental nuclear data, Avogadro's number, natural abundance data and atomic weights have been taken from the references indicated in the document.

  13. Using smooth sheets to describe groundfish habitat in Alaskan waters, with specific application to two flatfishes

    Science.gov (United States)

    Zimmermann, Mark; Reid, Jane A.; Golden, Nadine

    2016-01-01

    In this analysis we demonstrate how preferred fish habitat can be predicted and mapped for juveniles of two Alaskan groundfish species – Pacific halibut (Hippoglossus stenolepis) and flathead sole (Hippoglossoides elassodon) – at five sites (Kiliuda Bay, Izhut Bay, Port Dick, Aialik Bay, and the Barren Islands) in the central Gulf of Alaska. The method involves using geographic information system (GIS) software to extract appropriate information from National Ocean Service (NOS) smooth sheets that are available from NGDC (the National Geophysical Data Center). These smooth sheets are highly detailed charts that include more soundings, substrates, shoreline and feature information than the more commonly-known navigational charts. By bringing the information from smooth sheets into a GIS, a variety of surfaces, such as depth, slope, rugosity and mean grain size were interpolated into raster surfaces. Other measurements such as site openness, shoreline length, proportion of bay that is near shore, areas of rocky reefs and kelp beds, water volumes, surface areas and vertical cross-sections were also made in order to quantify differences between the study sites. Proper GIS processing also allows linking the smooth sheets to other data sets, such as orthographic satellite photographs, topographic maps and precipitation estimates from which watersheds and runoff can be derived. This same methodology can be applied to larger areas, taking advantage of these free data sets to describe predicted groundfish essential fish habitat (EFH) in Alaskan waters.
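    The raster derivations described above can be illustrated with a short sketch. This is a minimal example, assuming a NumPy depth grid with uniform cell spacing; it is not the authors' GIS toolchain, and the function name is ours:

```python
import numpy as np

def slope_degrees(depth, cell_size=1.0):
    """Approximate the slope surface (in degrees) of a bathymetry raster.

    depth: 2D array of depths (m); cell_size: grid spacing (m).
    Illustrative stand-in for the GIS slope derivation in the study.
    """
    # Finite-difference gradients along rows (y) and columns (x).
    dzdy, dzdx = np.gradient(depth, cell_size)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

# A flat seafloor has zero slope everywhere.
flat = np.full((4, 4), 50.0)
print(slope_degrees(flat).max())  # -> 0.0
```

    The same pattern extends to rugosity or mean grain size once the soundings have been interpolated onto a regular grid.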

  14. Parallel β-sheet vibrational couplings revealed by 2D IR spectroscopy of an isotopically labeled macrocycle: quantitative benchmark for the interpretation of amyloid and protein infrared spectra.

    Science.gov (United States)

    Woys, Ann Marie; Almeida, Aaron M; Wang, Lu; Chiu, Chi-Cheng; McGovern, Michael; de Pablo, Juan J; Skinner, James L; Gellman, Samuel H; Zanni, Martin T

    2012-11-21

    Infrared spectroscopy is playing an important role in the elucidation of amyloid fiber formation, but the coupling models that link spectra to structure are not well tested for parallel β-sheets. Using a synthetic macrocycle that enforces a two-stranded parallel β-sheet conformation, we measured the lifetimes and frequencies of six combinations of doubly (13)C═(18)O labeled amide I modes using 2D IR spectroscopy. The average vibrational lifetime of the isotope-labeled residues was 550 fs. The frequencies of the labels ranged from 1585 to 1595 cm(-1), with the largest frequency shift occurring for in-register amino acids. The 2D IR spectra of the coupled isotope labels were calculated from molecular dynamics simulations of a series of macrocycle structures generated from replica exchange dynamics to fully sample the conformational distribution. The models used to simulate the spectra include through-space coupling, through-bond coupling, and local frequency shifts caused by environment electrostatics and hydrogen bonding. The calculated spectra predict the line widths and frequencies nearly quantitatively. Historically, the characteristic features of β-sheet infrared spectra have been attributed to through-space couplings such as transition dipole coupling. We find that frequency shifts of the local carbonyl groups due to nearest-neighbor couplings and environmental factors are more important, while the through-space couplings dictate the spectral intensities. As a result, the characteristic absorption spectra empirically used for decades to assign parallel β-sheet secondary structure arise because of a redistribution of oscillator strength, but the through-space couplings do not themselves dramatically alter the frequency distribution of eigenstates much beyond what already exists in random coil structures. Moreover, solvent-exposed residues have amide I bands with >20 cm(-1) line width. Narrower line widths indicate that the amide I backbone is solvent

  15. NUPEC BWR Full-size Fine-mesh Bundle Test (BFBT) Benchmark. Volume II: uncertainty and sensitivity analyses of void distribution and critical power - Specification

    International Nuclear Information System (INIS)

    Aydogan, F.; Hochreiter, L.; Ivanov, K.; Martin, M.; Utsuno, H.; Sartori, E.

    2010-01-01

    This report provides the specification for the uncertainty exercises of the international OECD/NEA, NRC and NUPEC BFBT benchmark problem including the elemental task. The specification was prepared jointly by Pennsylvania State University (PSU), USA and the Japan Nuclear Energy Safety (JNES) Organisation, in cooperation with the OECD/NEA and the Commissariat a l'energie atomique (CEA Saclay, France). The work is sponsored by the US NRC, METI-Japan, the OECD/NEA and the Nuclear Engineering Program (NEP) of Pennsylvania State University. This uncertainty specification covers the fourth exercise of Phase I (Exercise-I-4), and the third exercise of Phase II (Exercise II-3) as well as the elemental task. The OECD/NRC BFBT benchmark provides a very good opportunity to apply uncertainty analysis (UA) and sensitivity analysis (SA) techniques and to assess the accuracy of thermal-hydraulic models for two-phase flows in rod bundles. During the previous OECD benchmarks, participants usually carried out sensitivity analysis on their models for the specification (initial conditions, boundary conditions, etc.) to identify the most sensitive models or/and to improve the computed results. The comprehensive BFBT experimental database (NEA, 2006) leads us one step further in investigating modelling capabilities by taking into account the uncertainty analysis in the benchmark. The uncertainties in input data (boundary conditions) and geometry (provided in the benchmark specification) as well as the uncertainties in code models can be accounted for to produce results with calculational uncertainties and compare them with the measurement uncertainties. Therefore, uncertainty analysis exercises were defined for the void distribution and critical power phases of the BFBT benchmark. 
This specification is intended to provide definitions related to UA/SA methods, sensitivity/ uncertainty parameters, suggested probability distribution functions (PDF) of sensitivity parameters, and selected
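    An uncertainty-propagation exercise of the kind described can be sketched with a toy Monte Carlo: sample each uncertain boundary condition from its PDF, run the model, and collect statistics on the output. The surrogate model and distributions below are hypothetical, not part of the benchmark specification:

```python
import random
import statistics

def propagate(model, pdfs, n=5000, seed=0):
    """Monte Carlo uncertainty propagation: sample each input from its
    PDF, evaluate the model, and return the output mean and 1-sigma."""
    rng = random.Random(seed)
    outputs = [model(*(draw(rng) for draw in pdfs)) for _ in range(n)]
    return statistics.mean(outputs), statistics.stdev(outputs)

# Toy surrogate model: an output quantity that scales as power over flow.
model = lambda power, flow: power / flow
pdfs = [
    lambda rng: rng.gauss(3.0, 0.06),   # power (MW), ~2% at 1 sigma
    lambda rng: rng.gauss(10.0, 0.10),  # flow (kg/s), ~1% at 1 sigma
]
mean, sigma = propagate(model, pdfs)
print(f"output = {mean:.4f} +/- {sigma:.4f}")
```

    For this quasi-linear model, the relative output uncertainty comes out close to the quadrature sum of the input uncertainties, which is the behaviour a sampling-based UA exercise is expected to reproduce.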

  16. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    . The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  17. The Drill Down Benchmark

    NARCIS (Netherlands)

    P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel

    1998-01-01

    textabstractData Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark - defined here - provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications. It

  18. Benchmarking Tool Kit.

    Science.gov (United States)

    Canadian Health Libraries Association.

    Nine Canadian health libraries participated in a pilot test of the Benchmarking Tool Kit between January and April, 1998. Although the Tool Kit was designed specifically for health libraries, the content and approach are useful to other types of libraries as well. Used to its full potential, benchmarking can provide a common measuring stick to…

  19. Benchmark of European strategies of development of gas production and valorisation sectors. European inventory and synthetic sheets per country - Intermediate report

    International Nuclear Information System (INIS)

    Bastide, Guillaume

    2014-10-01

    After a European inventory and a discussion of the evolution of the number of methanization installations, of biogas production, and of the situation and main economic levers in European countries, this report proposes data and analysis sheets for Germany, Italy, the Netherlands, the United Kingdom and Sweden. For each of these countries, the document proposes a historical overview and some key figures on various aspects (types and number of installations, biogas production and valorisation, resources and processed quantities, technologies, digestates, costs of installation and financing modes, jobs and enterprises in the sector), a commentary on the national strategy (actors, strategy regarding renewable energy, climate protection and waste processing, regulatory and financial incentive measures, regulatory context and administrative management), and perspectives (maximum potential, development perspectives).

  20. Development of a model performance-based sign sheeting specification based on the evaluation of nighttime traffic signs using legibility and eye-tracker data.

    Science.gov (United States)

    2010-09-01

    This project focused on the evaluation of traffic sign sheeting performance in terms of meeting nighttime driver needs. The goal was to develop a nighttime driver needs specification for traffic signs. The researchers used nighttime sign legi...

  1. Simulation of nonlinear benchmarks and sheet metal forming processes using linear and quadratic solid–shell elements combined with advanced anisotropic behavior models

    Directory of Open Access Journals (Sweden)

    Wang Peng

    2016-01-01

    A family of prismatic and hexahedral solid-shell (SHB) elements, in linear and quadratic versions, is presented in this paper to model thin 3D structures. Based on reduced integration and special treatments to eliminate locking effects and to control spurious zero-energy modes, the SHB solid-shell elements are capable of modeling most thin 3D structural problems with only a single element layer, while accurately describing the various through-thickness phenomena. In this paper, the SHB elements are combined with fully 3D behavior models, including orthotropic elastic behavior for composite materials and anisotropic plastic behavior for metallic materials, which allows describing the strain/stress state in the thickness direction, in contrast to traditional shell elements. All SHB elements are implemented into ABAQUS using both standard/quasi-static and explicit/dynamic solvers. Several benchmark tests have been conducted, first to assess the performance of the SHB elements in quasi-static and dynamic analyses. Then, deep drawing of a hemispherical cup is performed to demonstrate the capabilities of the SHB elements in handling various types of nonlinearities (large displacements and rotations, anisotropic plasticity, and contact). Compared to classical ABAQUS solid and shell elements, the results given by the SHB elements show good agreement with the reference solutions.

  2. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    The term benchmarking is encountered in the implementation of total quality management (TQM, termed holistic quality management in Indonesian), because benchmarking is a tool for finding ideas and learning from other libraries. Benchmarking is a systematic and continuous process of measuring and comparing an organization's business processes, in order to obtain information that can help the organization improve its performance.

  3. Improved perception of communication and compliance with a revised, intensive care unit-specific bedside communication sheet.

    Science.gov (United States)

    Aponte-Patel, Linda; Sen, Anita

    2015-01-01

    Although many pediatric intensive care units (PICUs) use bedside communication sheets (BCSs) to highlight daily goals, the optimal format is unknown. A site-specific BCS could improve both PICU communication and compliance in completing the BCS. Via written survey, PICU staff at an academic children's hospital provided recommendations for improving and revising an existing BCS. Pre- and post-BCS revision, PICU staff were polled regarding PICU communication and BCS effectiveness, and daily compliance in completing the BCS was monitored. After implementation of the revised BCS, staff reporting "excellent" or "very good" day-to-day communication within the PICU increased from 57% to 77% (P = .02). Compliance in completing the BCS also increased significantly (75% vs 83%, P = .03). Introduction of a focused and concise BCS tailored to a specific PICU leads to improved perceptions of communication by PICU staff and increased compliance in completing the daily BCS. © The Author(s) 2014.

  4. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence technical efficiency.

  5. RUNE benchmarks

    DEFF Research Database (Denmark)

    Peña, Alfredo

    This report contains the description of a number of benchmarks with the purpose of evaluating flow models for near-shore wind resource estimation. The benchmarks are designed based on the comprehensive database of observations that the RUNE coastal experiment established from onshore lidar...

  6. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production-theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...... in order to obtain a unique selection...

  7. Benchmarking spliced alignment programs including Spaln2, an extended version of Spaln that incorporates additional species-specific features.

    Science.gov (United States)

    Iwata, Hiroaki; Gotoh, Osamu

    2012-11-01

    Spliced alignment plays a central role in the precise identification of eukaryotic gene structures. Even though many spliced alignment programs have been developed, recent rapid progress in DNA sequencing technologies demands further improvements in software tools. Benchmarking algorithms under various conditions is an indispensable task for the development of better software; however, there is a dire lack of appropriate datasets usable for benchmarking spliced alignment programs. In this study, we have constructed two types of datasets: simulated sequence datasets and actual cross-species datasets. The datasets are designed to correspond to various real situations, i.e. divergent eukaryotic species, different types of reference sequences, and the wide divergence between query and target sequences. In addition, we have developed an extended version of our program Spaln, which incorporates two additional features to the scoring scheme of the original version, and examined this extended version, Spaln2, together with the original Spaln and other representative aligners based on our benchmark datasets. Although the effects of the modifications are not individually striking, Spaln2 is consistently most accurate and reasonably fast in most practical cases, especially for plants and fungi and for increasingly divergent pairs of target and query sequences.
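    Accuracy in benchmarks of this kind is often scored at the exon level against a reference annotation. A minimal sketch of exact-match exon scoring follows; the function name is ours and this is not the paper's actual evaluation code:

```python
def exon_accuracy(reference, predicted):
    """Exact-match exon-level sensitivity and specificity for a spliced
    alignment. Exons are (start, end) coordinate tuples. Illustrative
    only; real evaluations may also credit partially overlapping exons."""
    ref, pred = set(reference), set(predicted)
    tp = len(ref & pred)                  # exons predicted exactly right
    return tp / len(ref), tp / len(pred)  # (sensitivity, specificity)

# 2 of 3 exons match exactly; the middle exon's end is off by 10 bp.
ref = [(100, 200), (300, 420), (500, 650)]
pred = [(100, 200), (300, 410), (500, 650)]
print(exon_accuracy(ref, pred))
```

    Averaging such scores over many query/target pairs at increasing divergence gives the kind of accuracy curves used to rank aligners.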

  8. Standard specification for tantalum and tantalum alloy plate, sheet, and strip. ASTM standard

    International Nuclear Information System (INIS)

    1998-09-01

    This specification is under the jurisdiction of ASTM Committee B-10 on Reactive and Refractory Metals and Alloys and is the direct responsibility of Subcommittee B10.03 on Niobium and Tantalum. Current edition approved May 10, 1998 and published September 1998. Originally published as B 708-82. Last previous edition was B 708-92

  9. WLUP benchmarks

    International Nuclear Information System (INIS)

    Leszczynski, Francisco

    2002-01-01

    The IAEA-WIMS Library Update Project (WLUP) is in its final stage, and the final library will be released in 2002. It is the result of research and development by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. The organization, naming conventions, contents and documentation of the WLUP benchmarks are presented, together with an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE for the analysis and plotting of results is described, and some examples are given. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)

  10. Benchmarks for Uncertainty Analysis in Modelling (UAM) for the Design, Operation and Safety Analysis of LWRs - Volume I: Specification and Support Data for Neutronics Cases (Phase I)

    International Nuclear Information System (INIS)

    Ivanov, K.; Avramova, M.; Kamerow, S.; Kodeli, I.; Sartori, E.; Ivanov, E.; Cabellos, O.

    2013-01-01

    The objective of the OECD LWR UAM activity is to establish an internationally accepted benchmark framework to compare, assess and further develop different uncertainty analysis methods associated with the design, operation and safety of LWRs. As a result, the LWR UAM benchmark will help to address current nuclear power generation industry and regulation needs and issues related to the practical implementation of risk-informed regulation. The realistic evaluation of consequences must be made with best-estimate coupled codes, but to be meaningful, such results should be supplemented by an uncertainty analysis. The use of coupled codes allows us to avoid unnecessary penalties due to incoherent approximations in the traditional decoupled calculations, and to obtain a more accurate evaluation of margins with regard to licensing limits. This becomes important for licensing power upgrades, improved fuel assembly and control rod designs, higher burn-up and other issues related to operating LWRs, as well as to the new Generation 3+ designs being licensed now (ESBWR, AP-1000, EPR-1600, etc.). Establishing an internationally accepted LWR UAM benchmark framework offers the possibility to accelerate the licensing process when using best-estimate methods. The proposed technical approach is to establish a benchmark for uncertainty analysis in best-estimate modelling and coupled multi-physics and multi-scale LWR analysis, using as a basis a series of well-defined problems with complete sets of input specifications and reference experimental data. The objective is to determine the uncertainty in LWR system calculations at all stages of coupled reactor physics/thermal-hydraulics calculations. The full chain of uncertainty propagation from basic data, engineering uncertainties, across different scales (multi-scale), and physics phenomena (multi-physics) will be tested on a number of benchmark exercises for which experimental data are available and for which the power plant details have been

  11. Neutron Reference Benchmark Field Specifications: ACRR Polyethylene-Lead-Graphite (PLG) Bucket Environment (ACRR-PLG-CC-32-CL).

    Energy Technology Data Exchange (ETDEWEB)

    Vega, Richard Manuel [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Parm, Edward J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Griffin, Patrick J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Vehar, David W. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the Polyethylene-Lead-Graphite (PLG) bucket reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum, based on MCNP6 calculations, is reported, together with a subject matter expert (SME) based covariance matrix for this “a priori” spectrum. The results of 37 integral dosimetry measurements in the neutron field are reported.

  12. Numisheet2005 Benchmark Analysis on Forming of an Automotive Underbody Cross Member: Benchmark 2

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao Jian

    2005-01-01

    This report presents an international cooperation benchmark effort focusing on simulations of a sheet metal stamping process. A forming process of an automotive underbody cross member using steel and aluminum blanks is used as a benchmark. Simulation predictions from each submission are analyzed via comparison with the experimental results. A brief summary of various models submitted for this benchmark study is discussed. Prediction accuracy of each parameter of interest is discussed through the evaluation of cumulative errors from each submission

  13. Benchmarking Using Basic DBMS Operations

    Science.gov (United States)

    Crolotte, Alain; Ghazal, Ahmad

    The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used these benchmarks to show the superiority and competitive edge of their products. Over time, however, TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and to compare two systems. Finally, we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
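    A common way to summarize a two-system comparison over a query set is the geometric mean of per-query speedups, which is robust to one query dominating. This is an illustrative sketch; the source does not state that XMarq defines exactly this metric:

```python
import math

def geomean_speedup(times_a, times_b):
    """Geometric-mean speedup of system B over system A across a query
    set; > 1.0 means B is faster on balance. Illustrative metric, not
    necessarily the one proposed by XMarq."""
    ratios = [a / b for a, b in zip(times_a, times_b)]
    return math.exp(sum(map(math.log, ratios)) / len(ratios))

# Hypothetical per-query runtimes (seconds) for two systems.
sys_a = [1.0, 2.0, 4.0]
sys_b = [0.5, 2.0, 2.0]
print(round(geomean_speedup(sys_a, sys_b), 3))  # -> 1.587
```

    Per-operation breakdowns (scans vs. joins vs. index access) can then use the same statistic restricted to each query subgroup.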

  14. [Do you mean benchmarking?].

    Science.gov (United States)

    Bonnet, F; Solignac, S; Marty, J

    2008-03-01

    The purpose of benchmarking is to establish improvement processes by comparing activities to quality standards. The proposed methodology is illustrated by benchmark business cases performed inside medical facilities on items such as nosocomial infections or the organization of surgery facilities. Moreover, the authors have built a specific graphic tool, enhanced with balanced scores and mappings, so that the comparison between different anesthesia and intensive care services that are willing to start an improvement program is easy and relevant. This ready-made application is all the more accurate as detailed tariffs of activities are implemented.

  15. Patient-specific IMRT verification using independent fluence-based dose calculation software: experimental benchmarking and initial clinical experience

    International Nuclear Information System (INIS)

    Georg, Dietmar; Stock, Markus; Kroupa, Bernhard; Olofsson, Joergen; Nyholm, Tufve; Ahnesjoe, Anders; Karlsson, Mikael

    2007-01-01

    Experimental methods are commonly used for patient-specific intensity-modulated radiotherapy (IMRT) verification. The purpose of this study was to investigate the accuracy and performance of independent dose calculation software (denoted as 'MUV' (monitor unit verification)) for patient-specific quality assurance (QA). 52 patients receiving step-and-shoot IMRT were considered. IMRT plans were recalculated by the treatment planning systems (TPS) in a dedicated QA phantom, in which an experimental 1D and 2D verification (0.3 cm³ ionization chamber; films) was performed. Additionally, an independent dose calculation was performed. The fluence-based algorithm of MUV accounts for collimator transmission, rounded leaf ends, tongue-and-groove effect, backscatter to the monitor chamber and scatter from the flattening filter. The dose calculation utilizes a pencil beam model based on a beam quality index. DICOM RT files from patient plans, exported from the TPS, were directly used as patient-specific input data in MUV. For composite IMRT plans, average deviations in the high-dose region between ionization chamber measurements and point dose calculations performed with the TPS and MUV were 1.6 ± 1.2% and 0.5 ± 1.1% (1 S.D.). The dose deviations between MUV and TPS slightly depended on the distance from the isocentre position. For individual intensity-modulated beams (367 in total), an average deviation of 1.1 ± 2.9% was determined between calculations performed with the TPS and with MUV, with maximum deviations up to 14%. However, absolute dose deviations were mostly less than 3 cGy. Based on the current results, we aim to apply a confidence limit of 3% (with respect to the prescribed dose) or 6 cGy for routine IMRT verification. For off-axis points at distances larger than 5 cm and for low-dose regions, we consider 5% dose deviation or 10 cGy acceptable. The time needed for an independent calculation compares very favourably with the net time for an experimental approach.
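    The routine acceptance rule quoted above (3% of the prescribed dose or 6 cGy, relaxed to 5% or 10 cGy for off-axis points and low-dose regions) reduces to a simple pass/fail check. The function below is our sketch of that rule, not code from MUV:

```python
def within_tolerance(measured, calculated, prescribed,
                     pct_limit=3.0, abs_limit_cgy=6.0):
    """Pass if the deviation is within pct_limit percent of the
    prescribed dose OR within abs_limit_cgy (all doses in cGy).
    Mirrors the 3%/6 cGy rule quoted in the abstract; the function
    itself is an illustrative sketch, not part of MUV."""
    dev = abs(measured - calculated)
    return dev <= prescribed * pct_limit / 100.0 or dev <= abs_limit_cgy

print(within_tolerance(200.0, 195.0, 200.0))             # 5 cGy dev  -> True
print(within_tolerance(200.0, 190.0, 200.0))             # 10 cGy dev -> False
print(within_tolerance(200.0, 190.0, 200.0, 5.0, 10.0))  # off-axis   -> True
```

    The "or" makes the check absolute-dose-limited at low prescribed doses and percentage-limited at high ones.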

  16. WWER-1000 Burnup Credit Benchmark (CB5)

    International Nuclear Information System (INIS)

    Manolova, M.A.

    2002-01-01

    In this paper the specification of the first phase (depletion calculations) of the WWER-1000 Burnup Credit Benchmark is given. The second phase, criticality calculations for the WWER-1000 fuel pin cell, will be specified after evaluation of the results obtained in the first phase. The proposed benchmark is a continuation of the WWER benchmark activities in this field. (Author)

  17. Decontamination sheet

    International Nuclear Information System (INIS)

    Hirose, Emiko; Kanesaki, Ken.

    1995-01-01

    The decontamination sheet of the present invention is formed by applying an adhesive to one surface of a polymer sheet and releasably attaching a plurality of curing sheets. In addition, perforated lines are formed on the sheet, and a decontaminating agent is incorporated in the adhesive. This reduces the number of curing operation steps when decontamination work on radiation-contaminated equipment is performed in multiple steps; it also reduces the amount of cured-sheet waste and the operator's exposure, improves the efficiency of the curing operation, and prevents the spread of contamination. (T.M.)

  18. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    The objective of this study was to identify usage of foodservice performance measures, important activities in foodservice benchmarking, and benchmarking attitudes, beliefs, and practices by foodservice directors...

  19. Energy-Based Yield Criteria for Orthotropic Materials, Exhibiting Strength-Differential Effect. Specification for Sheets under Plane Stress State

    Directory of Open Access Journals (Sweden)

    Szeptyński P.

    2017-06-01

    A general proposition of an energy-based limit condition for anisotropic materials exhibiting the strength-differential effect (SDE), based on spectral decomposition of elasticity tensors and the use of scaling pressure-dependent functions, is specified for the case of orthotropic materials. A detailed algorithm (based on classical solutions of cubic equations) for the determination of the elastic eigenstates and eigenvalues of the orthotropic stiffness tensor is presented. A yield condition is formulated for both the two-dimensional and the three-dimensional case. Explicit formulas, based on simple strength tests, are derived for the parameters of the criterion in the plane case. The application of both criteria to the description of yielding and plastic deformation of metal sheets is discussed in detail. The plane-case criterion is verified with experimental results from the literature.

  20. Clean Energy Manufacturing Analysis Center Benchmark Report: Framework and Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-05-23

    This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.

  1. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    … in order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable...... tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly ‘sustainable transport......’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly policies are not directly comparable across space and context. For these reasons attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which...

  2. Benchmarking and Performance Measurement.

    Science.gov (United States)

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  3. Benchmarking in the Netherlands

    International Nuclear Information System (INIS)

    1999-01-01

    In two articles an overview is given of the activities in the Dutch industry and energy sector with respect to benchmarking. In benchmarking, the operational processes of competing businesses are compared in order to improve one's own performance. Benchmark covenants on energy efficiency between the Dutch government and industrial sectors are contributing to a growing number of benchmark surveys in the energy-intensive industry in the Netherlands. However, some doubt the effectiveness of the benchmark studies.

  4. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    output includes a plot of the MAAP calculation and the plant data. For the large integral experiments, a major part, but not all, of the MAAP code is needed. These use an experiment-specific benchmark routine that includes all of the information and boundary conditions for performing the calculation, as well as information on which parts of MAAP are unnecessary and can be 'bypassed'. Lastly, the separate-effects tests only require a few MAAP routines. These are exercised through their own specific benchmark routine that includes the experiment-specific information and boundary conditions. This benchmark routine calls the appropriate MAAP routines from the source code, performs the calculations, including integration where necessary, and provides the comparison between the MAAP calculation and the experimental observations. (author)

  5. Global ice sheet modeling

    International Nuclear Information System (INIS)

    Hughes, T.J.; Fastook, J.L.

    1994-05-01

    The University of Maine conducted this study for Pacific Northwest Laboratory (PNL) as part of a global climate modeling task for site characterization of the potential nuclear waste repository site at Yucca Mountain, NV. The purpose of the study was to develop a global ice sheet dynamics model that will forecast the three-dimensional configuration of global ice sheets for specific climate change scenarios. The objective of the third (final) year of the work was to produce ice sheet data for glaciation scenarios covering the next 100,000 years. This was accomplished using both the map-plane and flowband solutions of our time-dependent, finite-element gridpoint model. The theory and equations used to develop the ice sheet models are presented. Three future scenarios were simulated by the model and the results are discussed.

  6. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks are an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  7. Benchmarking for Higher Education.

    Science.gov (United States)

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  8. Standard specification for cobalt-chromium-nickel-molybdenum-tungsten alloy (UNS R31233) plate, sheet and strip. ASTM standard

    International Nuclear Information System (INIS)

    1998-09-01

    This specification is under the jurisdiction of ASTM Committee B-2 on Nonferrous Metals and Alloys and is the direct responsibility of Subcommittee B02.07 on Refined Nickel and Cobalt, and Alloys Containing Nickel or Cobalt or Both as Principal Constituents. Current edition approved Apr. 10, 1998 and published September 1998. Originally published as B 818-91. Last previous edition was B 818-93

  9. Benchmarking multimedia performance

    Science.gov (United States)

    Zandi, Ahmad; Sudharsanan, Subramania I.

    1998-03-01

    With the introduction of faster processors and special instruction sets tailored to multimedia, a number of exciting applications are now feasible on the desktop. Among these is DVD playback, consisting, among other things, of MPEG-2 video and Dolby Digital audio or MPEG-2 audio. Other multimedia applications such as video conferencing and speech recognition are also becoming popular on computer systems. In view of this tremendous interest in multimedia, a group of major computer companies has formed the Multimedia Benchmarks Committee as part of the Standard Performance Evaluation Corp. to address the performance issues of multimedia applications. The approach is multi-tiered, with three tiers of fidelity from minimal to fully compliant. In each case the fidelity of the bitstream reconstruction as well as the quality of the video or audio output are measured and the system is classified accordingly. In the next step the performance of the system is measured. In many multimedia applications, such as DVD playback, the application needs to run at a specific rate. In this case the measurement of the excess processing power makes all the difference. All of this makes a system-level, application-based multimedia benchmark very challenging. Several ideas and methodologies for each aspect of the problem will be presented and analyzed.
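
    The "excess processing power" measurement described above can be sketched as a headroom figure: how much of each frame period is left over once a frame is processed at the required rate. Function names and timing values here are illustrative, not from the SPEC multimedia benchmark:

```python
def processing_headroom(decode_times_s, target_fps):
    """Fraction of each frame period left over after processing.

    A real-rate benchmark passes only if every frame finishes within
    its period; the leftover margin (headroom) is the figure of merit.
    Returns None if the target rate cannot be sustained.
    """
    period = 1.0 / target_fps
    worst = max(decode_times_s)
    if worst > period:
        return None
    return (period - worst) / period

# Hypothetical per-frame decode times for a 30 fps stream (period = 33.3 ms):
times = [0.020, 0.024, 0.022, 0.025]
print(round(processing_headroom(times, 30.0), 2))   # 0.25
```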

  10. A benchmarking study

    Directory of Open Access Journals (Sweden)

    H. Groessing

    2015-02-01

    A benchmark study of permeability measurement is presented. Past studies by other research groups, which focused on the reproducibility of 1D permeability measurements, showed high standard deviations of the obtained permeability values (25%), even though a defined test rig with required specifications was used. Within this study, the reproducibility of measurements made with a capacitive in-plane permeability testing system was benchmarked by comparing results from two research sites using this technology. The reproducibility was compared using a glass fibre woven textile and a carbon fibre non-crimped fabric (NCF). These two material types were considered because of the different electrical properties of glass and carbon with respect to the dielectric capacitive sensors of the permeability measurement systems. In order to determine the unsaturated permeability characteristics as a function of fibre volume content, the measurements were executed at three different fibre volume contents with five repetitions each. It was found that the stability and reproducibility of the presented in-plane permeability measurement system are very good in the case of the glass fibre woven textiles. This is true both for the comparison of the repeated measurements and for the comparison between the two different permeameters. These positive results were confirmed by comparison with permeability values for the same textile obtained with an older-generation permeameter applying the same measurement technology. It was also shown that a correct determination of the grammage and the material density is crucial for a correct correlation of measured permeability values and fibre volume contents.
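
    The reproducibility figure quoted above (a 25% standard deviation in the earlier 1D studies) is a coefficient of variation across repeated measurements. A minimal sketch of that statistic, with made-up permeability values rather than the study's data:

```python
import statistics

def relative_std_percent(values):
    """Coefficient of variation (sample standard deviation / mean),
    in percent: the reproducibility measure discussed for
    repeated permeability measurements."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical permeability repetitions (m^2) at one fibre volume content:
k_reps = [2.1e-10, 2.3e-10, 2.0e-10, 2.2e-10, 2.4e-10]
print(f"{relative_std_percent(k_reps):.1f} %")   # 7.2 %
```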

  11. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other.The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  12. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    In the face of global competition and rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution’s competitive position and to learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. The thorough insight into benchmarking applications enabled developing a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature and the experience of the author from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  13. MOx Depletion Calculation Benchmark

    International Nuclear Information System (INIS)

    San Felice, Laurence; Eschbach, Romain; Dewi Syarifah, Ratna; Maryam, Seif-Eddine; Hesketh, Kevin

    2016-01-01

    Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Scientific Issues of Reactor Systems (WPRS) has been established to study the reactor physics, fuel performance, radiation transport and shielding, and the uncertainties associated with modelling of these phenomena in present and future nuclear power systems. The WPRS has different expert groups to cover a wide range of scientific issues in these fields. The Expert Group on Reactor Physics and Advanced Nuclear Systems (EGRPANS) was created in 2011 to perform specific tasks associated with reactor physics aspects of present and future nuclear power systems. EGRPANS provides expert advice to the WPRS and the nuclear community on the development needs (data and methods, validation experiments, scenario studies) for different reactor systems and also provides specific technical information regarding: core reactivity characteristics, including fuel depletion effects; core power/flux distributions; core dynamics and reactivity control. In 2013 EGRPANS published a report that investigated fuel depletion effects in a Pressurised Water Reactor (PWR). This was entitled 'International Comparison of a Depletion Calculation Benchmark on Fuel Cycle Issues' NEA/NSC/DOC(2013) that documented a benchmark exercise for UO2 fuel rods. This report documents a complementary benchmark exercise that focused on PuO2/UO2 Mixed Oxide (MOX) fuel rods. The results are especially relevant to the back-end of the fuel cycle, including irradiated fuel transport, reprocessing, interim storage and waste repository. Saint-Laurent B1 (SLB1) was the first French reactor to use MOX assemblies. SLB1 is a 900 MWe PWR, with 30% MOX fuel loading. The standard MOX assemblies, used in the Saint-Laurent B1 reactor, include three zones with different plutonium enrichments: high Pu content (5.64%) in the center zone, medium Pu content (4.42%) in the intermediate zone and low Pu content (2.91%) in the peripheral zone.

  14. Neutron Reference Benchmark Field Specification: ACRR 44 Inch Lead-Boron (LB44) Bucket Environment (ACRR-LB44-CC-32-CL).

    Energy Technology Data Exchange (ETDEWEB)

    Vega, Richard Manuel [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Parma, Edward J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Griffin, Patrick J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Vehar, David W. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the 44 inch Lead-Boron (LB44) bucket reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.
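
    A typical use of such a reference field is to fold the reported "a priori" spectrum with a dosimeter's activation cross section to predict the measured integral reaction rate. A minimal group-wise sketch, with illustrative numbers rather than the ACRR-LB44 evaluation:

```python
def predicted_reaction_rate(flux_per_group, xs_per_group):
    """Fold a group-wise neutron spectrum with a group-averaged
    activation cross section: R = sum_i phi_i * sigma_i.

    The group structure, fluxes and cross sections used below are
    illustrative only, not the evaluated ACRR-LB44 spectrum.
    """
    if len(flux_per_group) != len(xs_per_group):
        raise ValueError("group structures must match")
    return sum(p * s for p, s in zip(flux_per_group, xs_per_group))

# 3-group toy example: flux in n/cm^2/s, cross sections in cm^2
phi = [1.0e12, 5.0e11, 1.0e11]
sigma = [0.1e-24, 1.0e-24, 50.0e-24]
rate = predicted_reaction_rate(phi, sigma)   # reactions per atom per second
```

    Comparing such predicted rates with the 31 measured integral activities is what drives the spectrum-adjustment exercises that REAL-2016 supports.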

  15. Numisheet2005 Benchmark Analysis on Forming of an Automotive Deck Lid Inner Panel: Benchmark 1

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao Jian

    2005-01-01

    Numerical simulation of sheet metal forming processes has been a very challenging topic in industry. Many computer codes and modeling techniques exist today; however, there are many unknowns affecting the prediction accuracy. Systematic benchmark tests are needed to accelerate future implementations and to provide a reference. This report presents an international cooperative benchmark effort for an automotive deck lid inner panel. Predictions from simulations are analyzed and discussed against the corresponding experimental results. Correlations between the accuracies of the parameters of interest are discussed in this report.

  16. Benchmarks for dynamic multi-objective optimisation

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    When algorithms solve dynamic multi-objective optimisation problems (DMOOPs), benchmark functions should be used to determine whether the algorithm can overcome specific difficulties that can occur in real-world problems. However, for dynamic multi...

  17. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.

    1991-01-01

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems

  18. Benchmarking Academic Anatomic Pathologists

    Directory of Open Access Journals (Sweden)

    Barbara S. Ducatman MD

    2016-10-01

    The most common benchmarks for faculty productivity are derived from the Medical Group Management Association (MGMA) or Vizient-AAMC Faculty Practice Solutions Center® (FPSC) databases. The Association of Pathology Chairs has also collected similar survey data for several years. We examined the Association of Pathology Chairs annual faculty productivity data and compared it with MGMA and FPSC data to understand the value, inherent flaws, and limitations of benchmarking data. We hypothesized that the variability in calculated faculty productivity is due to the type of practice model and clinical effort allocation. Data from the Association of Pathology Chairs survey on 629 surgical pathologists and/or anatomic pathologists from 51 programs were analyzed. From a review of service assignments, we were able to assign each pathologist to a specific practice model: general anatomic pathology/surgical pathology, one or more subspecialties, or a hybrid of the two models. There were statistically significant differences among academic ranks and practice types. When we analyzed our data using each organization’s methods, the median results for the general anatomic pathology/surgical pathology practice model were quite close to the MGMA and FPSC results for anatomic and/or surgical pathology. Both MGMA and FPSC data exclude a significant proportion of academic pathologists with clinical duties. We used the more inclusive FPSC definition of clinical “full-time faculty” (0.60 clinical full-time equivalent and above). The correlation between clinical full-time equivalent effort allocation, annual days on service, and annual work relative value unit productivity was poor. This study demonstrates that effort allocations are variable across academic departments of pathology and do not correlate well with either work relative value unit effort or reported days on service. Although the Association of Pathology Chairs–reported median work relative
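
    The "poor correlation" finding refers to a standard product-moment correlation between effort and output measures. A minimal sketch with hypothetical faculty data (the study's own data are not reproduced here):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length
    series, the kind of statistic used to relate clinical FTE,
    days on service and wRVU productivity."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical faculty: clinical FTE vs annual days on service
cfte = [0.6, 0.7, 0.8, 0.9, 1.0]
days = [90, 180, 110, 200, 120]
print(round(pearson_r(cfte, days), 2))   # 0.27, i.e. a weak association
```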

  19. SP2Bench: A SPARQL Performance Benchmark

    Science.gov (United States)

    Schmidt, Michael; Hornung, Thomas; Meier, Michael; Pinkel, Christoph; Lausen, Georg

    A meaningful analysis and comparison of both existing storage schemes for RDF data and evaluation approaches for SPARQL queries necessitates a comprehensive and universal benchmark platform. We present SP2Bench, a publicly available, language-specific performance benchmark for the SPARQL query language. SP2Bench is settled in the DBLP scenario and comprises a data generator for creating arbitrarily large DBLP-like documents and a set of carefully designed benchmark queries. The generated documents mirror vital key characteristics and social-world distributions encountered in the original DBLP data set, while the queries implement meaningful requests on top of this data, covering a variety of SPARQL operator constellations and RDF access patterns. In this chapter, we discuss requirements and desiderata for SPARQL benchmarks and present the SP2Bench framework, including its data generator, benchmark queries and performance metrics.

  20. Development of a model performance-based sign sheeting specification based on the evaluation of nighttime traffic signs using legibility and eye-tracker data : data and analyses.

    Science.gov (United States)

    2010-09-01

    This report presents data and technical analyses for Texas Department of Transportation Project 0-5235. This project focused on the evaluation of traffic sign sheeting performance in terms of meeting nighttime driver needs. The goal was to de...

  1. Prismatic Core Coupled Transient Benchmark

    International Nuclear Information System (INIS)

    Ortensi, J.; Pope, M.A.; Strydom, G.; Sen, R.S.; DeHart, M.D.; Gougar, H.D.; Ellis, C.; Baxter, A.; Seker, V.; Downar, T.J.; Vierow, K.; Ivanov, K.

    2011-01-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  2. Benchmarking af kommunernes sagsbehandling

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007 onwards, the Danish National Social Appeals Board (Ankestyrelsen) is to carry out benchmarking of the quality of the municipalities' case processing. The purpose of the benchmarking is to develop the design of the practice reviews with a view to better follow-up, and to improve the municipalities' case processing. This working paper discusses methods for benchmarking...

  3. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...
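
    In the special case of one input and one output under constant returns to scale, the nonparametric (DEA) efficiency score mentioned above reduces to each unit's output/input ratio normalised by the best observed ratio, and the improvement potential is one minus the score. A minimal sketch with hypothetical units (the general multi-input, multi-output case requires linear programming):

```python
def dea_crs_scores(inputs, outputs):
    """Constant-returns DEA efficiency for a single input and a single
    output: each unit's output/input ratio divided by the best ratio.
    A score of 1.0 marks the benchmark unit on the efficiency frontier."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical units: input = staff hours, output = cases handled
scores = dea_crs_scores([100, 120, 80], [50, 48, 44])
# improvement potential for each unit = 1 - score
print([round(s, 2) for s in scores])   # [0.91, 0.73, 1.0]
```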

  4. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views...... are put to the test. The first is a reformist benchmarking cycle where organisations defer to experts to create a benchmark that conforms with the broader system of politico-economic norms. The second is a revolutionary benchmarking cycle driven by expert-activists that seek to contest strong vested...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  5. EGS4 benchmark program

    International Nuclear Information System (INIS)

    Yasu, Y.; Hirayama, H.; Namito, Y.; Yashiro, S.

    1995-01-01

    This paper proposes the EGS4 Benchmark Suite, which consists of three programs called UCSAMPL4, UCSAMPL4I and XYZDOS. The paper also evaluates optimization methods for the benchmark suite on recent RISC/UNIX systems, such as those from IBM, HP, DEC, Hitachi and Fujitsu. When particular compiler options and math libraries were included in the evaluation process, systems performed significantly better. The observed performance of some of the RISC/UNIX systems was beyond that of some so-called mainframes from IBM, Hitachi or Fujitsu. The computer performance of the EGS4 Code System on an HP9000/735 (99MHz) was defined to be one EGS4 Unit. The EGS4 Benchmark Suite was also run on various PCs, such as Pentiums, the i486, the DEC Alpha and so forth. The performance of recent fast PCs reaches that of recent RISC/UNIX systems. The benchmark programs have also been evaluated for correlation with an industry benchmark program, namely SPECmark. (author)
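
    The "EGS4 Unit" defined above is simply runtime relative to the HP9000/735 reference machine, so a system's score is the reference runtime divided by its own runtime. A sketch with made-up runtimes, since the paper's timings are not reproduced here:

```python
def egs4_units(runtime_s, reference_runtime_s):
    """Performance in 'EGS4 Units': the HP9000/735 (99 MHz) reference
    runtime divided by the runtime measured on the system under test.
    A faster system therefore scores above 1.0."""
    return reference_runtime_s / runtime_s

ref = 3600.0                     # hypothetical reference runtime, seconds
print(egs4_units(1800.0, ref))   # 2.0  (twice as fast as the reference)
print(egs4_units(7200.0, ref))   # 0.5  (half the reference speed)
```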

  6. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
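
    The code-verification practice recommended here, comparing against manufactured or classical analytical solutions, usually reports an observed order of accuracy from the errors on two grid resolutions. A minimal sketch using the analytical solution sin(x) to verify a second-order central difference (a toy stand-in for the paper's general recommendations):

```python
import math

def observed_order(err_coarse, err_fine, refinement_ratio):
    """Observed order of accuracy p from errors on two grids:
    p = log(e_coarse / e_fine) / log(r)."""
    return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

def second_derivative_error(h, x=1.0):
    """Error of the central-difference second derivative of sin(x),
    whose exact value is -sin(x)."""
    approx = (math.sin(x + h) - 2.0 * math.sin(x) + math.sin(x - h)) / h**2
    return abs(approx - (-math.sin(x)))

e_coarse = second_derivative_error(0.1)
e_fine = second_derivative_error(0.05)
p = observed_order(e_coarse, e_fine, 2.0)
# p comes out close to 2: the scheme verifies at its formal second order.
```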

  7. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  8. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the
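The code-verification idea described in this abstract, comparing computed solutions against manufactured or classical analytical solutions, is usually quantified by the observed order of accuracy: if the discretization error behaves like C·h^p, the exponent p can be estimated from errors measured on two successively refined grids. A minimal sketch, with invented error values for illustration only:

```python
import math

def observed_order(err_coarse, err_fine, refinement_ratio):
    """Observed order of accuracy p from errors on two grids.

    For a scheme of formal order p, err ~ C * h**p, so
    p = ln(err_coarse / err_fine) / ln(r), with r = h_coarse / h_fine.
    """
    return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

# Hypothetical errors against a manufactured solution, measured on
# grids with spacing h and h/2 (refinement ratio r = 2).
p = observed_order(4.0e-3, 1.0e-3, 2.0)
print(round(p, 2))  # a second-order scheme should show p close to 2
```

A scheme whose observed order matches its formal order on such a benchmark passes this particular verification test; a mismatch points to a coding or discretization error.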

  9. A Seafloor Benchmark for 3-dimensional Geodesy

    Science.gov (United States)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark, a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently, models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  10. Benchmark Imagery FY11 Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pope, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-06-14

    This report details the work performed in FY11 under project LL11-GS-PD06, “Benchmark Imagery for Assessing Geospatial Semantic Extraction Algorithms.” The original LCP for the Benchmark Imagery project called for creating a set of benchmark imagery for verifying and validating algorithms that extract semantic content from imagery. More specifically, the first year was slated to deliver real imagery that had been annotated, the second year to deliver real imagery that had composited features, and the final year was to deliver synthetic imagery modeled after the real imagery.

  11. Benchmarking and the laboratory

    Science.gov (United States)

    Galloway, M; Nadin, L

    2001-01-01

    This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. Key Words: benchmarking • pathology PMID:11477112

  12. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.

    1980-02-01

Shielding benchmark problems, prepared by the Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design of the Atomic Energy Society of Japan, were compiled by the Shielding Laboratory of the Japan Atomic Energy Research Institute. Fourteen new shielding benchmark problems are presented, in addition to the twenty-one problems already proposed, for evaluating the calculational algorithm and accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method, and for evaluating the nuclear data used in the codes. The present benchmark problems principally address the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  13. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

Ecological risks of environmental contaminants are evaluated using a two-tiered process. In the first tier, a screening assessment is performed in which concentrations of contaminants in the environment are compared to no-observed-adverse-effects-level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) presumed to be nonhazardous to the biota. While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest-observed-adverse-effects-level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
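The first-tier screening described in this abstract amounts to a threshold comparison: a contaminant whose media concentration exceeds its NOAEL-based benchmark is retained as a contaminant of potential concern (COPC). A minimal sketch; the chemical names, benchmark values and measured concentrations below are invented for illustration:

```python
# NOAEL-based benchmarks and measured media concentrations (mg/L);
# all values here are hypothetical, for illustration only.
benchmarks = {"cadmium": 0.005, "zinc": 0.12, "toluene": 0.9}
measured = {"cadmium": 0.011, "zinc": 0.04, "toluene": 1.3}

# Tier 1: retain as contaminants of potential concern (COPCs) any
# chemical whose measured concentration exceeds its benchmark.
copcs = sorted(c for c, conc in measured.items() if conc > benchmarks[c])
print(copcs)  # ['cadmium', 'toluene']
```

Chemicals below their benchmarks (zinc here) are screened out; the retained COPCs would proceed to the second-tier, weight-of-evidence assessment.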

  14. Benchmarking Nuclear Power Plants

    International Nuclear Information System (INIS)

    Jakic, I.

    2016-01-01

One of the main tasks an owner has is to keep the business competitive on the market while delivering its product. Owning a nuclear power plant bears the same (or an even more complex and stern) responsibility due to safety risks and costs. In the past, nuclear power plant managements could (partly) ignore profit, or it was simply expected and to some degree assured through the various regulatory processes governing electricity rate design. It is now obvious that, with deregulation, utility privatization and a competitive electricity market, the key measures of success used at nuclear power plants must include traditional metrics of a successful business (return on investment, earnings and revenue generation) as well as those of plant performance, safety and reliability. In order to analyze the business performance of a (specific) nuclear power plant, benchmarking, as a well-established concept and usual method, was used. The domain was conservatively designed, with a well-adjusted framework, but the results still have limited application due to many differences, gaps and uncertainties. (author).

  15. Benchmarking specialty hospitals, a scoping review on theory and practice.

    Science.gov (United States)

    Wind, A; van Harten, W H

    2017-04-04

Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category, or dealt with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into: pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was just described or whether quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model used benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking for improving quality in specialty hospitals, robust and structured designs are needed, including a follow-up to check whether the benchmark study has led to improvements.

  16. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  17. Benchmarking Swiss electricity grids

    International Nuclear Information System (INIS)

    Walti, N.O.; Weber, Ch.

    2001-01-01

This extensive article describes a pilot benchmarking project, initiated by the Swiss Association of Electricity Enterprises, that assessed 37 Swiss utilities. The data collected from these utilities on a voluntary basis included data on technical infrastructure, investments and operating costs. These various factors are listed and discussed in detail. The assessment methods and rating mechanisms that provided the benchmarks are discussed, and the results of the pilot study are presented; these are to form the basis of benchmarking procedures for the grid regulation authorities under Switzerland's planned electricity market law. Examples of the practical use of the benchmarking methods are given, and cost-efficiency questions still open in the area of investment and operating costs are listed. Prefaces by the Swiss Association of Electricity Enterprises and the Swiss Federal Office of Energy complete the article

  18. Financial Integrity Benchmarks

    Data.gov (United States)

City of Jackson, Mississippi — This data compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings....

  19. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    .... The design of this study included two parts: (1) eleven expert panelists involved in a Delphi technique to identify and rate importance of foodservice performance measures and rate the importance of benchmarking activities, and (2...

  20. MFTF TOTAL benchmark

    International Nuclear Information System (INIS)

    Choy, J.H.

    1979-06-01

    A benchmark of the TOTAL data base management system as applied to the Mirror Fusion Test Facility (MFTF) data base was implemented and run in February and March of 1979. The benchmark was run on an Interdata 8/32 and involved the following tasks: (1) data base design, (2) data base generation, (3) data base load, and (4) develop and implement programs to simulate MFTF usage of the data base

  1. Accelerator shielding benchmark problems

    International Nuclear Information System (INIS)

    Hirayama, H.; Ban, S.; Nakamura, T.

    1993-01-01

Accelerator shielding benchmark problems, prepared by the Working Group on Accelerator Shielding in the Research Committee on Radiation Behavior of the Atomic Energy Society of Japan, were compiled by the Radiation Safety Control Center of the National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes, and the nuclear data used in the codes. (author)

  2. Shielding benchmark problems

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Kawai, Masayoshi; Nakazawa, Masaharu.

    1978-09-01

Shielding benchmark problems were prepared by the Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design of the Atomic Energy Society of Japan, and compiled by the Shielding Laboratory of the Japan Atomic Energy Research Institute. Twenty-one kinds of shielding benchmark problems are presented for evaluating the calculational algorithm and the accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method, and for evaluating the nuclear data used in the codes. (author)

  3. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

Tourism development has an irreplaceable role in the regional policy of almost all countries, owing to its undeniable economic, social and environmental benefits for the local population. Tourist destinations compete for visitors in the tourism market and consequently enter a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and of the final strategies is a key factor of competitiveness. Even though tourism is not a typical field for benchmarking methods, such approaches can be applied successfully. The paper focuses on a key phase of the benchmarking process: the search for suitable benchmarking partners. The partners are selected against general requirements that ensure the quality of strategies; from these, specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies from regions in the Czech Republic, Slovakia and Great Britain, thereby validating the selected criteria in an international setting. This makes it possible to identify strengths and weaknesses of the selected strategies and, at the same time, facilitates the discovery of suitable benchmarking partners.

  4. Benchmarking electricity distribution

    Energy Technology Data Exchange (ETDEWEB)

    Watts, K. [Department of Justice and Attorney-General, QLD (Australia)

    1995-12-31

    Benchmarking has been described as a method of continuous improvement that involves an ongoing and systematic evaluation and incorporation of external products, services and processes recognised as representing best practice. It is a management tool similar to total quality management (TQM) and business process re-engineering (BPR), and is best used as part of a total package. This paper discusses benchmarking models and approaches and suggests a few key performance indicators that could be applied to benchmarking electricity distribution utilities. Some recent benchmarking studies are used as examples and briefly discussed. It is concluded that benchmarking is a strong tool to be added to the range of techniques that can be used by electricity distribution utilities and other organizations in search of continuous improvement, and that there is now a high level of interest in Australia. Benchmarking represents an opportunity for organizations to approach learning from others in a disciplined and highly productive way, which will complement the other micro-economic reforms being implemented in Australia. (author). 26 refs.

  5. The General Concept of Benchmarking and Its Application in Higher Education in Europe

    Science.gov (United States)

    Nazarko, Joanicjusz; Kuzmicz, Katarzyna Anna; Szubzda-Prutis, Elzbieta; Urban, Joanna

    2009-01-01

    The purposes of this paper are twofold: a presentation of the theoretical basis of benchmarking and a discussion on practical benchmarking applications. Benchmarking is also analyzed as a productivity accelerator. The authors study benchmarking usage in the private and public sectors with due consideration of the specificities of the two areas.…

  6. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure the feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse, based on a unified reference model for XML warehouses and featuring XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  7. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  8. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    2003-01-01

This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy; in other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs: prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth.
Throughout

  9. Benchmarking the Netherlands. Benchmarking for growth

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-01-01

This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy; in other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs: prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity

  10. Chlamydia - CDC Fact Sheet

    Science.gov (United States)


  11. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

Benchmarking is considered one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and to illustrate how the project's systematic implementation led to success.

  12. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  13. Shielding Benchmark Computational Analysis

    International Nuclear Information System (INIS)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-01-01

Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for developing more accurate cross-section libraries, developing radiation transport modeling codes, and building accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling the more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper, benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous-energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC)

  14. GASN sheets

    International Nuclear Information System (INIS)

    2013-12-01

This document gathers around 50 detailed sheets which describe and present various aspects, data and information related to the nuclear sector or, more generally, to energy. The following items are addressed: natural and artificial radioactive environment, evolution of energy needs in the world, radioactive wastes, which energy for France tomorrow, the consequences in France of the Chernobyl accident, ammunitions containing depleted uranium, processing and recycling of used nuclear fuel, transport of radioactive materials, seismic risk for the basic nuclear installations, radon, the precautionary principle, the issue of low doses, the EPR, the greenhouse effect, the Oklo nuclear reactors, ITER on the way towards fusion reactors, simulation and nuclear deterrence, crisis management in the nuclear field, does nuclear research put a brake on the development of renewable energies by monopolizing funding, nuclear safety and security, plutonium, generation IV reactors, comparison of different modes of electricity production, medical exposure to ionizing radiations, the control of nuclear activities, food preservation by ionization, photovoltaic solar collectors, polonium-210, the dismantling of nuclear installations, wind energy, desalination and nuclear reactors, from non-communication to transparency about nuclear safety, the Jules Horowitz reactor, CO 2 capture and storage, hydrogen, solar energy, radium, the subcontractors of maintenance of the nuclear fleet, biomass, internal radio-contamination, epidemiological studies, submarine nuclear propulsion, sea energy, the Three Mile Island accident, the Chernobyl accident, the Fukushima accident, nuclear power after Fukushima

  15. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm…, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from… the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible…

  16. Benchmark neutron porosity log calculations

    International Nuclear Information System (INIS)

    Little, R.C.; Michael, M.; Verghese, K.; Gardner, R.P.

    1989-01-01

    Calculations have been made for a benchmark neutron porosity log problem with the general-purpose Monte Carlo code MCNP and the specific-purpose Monte Carlo code McDNL. For accuracy and timing comparisons, the Cray X-MP and MicroVAX II computers have been used with these codes. The Cray was used for an analog version of the MCNP code, while the MicroVAX II was used for the optimized variance-reduction versions of both codes. Results indicate that the two codes give the same results within calculated standard deviations. Comparisons are given and discussed for accuracy (precision) and computation times for the two codes.

  17. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area...

  18. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...... not public. The survey is a cooperative project "Benchmarking Danish Industries" with CIP/Aalborg University, the Danish Technological University, the Danish Technological Institute and Copenhagen Business School as consortia partners. The project has been funded by the Danish Agency for Trade and Industry...

  19. RB reactor benchmark cores

    International Nuclear Information System (INIS)

    Pesic, M.

    1998-01-01

    A selected set of the RB reactor benchmark cores is presented in this paper. The first results of validation of the well-known Monte Carlo code MCNP and the adjoining neutron cross-section libraries are given. They confirm the idea behind the proposal of the new U-D2O criticality benchmark system and support the intention to include this system, in the near future, in the next edition of the recent OECD/NEA project, the International Handbook of Evaluated Criticality Safety Benchmark Experiments. (author)

  20. Benchmarking ENDF/B-VII.0

    International Nuclear Information System (INIS)

    Marck, Steven C. van der

    2006-01-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding, many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), the Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and Teflon). For testing delayed neutron data, more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA, Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257...

  1. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm-efficiency concept: firm efficiency means the revealed performance (how well the firm performs in the actual market environment), given the basic characteristics of the firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality, and work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for managers to continuously improve their firm's efficiency and effectiveness, and their need to know the success factors and competitiveness determinants, determine what performance measures are most critical to their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables used to infer performance are often interdependent for operational reasons; hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other types of profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.
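
The kind of financial-ratio benchmarking the paper describes can be illustrated with a minimal sketch: position one firm's ratio against a peer-group distribution. The function name and the peer figures below are invented for illustration, not taken from the paper.

```python
import statistics

def benchmark_ratio(firm_value, peer_values):
    """Position one firm's financial ratio (e.g. return on assets)
    against a peer-group benchmark: return the peer median and the
    firm's percentile rank within the peer distribution."""
    ordered = sorted(peer_values)
    below = sum(1 for v in ordered if v < firm_value)
    percentile = 100.0 * below / len(ordered)
    return statistics.median(ordered), percentile

# Hypothetical ROA figures for a peer group of nine firms.
peers = [0.02, 0.03, 0.05, 0.05, 0.06, 0.07, 0.08, 0.10, 0.12]
median, pct = benchmark_ratio(0.09, peers)
print(f"peer median ROA = {median:.2f}, firm percentile = {pct:.0f}")
```

A real application would add the econometric forecasting step the paper proposes; this sketch only covers the cross-sectional comparison.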

  2. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  3. Omega Hawaii Antenna System: Modification and Validation Tests. Volume 2. Data Sheets.

    Science.gov (United States)

    1979-10-19

    OCR fragments of Data Sheet 5 (DS-5), Radio Field Intensity Measurements, Omega Station Hawaii. The recoverable content notes that certain sites were not considered for a benchmark because of potential hotel construction.

  4. Benchmark calculations of power distribution within assemblies

    International Nuclear Information System (INIS)

    Cavarec, C.; Perron, J.F.; Verwaerde, D.; West, J.P.

    1994-09-01

    The main objective of this benchmark is to compare different techniques for fine flux prediction based upon coarse-mesh diffusion or transport calculations. We proposed 5 "core" configurations including different assembly types (17 x 17 pins; "uranium", "absorber" or "MOX" assemblies), with different boundary conditions. The specification required results in terms of reactivity, pin-by-pin fluxes and production rate distributions. The proposal for these benchmark calculations was made by J.C. LEFEBVRE, J. MONDOT and J.P. WEST, and the specification (with nuclear data, assembly types, core configurations for 2D geometry and results presentation) was distributed to correspondents of the OECD Nuclear Energy Agency. 11 countries and 19 companies answered the exercise proposed by this benchmark. Heterogeneous calculations and homogeneous calculations were made. Various methods were used to produce the results: diffusion (finite differences, nodal...), transport (Pij, Sn, Monte Carlo). This report presents an analysis and intercomparison of all the results received.

  5. Calculation of the 5th AER dynamic benchmark with APROS

    International Nuclear Information System (INIS)

    Puska, E.K.; Kontio, H.

    1998-01-01

    The model used for calculation of the 5th AER dynamic benchmark with the APROS code is presented. In the calculation of the 5th AER dynamic benchmark, the three-dimensional neutronics model of APROS was used. The core was divided axially into 20 nodes according to the specifications of the benchmark, and each group of six identical fuel assemblies was placed into one one-dimensional thermal-hydraulic channel. The five-equation thermal-hydraulic model was used in the benchmark. The plant process and automation were described with a generic VVER-440 plant model created by IVO PE. (author)

  6. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In the article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, on the basis of four different applications of benchmarking. The regulation of utility companies is treated, after which...

  7. An improved benchmark model for the Big Ten critical assembly - 021

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    2010-01-01

    A new benchmark specification is developed for the BIG TEN uranium critical assembly. The assembly has a fast spectrum, and its core contains approximately 10 wt.% enriched uranium. Detailed specifications for the benchmark are provided, and results from the MCNP5 Monte Carlo code using a variety of nuclear-data libraries are given for this benchmark and two others. (authors)

  8. Cloud benchmarking for performance

    OpenAIRE

    Varghese, Blesson; Akgun, Ozgur; Miguel, Ian; Thai, Long; Barker, Adam

    2014-01-01

    Date of Acceptance: 20/09/2014 How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. The above question is addressed by proposing a six-step benchmarking methodology in which a user provides a set of four weights that indicate the importance of each of the following groups: memory, processor, computa...

  9. SCWEB, Scientific Workstation Evaluation Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Raffenetti, R C [Computing Services-Support Services Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439 (United States)

    1988-06-16

    1 - Description of program or function: The SCWEB (Scientific Workstation Evaluation Benchmark) software includes 16 programs which are executed in a well-defined scenario to measure the following performance capabilities of a scientific workstation: implementation of FORTRAN77, processor speed, memory management, disk I/O, monitor (or display) output, scheduling of processing (multiprocessing), and scheduling of print tasks (spooling). 2 - Method of solution: The benchmark programs are: DK1, DK2, and DK3, which do Fourier series fitting based on spline techniques; JC1, which checks the FORTRAN function routines which produce numerical results; JD1 and JD2, which solve dense systems of linear equations in double- and single-precision, respectively; JD3 and JD4, which perform matrix multiplication in single- and double-precision, respectively; RB1, RB2, and RB3, which perform substantial amounts of I/O processing on files other than the input and output files; RR1, which does intense single-precision floating-point multiplication in a tight loop; RR2, which initializes a 512x512 integer matrix in a manner which skips around in the address space rather than initializing each consecutive memory cell in turn; RR3, which writes alternating text buffers to the output file; RR4, which evaluates the timer routines and demonstrates that they conform to the specification; and RR5, which determines whether the workstation is capable of executing a 4-megabyte program.
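
The RR2 access pattern, initializing a matrix by skipping around the address space rather than walking it sequentially, can be sketched as follows. The original benchmarks are FORTRAN77 programs, so this Python version, with invented function names and stride value, is only an illustration of the idea.

```python
import time

def init_sequential(n):
    """Initialize an n x n matrix cell by cell in row-major order."""
    m = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            m[i][j] = i * n + j
    return m

def init_strided(n, stride=17):
    """Initialize the same matrix, but visit cells in a strided order,
    deliberately skipping around the address space as RR2 does.
    The stride must be coprime with n*n so every cell is visited once."""
    m = [[0] * n for _ in range(n)]
    total = n * n
    for k in range(total):
        idx = (k * stride) % total
        i, j = divmod(idx, n)
        m[i][j] = idx
    return m

def timed(fn, *args):
    """Wall-clock time of one call, in seconds."""
    t0 = time.perf_counter()
    fn(*args)
    return time.perf_counter() - t0

if __name__ == "__main__":
    n = 512
    print(f"sequential: {timed(init_sequential, n):.3f} s")
    print(f"strided:    {timed(init_strided, n):.3f} s")
```

On cached architectures the strided variant typically runs slower for large matrices, which is exactly the memory-management behavior RR2 probes.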

  10. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    : SuperMicro X7DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with NVIDIA graphics card (see Chapter 5 for full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of the time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed order of magnitude speedup over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.

  11. A simplified 2D HTTR benchmark problem

    International Nuclear Information System (INIS)

    Zhang, Z.; Rahnema, F.; Pounders, J. M.; Zhang, D.; Ougouag, A.

    2009-01-01

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of relevant whole-core configurations. In this paper we have created a numerical benchmark problem in a 2D configuration typical of a high-temperature gas-cooled prismatic core. This problem was derived from the HTTR start-up experiment. For code-to-code verification, complex details of the geometry and material specification of the physical experiments are not necessary. To this end, the benchmark problem presented here is derived by simplifications that remove the unnecessary details while retaining the heterogeneity and the major physics properties from the neutronics viewpoint. Also included here is a six-group material (macroscopic) cross-section library for the benchmark problem. This library was generated using the lattice depletion code HELIOS. Using this library, benchmark-quality Monte Carlo solutions are provided for three different configurations (all-rods-in, partially-controlled and all-rods-out). The reference solutions include the core eigenvalue, block (assembly) averaged fuel pin fission density distributions, and absorption rate in absorbers (burnable poison and control rods). (authors)
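
As a toy illustration of the kind of eigenvalue quantity such benchmarks report, the infinite-medium multiplication factor of a homogeneous medium can be computed directly from macroscopic group cross sections. The sketch below uses a two-group model with downscatter only, and the numbers are invented for illustration, not HTTR data or the six-group library described above.

```python
def k_infinity(nu_sigf, sig_a, sig_s12):
    """k-infinity of a homogeneous two-group medium, downscatter only.

    nu_sigf -- (nu*Sigma_f1, nu*Sigma_f2), fission production cross sections
    sig_a   -- (Sigma_a1, Sigma_a2), absorption cross sections
    sig_s12 -- scattering cross section from group 1 to group 2
    All quantities are macroscopic cross sections in 1/cm.
    """
    # In an infinite medium, fast-group scattering feeds the thermal group:
    # balance in group 2 gives phi2 = sig_s12 * phi1 / Sigma_a2.
    phi1 = 1.0
    phi2 = sig_s12 * phi1 / sig_a[1]
    production = nu_sigf[0] * phi1 + nu_sigf[1] * phi2
    absorption = sig_a[0] * phi1 + sig_a[1] * phi2
    return production / absorption

# Invented illustrative numbers chosen to give an exactly critical medium.
print(k_infinity(nu_sigf=(0.005, 0.10), sig_a=(0.010, 0.080), sig_s12=0.020))
```

A whole-core benchmark solution of course requires spatial transport or diffusion; this zero-dimensional balance only shows how the group constants enter the eigenvalue.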

  12. Benchmarking reference services: an introduction.

    Science.gov (United States)

    Marshall, J G; Buchanan, H S

    1995-01-01

    Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.

  13. Introduction to 'International Handbook of Criticality Safety Benchmark Experiments'

    International Nuclear Information System (INIS)

    Komuro, Yuichi

    1998-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) is now an official activity of the Organization for Economic Cooperation and Development-Nuclear Energy Agency (OECD-NEA). 'International Handbook of Criticality Safety Benchmark Experiments' was prepared and is updated year by year by the working group of the project. This handbook contains criticality safety benchmark specifications that have been derived from experiments that were performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculation techniques used. The author briefly introduces the informative handbook and would like to encourage Japanese engineers who are in charge of nuclear criticality safety to use the handbook. (author)

  14. Standard Guide for Benchmark Testing of Light Water Reactor Calculations

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...

  15. Benchmarking HIV health care

    DEFF Research Database (Denmark)

    Podlekareva, Daria; Reekie, Joanne; Mocroft, Amanda

    2012-01-01

    ABSTRACT: BACKGROUND: State-of-the-art care involving the utilisation of multiple health care interventions is the basis for an optimal long-term clinical prognosis for HIV-patients. We evaluated health care for HIV-patients based on four key indicators. METHODS: Four indicators of health care we...... document pronounced regional differences in adherence to guidelines and can help to identify gaps and direct target interventions. It may serve as a tool for assessment and benchmarking the clinical management of HIV-patients in any setting worldwide....

  16. Benchmarking Cloud Storage Systems

    OpenAIRE

    Wang, Xing

    2014-01-01

    With the rise of cloud computing, many cloud storage systems like Dropbox, Google Drive and Mega have been built to provide decentralized and reliable file storage. It is thus of prime importance to know their features, performance, and the best way to make use of them. In this context, we introduce BenchCloud, a tool designed as part of this thesis to conveniently and efficiently benchmark any cloud storage system. First, we provide a study of six commonly-used cloud storage systems to ident...

  17. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    An infrastructure is emerging that enables the positioning of populations of on-line, mobile service users. In step with this, research in the management of moving objects has attracted substantial attention. In particular, quite a few proposals now exist for the indexing of moving objects...... takes into account that the available positions of the moving objects are inaccurate, an aspect largely ignored in previous indexing research. The concepts of data and query enlargement are introduced for addressing inaccuracy. As proof of concepts of the benchmark, the paper covers the application...

  18. Guidebook for Using the Tool BEST Cement: Benchmarking and Energy Savings Tool for the Cement Industry

    Energy Technology Data Exchange (ETDEWEB)

    Galitsky, Christina; Price, Lynn; Zhou, Nan; Fuqiu, Zhou; Huawen, Xiong; Xuemin, Zeng; Lan, Wang

    2008-07-30

    The Benchmarking and Energy Savings Tool (BEST) Cement is a process-based tool based on commercially available efficiency technologies used anywhere in the world applicable to the cement industry. This version has been designed for use in China. No actual cement facility with every single efficiency measure included in the benchmark is likely to exist; however, the benchmark sets a reasonable standard against which plants striving to be the best can compare themselves. The energy consumption of the benchmark facility differs due to differences in processing at a given cement facility. The tool accounts for most of these variables and allows the user to adapt the model to operational variables specific to his/her cement facility. Figure 1 shows the boundaries included in a plant modeled by BEST Cement. In order to model the benchmark, i.e., the most energy-efficient cement facility, so that it represents a facility similar to the user's cement facility, the user is first required to input production variables in the input sheet (see Section 6 for more information on how to input variables). These variables allow the tool to estimate a benchmark facility that is similar to the user's cement plant, giving a better picture of the potential for that particular facility, rather than benchmarking against a generic one. The input variables required include the following: (1) the amount of raw materials used in tonnes per year (limestone, gypsum, clay minerals, iron ore, blast furnace slag, fly ash, slag from other industries, natural pozzolans, limestone powder (used post-clinker stage), municipal wastes and others), and the amount of raw materials that are preblended (prehomogenized and proportioned) and crushed (in tonnes per year); (2) the amount of additives that are dried and ground (in tonnes per year); (3) the production of clinker (in tonnes per year) from each kiln by kiln type; (4) the amount of raw materials, coal and clinker that is ground by mill type (in tonnes per
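
The production variables listed above can be pictured as a simple data container that a benchmarking tool might populate from its input sheet. The field names below are invented for illustration; they are not BEST Cement's own spreadsheet labels.

```python
from dataclasses import dataclass, field

@dataclass
class CementPlantInputs:
    """Illustrative container for the production variables a
    plant-specific benchmark needs (field names are hypothetical)."""
    raw_materials_t: dict = field(default_factory=dict)   # tonnes/year by material
    preblended_crushed_t: float = 0.0                     # tonnes/year preblended and crushed
    additives_dried_ground_t: float = 0.0                 # tonnes/year dried and ground
    clinker_by_kiln_t: dict = field(default_factory=dict) # tonnes/year by kiln type

    def total_raw_materials(self) -> float:
        """Total raw-material throughput in tonnes per year."""
        return sum(self.raw_materials_t.values())

# Hypothetical plant data.
plant = CementPlantInputs(
    raw_materials_t={"limestone": 1_200_000, "clay": 150_000, "gypsum": 60_000},
    clinker_by_kiln_t={"preheater/precalciner": 900_000},
)
print(plant.total_raw_materials())  # 1410000
```

The actual tool compares such inputs against the modeled best-practice facility; this sketch only shows the shape of the data it collects.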

  19. Core Benchmarks Descriptions

    International Nuclear Information System (INIS)

    Pavlovichev, A.M.

    2001-01-01

    Current regulations for the design of new fuel cycles for nuclear power installations require a calculational justification to be performed by certified computer codes. This guarantees that the calculational results will be within the limits of the declared uncertainties indicated in a certificate issued by Gosatomnadzor of the Russian Federation (GAN) for the corresponding computer code. A formal justification of the declared uncertainties is the comparison of calculational results obtained by a commercial code with the results of experiments, or of calculational tests computed, with a defined uncertainty, by certified precision codes of the MCU type or similar. The present level of international cooperation enlarges the bank of experimental and calculational benchmarks acceptable for certification of commercial codes used for the design of fuel loadings with MOX fuel. In particular, work is practically finished on forming the list of calculational benchmarks for certification of the code TVS-M as applied to MOX fuel assembly calculations. Results of these activities are presented.

  20. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts to...... contribution to the discussions within the EU-sponsored BEST Thematic Network (Benchmarking European Sustainable Transport) which ran from 2000 to 2003....

  1. Benchmarking in Czech Higher Education

    OpenAIRE

    Plaček Michal; Ochrana František; Půček Milan

    2015-01-01

    The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. This expressed need, and the importance of benchmarking as a very suitable performance-management tool in less developed countries, are the impetus for the second part of our article. Base...

  2. Power reactor pressure vessel benchmarks

    International Nuclear Information System (INIS)

    Rahn, F.J.

    1978-01-01

    A review is given of the current status of experimental and calculational benchmarks for use in understanding the radiation embrittlement effects in the pressure vessels of operating light water power reactors. The requirements of such benchmarks for application to pressure vessel dosimetry are stated. Recent developments in active and passive neutron detectors sensitive in the ranges of importance to embrittlement studies are summarized and recommendations for improvements in the benchmark are made. (author)

  3. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  4. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  5. Shielding benchmark test

    International Nuclear Information System (INIS)

    Kawai, Masayoshi

    1984-01-01

    Iron data in JENDL-2 have been tested by analyzing shielding benchmark experiments for neutron transmission through an iron block, performed at KFK using a Cf-252 neutron source and at ORNL using a collimated neutron beam from a reactor. The analyses are made by RADHEAT-V4, a shielding analysis code system developed at JAERI. The calculated results are compared with the measured data. As for the KFK experiments, the C/E values are about 1.1. For the ORNL experiments, the calculated values agree with the measured data within an accuracy of 33% for the off-center geometry. The D-T neutron transmission measurements through a carbon sphere made at LLNL are also analyzed preliminarily by using the revised JENDL data for fusion neutronics calculation. (author)

  6. Benchmarking monthly homogenization algorithms

    Science.gov (United States)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
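
    Metric (i), the centered root-mean-square error, is straightforward to state in code. The sketch below is a minimal illustration with made-up series, not the HOME validation code: both series have their means removed, so the metric measures errors in variability rather than in the overall level.

```python
import math

def centered_rmse(homogenized, truth):
    """Centered RMSE: RMS difference between the two series' anomalies."""
    n = len(truth)
    mean_h = sum(homogenized) / n
    mean_t = sum(truth) / n
    return math.sqrt(
        sum(((h - mean_h) - (t - mean_t)) ** 2
            for h, t in zip(homogenized, truth)) / n
    )

# Made-up monthly temperature series (degrees C)
truth = [10.0, 10.5, 11.0, 10.2, 10.8]
homog = [10.1, 10.4, 11.2, 10.1, 10.9]
print(round(centered_rmse(homog, truth), 3))  # 0.12
```

    The same function can be applied at different averaging scales (monthly, annual, network mean) by first aggregating the series, which is how scale-dependent skill is assessed.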

  7. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  8. SSI and structural benchmarks

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Miller, C.A.; Costantino, C.J.; Graves, H.

    1987-01-01

    This paper presents the latest results of the ongoing program entitled Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focused on three tasks, namely: (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it was concluded that pore water can significantly influence the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was extended to include cases associated with variable water table depths. In this paper, results related to cut-off depths, beyond which the pore water effects can be ignored in seismic calculations, are addressed. Comprehensive numerical data are given for soil configurations typical of those encountered at nuclear plant sites. These data were generated using a modified version of the SLAM code, which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054), which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on structural benchmarks are described.

  9. Review for session K - benchmarks

    International Nuclear Information System (INIS)

    McCracken, A.K.

    1980-01-01

    Eight of the papers to be considered in Session K are directly concerned, at least in part, with the Pool Critical Assembly (P.C.A.) benchmark at Oak Ridge. The remaining seven papers in this session, the subject of this review, are concerned with a variety of topics related to the general theme of benchmarks and will be considered individually.

  10. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  11. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a bench-marked series should reproduce the movement and signs in the original series. We show that the widely used variants of Denton (1971) method and the growth

  12. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to p...

  13. EPA's Benchmark Dose Modeling Software

    Science.gov (United States)

    The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors apply benchmark dose (BMD) methods in EPA's human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...

  14. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  15. Benchmarking: A Process for Improvement.

    Science.gov (United States)

    Peischl, Thomas M.

    One problem with the outcome-based measures used in higher education is that they measure quantity but not quality. Benchmarking, or the use of some external standard of quality to measure tasks, processes, and outputs, partially solves that difficulty. Benchmarking allows for the establishment of a systematic process to indicate if outputs…

  16. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  17. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. How to Advance TPC Benchmarks with Dependability Aspects

    Science.gov (United States)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact both on the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring their performance. While TPC-E measures the recovery time after some system failures, TPC-H and TPC-C require only functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that systems should nowadays be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  19. Benchmarking school nursing practice: the North West Regional Benchmarking Group

    OpenAIRE

    Littler, Nadine; Mullen, Margaret; Beckett, Helen; Freshney, Alice; Pinder, Lynn

    2016-01-01

    It is essential that the quality of care is reviewed regularly through robust processes such as benchmarking to ensure all outcomes and resources are evidence-based so that children and young people’s needs are met effectively. This article provides an example of the use of benchmarking in school nursing practice. Benchmarking has been defined as a process for finding, adapting and applying best practices (Camp, 1994). This concept was first adopted in the 1970s ‘from industry where it was us...

  20. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    was its software assurance practices, which seemed to rate well in comparison to the other organizational groups and also seemed to include a larger scope of activities. An unexpected benefit of the software benchmarking study was the identification of many opportunities for collaboration in areas including metrics, training, sharing of CMMI experiences and resources such as instructors and CMMI Lead Appraisers, and even sharing of assets such as documented processes. A further unexpected benefit of the study was the feedback on NASA practices that was received from some of the organizations interviewed. From that feedback, other potential areas where NASA could improve were highlighted, such as accuracy of software cost estimation and budgetary practices. The detailed report contains discussion of the practices noted in each of the topic areas, as well as a summary of observations and recommendations from each of the topic areas. The resulting 24 recommendations from the topic areas were then consolidated to eliminate duplication and culled into a set of 14 suggested actionable recommendations. This final set of actionable recommendations, listed below, are items that can be implemented to improve NASA's software engineering practices and to help address many of the items that were listed in the NASA top software engineering issues. 1. Develop and implement standard contract language for software procurements. 2. Advance accurate and trusted software cost estimates for both procured and in-house software and improve the capture of actual cost data to facilitate further improvements. 3. Establish a consistent set of objectives and expectations, specifically types of metrics at the Agency level, so key trends and models can be identified and used to continuously improve software processes and each software development effort. 4. 
Maintain the CMMI Maturity Level requirement for critical NASA projects and use CMMI to measure organizations developing software for NASA. 5

  1. KAERI results for BN600 full MOX benchmark (Phase 4)

    International Nuclear Information System (INIS)

    Lee, Kibog

    2003-01-01

    The purpose of this document is to report the results of KAERI's calculations for Phase 4 of the BN-600 full MOX fueled core benchmark analyses, according to the RCM report of the IAEA CRP Action on Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects. The BN-600 full MOX core model is based on the specification in the document Full MOX Model (Phase4.doc). This document addresses the calculational methods employed in the benchmark analyses and the benchmark results obtained by KAERI.

  2. Virtual machine performance benchmarking.

    Science.gov (United States)

    Langer, Steve G; French, Todd

    2011-10-01

    The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever tightened reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High performance requirements need to be carefully considered if they are to be executed in an environment where the running software has to execute through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising.

  3. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption [Dutch] Greenpeace Netherlands asked CE Delft to design a sustainability benchmark for transport biofuels and to score the various biofuels against it. For comparison, electric, hydrogen and petrol or diesel vehicles were also included. Research and growing insight increasingly show that biomass-based transport fuels sometimes cause just as many or even more greenhouse gas emissions than fossil fuels such as petrol and diesel. CE Delft has compiled for Greenpeace Netherlands the current insights on the sustainability of fossil fuels, biofuels and electric vehicles, examining the effects of the fuels on three sustainability criteria, with greenhouse gas emissions weighing heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption.

  4. Benchmarking in academic pharmacy departments.

    Science.gov (United States)

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    This paper discusses benchmarking in academic pharmacy and offers recommendations for the potential uses of benchmarking in academic pharmacy departments. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.

  5. Issues in Benchmark Metric Selection

    Science.gov (United States)

    Crolotte, Alain

    It is true that a metric can influence a benchmark, but will esoteric metrics create more problems than they solve? We answer this question affirmatively by examining the case of the TPC-D metric, which used the much-debated geometric mean for the single-stream test. We show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives, our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
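
    The practical difference between the two means is easy to demonstrate. The toy numbers below are invented, not TPC results: with one query 100 times slower than the rest, the geometric mean stays low while the arithmetic mean tracks total elapsed time, which is one reason the choice of metric can shape vendor tuning priorities.

```python
import math

def arithmetic_mean(xs):
    """Plain average: proportional to total elapsed time."""
    return sum(xs) / len(xs)

def geometric_mean(xs):
    """n-th root of the product, computed in log space for stability."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical per-query times (seconds); one query is 100x slower.
times = [1.0, 1.0, 1.0, 100.0]
print(arithmetic_mean(times))           # 25.75
print(round(geometric_mean(times), 2))  # 3.16
```

    Under the geometric mean, halving the fast queries pays off as much as halving the slow one, so an outlier query can hide behind many fast ones.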

  6. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, none were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the

  7. A Heterogeneous Medium Analytical Benchmark

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1999-01-01

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane-parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels of coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy).

  8. Ice sheet in peril

    DEFF Research Database (Denmark)

    Hvidberg, Christine Schøtt

    2016-01-01

    Earth's large ice sheets in Greenland and Antarctica are major contributors to sea level change. At present, the Greenland Ice Sheet (see the photo) is losing mass in response to climate warming in Greenland (1), but the present changes also include a long-term response to past climate transitions...

  9. Mobility Balance Sheet 2009

    International Nuclear Information System (INIS)

    Jorritsma, P.; Derriks, H.; Francke, J.; Gordijn, H.; Groot, W.; Harms, L.; Van der Loop, H.; Peer, S.; Savelberg, F.; Wouters, P.

    2009-06-01

    The Mobility Balance Sheet provides an overview of the state of the art of mobility in the Netherlands. In addition to describing the development of mobility, this report also provides explanations for the growth of passenger and freight transport. Moreover, the Mobility Balance Sheet also focuses on a topical theme: the effects of economic crises on mobility.

  10. A Global Vision over Benchmarking Process: Benchmarking Based Enterprises

    OpenAIRE

    Sitnikov, Catalina; Giurca Vasilescu, Laura

    2008-01-01

    Benchmarking uses the knowledge and the experience of others to improve the enterprise. Starting from the analysis of the performance and underlying the strengths and weaknesses of the enterprise it should be assessed what must be done in order to improve its activity. Using benchmarking techniques, an enterprise looks at how processes in the value chain are performed. The approach based on the vision “from the whole towards the parts” (a fragmented image of the enterprise’s value chain) redu...

  11. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking...... applications. The present study analyses voluntary benchmarking in a public setting that is oriented towards learning. The study contributes by showing how benchmarking can be mobilised for learning and offers evidence of the effects of such benchmarking for performance outcomes. It concludes that benchmarking...... can enable learning in public settings but that this requires actors to invest in ensuring that benchmark data are directed towards improvement....

  12. A Field-Based Aquatic Life Benchmark for Conductivity in ...

    Science.gov (United States)

    This report adapts the standard U.S. EPA methodology for deriving ambient water quality criteria. Rather than use toxicity test results, the adaptation uses field data to determine the loss of 5% of genera from streams. The method is applied to derive effect benchmarks for dissolved salts as measured by conductivity in Central Appalachian streams using data from West Virginia and Kentucky. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.
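
    A minimal sketch of the idea of a field-based benchmark, under the assumption (not taken from the report itself) that each genus has an estimated extirpation conductivity, an XC95-style value: the benchmark is then the conductivity at which 5% of genera would be lost, i.e. the 5th percentile of those values. The function name, interpolation rule, and data are hypothetical.

```python
def fifth_percentile(values):
    """5th percentile by linear interpolation between order statistics."""
    xs = sorted(values)
    k = 0.05 * (len(xs) - 1)
    i = int(k)
    frac = k - i
    if i + 1 < len(xs):
        return xs[i] + frac * (xs[i + 1] - xs[i])
    return xs[i]

# Hypothetical extirpation conductivities for 11 genera (microsiemens/cm)
xc95 = [180, 220, 300, 450, 600, 750, 900, 1100, 1500, 2000, 2600]
benchmark = fifth_percentile(xc95)
print(round(benchmark))  # 200
```

    The region-specific character of the derivation comes entirely from the field data: a different set of genera and conductivity observations yields a different benchmark.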

  13. Benchmark matrix and guide: Part II.

    Science.gov (United States)

    1991-01-01

    In the last issue of the Journal of Quality Assurance (September/October 1991, Volume 13, Number 5, pp. 14-19), the benchmark matrix developed by Headquarters Air Force Logistics Command was published. Five horizontal levels on the matrix delineate progress in TQM: business as usual, initiation, implementation, expansion, and integration. The six vertical categories that are critical to the success of TQM are leadership, structure, training, recognition, process improvement, and customer focus. In this issue, "Benchmark Matrix and Guide: Part II" will show specifically how to apply the categories of leadership, structure, and training to the benchmark matrix progress levels. At the intersection of each category and level, specific behavior objectives are listed with supporting behaviors and guidelines. Some categories will have objectives that are relatively easy to accomplish, allowing quick progress from one level to the next. Other categories will take considerable time and effort to complete. In the next issue, Part III of this series will focus on recognition, process improvement, and customer focus.

  14. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking...... as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advances. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards...... the conditions upon which the market mechanism is performing within organizations. This paper aims to contribute to research by providing more insight to the conditions for the use of external benchmarking as an element in performance management in organizations. Our study explores a particular type of external...

  15. Benchmarking and energy management schemes in SMEs

    Energy Technology Data Exchange (ETDEWEB)

    Huenges Wajer, Boudewijn [SenterNovem (Netherlands); Helgerud, Hans Even [New Energy Performance AS (Norway); Lackner, Petra [Austrian Energy Agency (Austria)

    2007-07-01

    Many companies are reluctant to focus on energy management or to invest in energy efficiency measures. Nevertheless, there are many good examples proving that the right approach to implementing energy efficiency can very well be combined with the business priorities of most companies. SMEs in particular can benefit from a facilitated European approach because they normally lack the resources and time to invest in energy efficiency. In the EU-supported pilot project BESS, 60 SMEs from 11 European countries in the food and drink industries successfully tested a package of interactive instruments which offers such a facilitated approach. A number of pilot companies show a profit increase of 3 to 10%. The package includes a user-friendly, web-based E-learning scheme for implementing energy management as well as a benchmarking module for company-specific comparison of energy performance indicators. Moreover, it has several practical and tested tools to support the cycle of continuous improvement of energy efficiency in the company, such as checklists, sector-specific measure lists, and templates for audits and energy conservation plans. An important feature, and also a key trigger for companies, is the possibility for SMEs to anonymously benchmark their energy situation against others in the same sector. SMEs can participate in a unique web-based benchmarking system to benchmark interactively in a way which fully guarantees confidentiality and safety of company data. Furthermore, the available data can contribute to a bottom-up approach to support the objectives of (national) monitoring and targeting, thereby also contributing to the EU Energy Efficiency and Energy Services Directive. A follow-up project to expand the number of participating SMEs across various sectors is currently being developed.

  16. Benchmarking: contexts and details matter.

    Science.gov (United States)

    Zheng, Siyuan

    2017-07-05

    Benchmarking is an essential step in the development of computational tools. We take this opportunity to pitch in our opinions on tool benchmarking, in light of two correspondence articles published in Genome Biology. Please see the related Li et al. and Newman et al. correspondence articles: www.dx.doi.org/10.1186/s13059-017-1256-5 and www.dx.doi.org/10.1186/s13059-017-1257-4.

  17. Handbook of critical experiments benchmarks

    International Nuclear Information System (INIS)

    Durst, B.M.; Bierman, S.R.; Clayton, E.D.

    1978-03-01

    Data from critical experiments have been collected together for use as benchmarks in evaluating calculational techniques and nuclear data. These benchmarks have been selected from the numerous experiments performed on homogeneous plutonium systems. No attempt has been made to reproduce all of the data that exist. The primary objective in the collection of these data is to present representative experimental data, defined in a concise, standardized format, that can easily be translated into computer code input.

  18. Analysis of Benchmark 2 results

    International Nuclear Information System (INIS)

    Bacha, F.; Lefievre, B.; Maillard, J.; Silva, J.

    1994-01-01

    The code GEANT315 has been compared to different codes in two benchmarks. We analyze its performance through our results, especially in the thick-target case. In spite of gaps in nucleus-nucleus interaction theories at intermediate energies, benchmarks allow possible improvements of the physical models used in our codes. Thereafter, a scheme for a radioactive waste burning system is studied. (authors). 4 refs., 7 figs., 1 tab.

  19. Benchmarks for GADRAS performance validation

    International Nuclear Information System (INIS)

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L. Jr.

    2009-01-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  20. Benchmarking in Czech Higher Education

    Directory of Open Access Journals (Sweden)

    Plaček Michal

    2015-12-01

    Full Text Available The first part of this article surveys current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used at this level today, but most actors show some interest in its introduction. The expressed need for it, and the importance of benchmarking as a very suitable performance-management tool in less developed countries, are the impetus for the second part of our article. Based on an analysis of the current situation and existing needs in the Czech Republic, as well as on a comparison with international experience, recommendations for public policy are made, which lie in the design of a model of collaborative benchmarking for Czech economics and management higher-education programs. Because the fully complex model cannot be implemented immediately (as also confirmed by structured interviews with academics who have practical experience with benchmarking), the final model is designed as a multi-stage model. This approach helps eliminate major barriers to the implementation of benchmarking.

  1. International Criticality Safety Benchmark Evaluation Project (ICSBEP) - ICSBEP 2015 Handbook

    International Nuclear Information System (INIS)

    Bess, John D.

    2015-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy (DOE). The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Nuclear Energy Agency (NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculation techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirements and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross-section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span approximately 69000 pages and contain 567 evaluations with benchmark specifications for 4874 critical, near-critical or subcritical configurations, 31 criticality alarm placement/shielding configurations with multiple dose points for each, and 207 configurations that have been categorised as fundamental physics measurements that are relevant to criticality safety applications. New to the handbook are benchmark specifications for neutron activation foil and thermoluminescent dosimeter measurements performed at the SILENE critical assembly in Valduc, France as part of a joint venture in 2010 between the US DOE and the French Alternative Energies and Atomic Energy Commission (CEA). A photograph of this experiment is shown on the front cover. Experiments that are found unacceptable for use as criticality safety benchmark experiments are discussed in these volumes.

  2. Carbon sheet pumping

    International Nuclear Information System (INIS)

    Ohyabu, N.; Sagara, A.; Kawamura, T.; Motojima, O.; Ono, T.

    1993-07-01

    A new hydrogen pumping scheme has been proposed which controls particle recycling for significant improvement of the energy confinement in toroidal magnetic fusion devices. In this scheme, part of the vacuum vessel surface near the divertor is covered with carbon sheets of large surface area. Before discharge initiation, the sheets are baked at 700–1000 °C to remove previously trapped hydrogen atoms. After being cooled to below ~200 °C, the unsaturated carbon sheets effectively trap high-energy charge-exchange hydrogen atoms during a discharge, and the overall pumping efficiency can be as high as ~50%. (author)

  3. Selectively reflective transparent sheets

    Science.gov (United States)

    Waché, Rémi; Florescu, Marian; Sweeney, Stephen J.; Clowes, Steven K.

    2015-08-01

    We investigate the possibility of selectively reflecting certain wavelengths while maintaining the optical properties over other spectral ranges. This is of particular interest for transparent materials, which for specific applications may require high reflectivity at pre-determined frequencies. Although techniques such as coatings currently exist to produce selective reflection, this work focuses on new approaches for the mass production of polyethylene sheets which incorporate either additives or surface patterning for selective reflection between 8 and 13 μm. Typical additives used to produce a greenhouse effect in plastics include particles such as clays, silica or hydroxide materials. However, the absorption of thermal radiation is less efficient than the decrease of emissivity, as it can be compared with the inclusion of Lambertian materials. Photonic band gap engineering by the periodic structuring of metamaterials is known in nature for producing the vivid bright colors of certain organisms via strong wavelength-selective reflection. Research to artificially engineer such structures has mainly focused on wavelengths in the visible and near infrared; however, few studies to date have investigated the properties of metastructures in the mid-infrared range, even though the patterning of microstructure there is easier to achieve. We present preliminary results on the diffuse reflectivity using FDTD simulations and analyze the technical feasibility of these approaches.

  4. The International Criticality Safety Benchmark Evaluation Project (ICSBEP)

    International Nuclear Information System (INIS)

    Briggs, J.B.

    2003-01-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organisation for Economic Cooperation and Development (OECD) - Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Yugoslavia, Kazakhstan, Israel, Spain, and Brazil are now participating. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled 'International Handbook of Evaluated Criticality Safety Benchmark Experiments.' The 2003 Edition of the Handbook contains benchmark model specifications for 3070 critical or subcritical configurations that are intended for validating computer codes that calculate effective neutron multiplication and for testing basic nuclear data. (author)

  5. Standard specification for Nickel-Chromium-Iron alloys (UNS N06600, N06601, N06603, N06690, N06693, N06025, N06045 and N06696), Nickel-Chromium-Cobalt-Molybdenum alloy (UNS N06617), and Nickel-Iron-Chromium-Tungsten alloy (UNS N06674) plate, sheet and strip

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2011-01-01

    Standard specification for Nickel-Chromium-Iron alloys (UNS N06600, N06601, N06603, N06690, N06693, N06025, N06045 and N06696), Nickel-Chromium-Cobalt-Molybdenum alloy (UNS N06617), and Nickel-Iron-Chromium-Tungsten alloy (UNS N06674) plate, sheet and strip

  6. Model based energy benchmarking for glass furnace

    International Nuclear Information System (INIS)

    Sardeshpande, Vishal; Gaitonde, U.N.; Banerjee, Rangan

    2007-01-01

    Energy benchmarking of processes is important for setting energy efficiency targets and planning energy management strategies. Most approaches used for energy benchmarking are based on statistical methods, comparing with a sample of existing plants. This paper presents a model-based approach for benchmarking of energy-intensive industrial processes and illustrates this approach for industrial glass furnaces. A simulation model for a glass furnace is developed using mass and energy balances, heat loss equations for the different zones, and empirical equations based on operating practices. The model is checked with field data from end-fired industrial glass furnaces in India. The simulation model enables calculation of the energy performance of a given furnace design. The model results show the potential for improvement and the impact of different operating and design preferences on specific energy consumption. A case study for a 100 TPD end-fired furnace is presented. An achievable minimum energy consumption of about 3830 kJ/kg is estimated for this furnace. The useful heat carried by the glass is about 53% of the heat supplied by the fuel. Actual furnaces operating at these production scales have a potential for reduction in energy consumption of about 20-25%.
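The balance logic described in the abstract can be sketched numerically. The figures below reuse the abstract's ~53% useful-heat share and 3830 kJ/kg modelled minimum, but the assumed fuel input and pull rate are hypothetical illustrations, not values from the paper:

```python
# Toy energy-balance sketch of model-based furnace benchmarking.
# The fuel input below is an assumed figure for illustration only.

def specific_energy_consumption(fuel_energy_kj, glass_pull_kg):
    """Energy supplied per kg of glass produced (kJ/kg)."""
    return fuel_energy_kj / glass_pull_kg

def useful_heat_fraction(heat_to_glass_kj, fuel_energy_kj):
    """Share of fuel heat carried away by the glass melt."""
    return heat_to_glass_kj / fuel_energy_kj

# Hypothetical 100 TPD furnace over one day:
pull = 100_000                 # kg of glass per day
fuel = 100_000 * 5000          # kJ/day supplied by fuel (assumed)
to_glass = 0.53 * fuel         # abstract: ~53% of heat is carried by the glass

sec = specific_energy_consumption(fuel, pull)    # actual kJ/kg
gap = sec - 3830                                 # distance to modelled minimum
print(f"SEC = {sec:.0f} kJ/kg, improvement potential ~ {gap / sec:.0%}")
```

With these assumed inputs the sketch reproduces the order of magnitude of the reported 20-25% improvement potential.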

  7. Boiling water reactor turbine trip (TT) benchmark

    International Nuclear Information System (INIS)

    2001-06-01

    In the field of coupled neutronics/thermal-hydraulics computation there is a need to enhance scientific knowledge in order to develop advanced modelling techniques for new nuclear technologies and concepts, as well as for current nuclear applications. Recently developed 'best-estimate' computer code systems for modelling 3-D coupled neutronics/thermal-hydraulics transients in nuclear cores and for the coupling of core phenomena and system dynamics (PWR, BWR, VVER) need to be compared against each other and validated against results from experiments. International benchmark studies have been set up for this purpose. The present volume describes the specification of such a benchmark. The transient addressed is a turbine trip (TT) in a BWR involving pressurization events in which the coupling between core phenomena and system dynamics plays an important role. In addition, the data made available from experiments carried out at the plant make the present benchmark very valuable. The data used are from events at the Peach Bottom 2 reactor (a GE-designed BWR/4). (authors)

  8. Equilibrium Partitioning Sediment Benchmarks (ESBs) for the ...

    Science.gov (United States)

    This document describes procedures to determine the concentrations of nonionic organic chemicals in sediment interstitial waters. In previous ESB documents, the general equilibrium partitioning (EqP) approach was chosen for the derivation of sediment benchmarks because it accounts for the varying bioavailability of chemicals in different sediments and allows for the incorporation of the appropriate biological effects concentration. This provides for the derivation of benchmarks that are causally linked to the specific chemical, applicable across sediments, and appropriately protective of benthic organisms.  This equilibrium partitioning sediment benchmark (ESB) document was prepared by scientists from the Atlantic Ecology Division, Mid-Continent Ecology Division, and Western Ecology Division, the Office of Water, and private consultants. The document describes procedures to determine the interstitial water concentrations of nonionic organic chemicals in contaminated sediments. Based on these concentrations, guidance is provided on the derivation of toxic units to assess whether the sediments are likely to cause adverse effects to benthic organisms. The equilibrium partitioning (EqP) approach was chosen because it is based on the concentrations of chemical(s) that are known to be harmful and bioavailable in the environment.  This document, and five others published over the last nine years, will be useful for the Program Offices, including Superfund, a
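The core EqP relation the document builds on can be sketched as follows. The partitioning formula C_iw = C_sed / (f_oc · K_oc) is the standard EqP expression for nonionic organics, but every numeric value below (sediment concentration, organic-carbon fraction, K_oc, effect concentration) is an invented illustration, not a benchmark from the ESB document:

```python
# Sketch of the EqP screening calculation: estimate the interstitial-water
# concentration from the bulk sediment concentration via organic-carbon
# partitioning, then express it as toxic units. All values are assumed.

def interstitial_water_conc(c_sed_ug_kg, f_oc, k_oc_l_kg):
    """EqP relation: C_iw (ug/L) = C_sed / (f_oc * K_oc)."""
    return c_sed_ug_kg / (f_oc * k_oc_l_kg)

def toxic_units(c_iw_ug_l, effect_conc_ug_l):
    """TU >= 1 suggests likely adverse effects on benthic organisms."""
    return c_iw_ug_l / effect_conc_ug_l

# Hypothetical sediment sample:
c_iw = interstitial_water_conc(c_sed_ug_kg=50_000,   # bulk sediment conc.
                               f_oc=0.02,            # 2% organic carbon
                               k_oc_l_kg=100_000)    # assumed K_oc
tu = toxic_units(c_iw, effect_conc_ug_l=25.0)        # assumed effect conc.
print(f"C_iw = {c_iw:.3g} ug/L, TU = {tu:.2f}")
```

The point of the approach, per the abstract, is that the same sediment concentration yields different toxic units in sediments with different organic-carbon content, reflecting bioavailability.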

  9. Analysis of an OECD/NEA high-temperature reactor benchmark

    International Nuclear Information System (INIS)

    Hosking, J. G.; Newton, T. D.; Koeberl, O.; Morris, P.; Goluoglu, S.; Tombakoglu, T.; Colak, U.; Sartori, E.

    2006-01-01

    This paper describes analyses of the OECD/NEA HTR benchmark organized by the 'Working Party on the Scientific Issues of Reactor Systems (WPRS)', formerly the 'Working Party on the Physics of Plutonium Fuels and Innovative Fuel Cycles'. The benchmark was specifically designed to provide inter-comparisons for plutonium and thorium fuels when used in HTR systems. Calculations considering uranium fuel have also been included in the benchmark, in order to identify any increased uncertainties when using plutonium or thorium fuels. The benchmark consists of five phases, which include cell and whole-core calculations. Analysis of the benchmark has been performed by a number of international participants, who have used a range of deterministic and Monte Carlo code schemes. For each of the benchmark phases, neutronics parameters have been evaluated. Comparisons are made between the results of the benchmark participants, as well as comparisons between the predictions of the deterministic calculations and those from detailed Monte Carlo calculations. (authors)

  10. Anesthesia Fact Sheet

    Science.gov (United States)

    What is anesthesia? Anesthesia is a medical treatment that prevents patients ...

  11. Structural Biology Fact Sheet

    Science.gov (United States)

    What is structural biology? Structural biology is the study of how biological ...

  12. Radiation protecting sheet

    International Nuclear Information System (INIS)

    Makiguchi, Hiroshi.

    1989-01-01

    As protection sheets used in radioactivity administration areas, a thermoplastic polyurethane composition sheet with a thickness of less than 0.5 mm, a solid content (ash) of less than 5%, and a Shore D hardness of less than 60 is used. A composite sheet with a thickness of less than 0.5 mm, laminated or coated with such a thermoplastic polyurethane composition as a surface layer, and the thermoplastic polyurethane composition sheet subjected to secondary fabrication are also used. This can satisfy all of the required properties, such as draping property, abrasion resistance, high breaking strength, necking resistance, and endurance strength, as well as chemical resistance and easy burnability in a burning furnace. Further, by forming unevenness on the surface by means of embossing, etc., safety problems such as slippage during operation and walking can be overcome. (T.M.)

  13. Energy information sheets

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-07-01

    The National Energy Information Center (NEIC), as part of its mission, provides energy information and referral assistance to Federal, State, and local governments, the academic community, business and industrial organizations, and the public. The Energy Information Sheets were developed to provide general information on various aspects of fuel production, prices, consumption, and capability. Additional information on related subject matter can be found in other Energy Information Administration (EIA) publications, as referenced at the end of each sheet.

  14. Regional Competitive Intelligence: Benchmarking and Policymaking

    OpenAIRE

    Huggins , Robert

    2010-01-01

    Benchmarking exercises have become increasingly popular within the sphere of regional policymaking in recent years. The aim of this paper is to analyse the concept of regional benchmarking and its links with regional policymaking processes. It develops a typology of regional benchmarking exercises and regional benchmarkers, and critically reviews the literature, both academic and policy oriented. It is argued that critics who suggest regional benchmarking is a flawed concept and technique fai...

  15. Using the Global Burden of Disease study to assist development of nation-specific fact sheets to promote prevention and control of hypertension and reduction in dietary salt: a resource from the World Hypertension League.

    Science.gov (United States)

    Campbell, Norm R C; Lackland, Daniel T; Lisheng, Liu; Niebylski, Mark L; Nilsson, Peter M; Zhang, Xin-Hua

    2015-03-01

    Increased blood pressure and high dietary salt are leading risks for death and disability globally. Reducing the burden of both health risks is among the United Nations' targets for reducing noncommunicable disease. Nongovernmental organizations and individuals can assist by ensuring widespread dissemination of the best available facts and recommended interventions for both health risks. Simple but impactful fact sheets can be useful for informing the public, healthcare professionals, and policy makers. The World Hypertension League has developed fact sheets on dietary salt and hypertension, but in many circumstances the greatest impact would be obtained from national-level fact sheets. This manuscript provides instructions and a template for developing fact sheets based on the Global Burden of Disease study and national survey data. ©2015 Wiley Periodicals, Inc.

  16. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-11-01

    Full Text Available This paper reviews the role of human resource management (HRM which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  17. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  18. Energized Oxygen : Speiser Current Sheet Bifurcation

    Science.gov (United States)

    George, D. E.; Jahn, J. M.

    2017-12-01

    A single population of energized oxygen (O+) is shown to produce a cross-tail bifurcated current sheet in 2.5D PIC simulations of the magnetotail without the influence of magnetic reconnection. Treatment of oxygen in simulations of space plasmas, specifically of a magnetotail current sheet, has been limited to thermal energies, despite observations of, and mechanisms explaining, energized ions. We performed simulations with a homogeneous oxygen background, energized in a physically appropriate manner, to study the behavior of current sheets and magnetic reconnection, specifically their bifurcation. This work uses a 2.5D explicit Particle-in-Cell (PIC) code to investigate the dynamics of energized heavy ions as they stream dawn-to-dusk in the magnetotail current sheet. We present a simulation study of the response of a current sheet system to energized oxygen ions. We establish a well-known and well-studied 2-species GEM Challenge Harris current sheet as a starting point. This system is known to eventually evolve and produce magnetic reconnection upon thinning of the current sheet. We added a uniform distribution of thermal O+ to the background; this 3-species system is also known to eventually evolve and produce magnetic reconnection. We add one additional variable to the system by giving the O+ an initial duskward velocity to energize it. We also traced individual particle motion within the PIC simulation. Three main results are shown. First, energized dawn-dusk streaming ions are clearly seen to exhibit sustained Speiser motion. Second, a single population of heavy ions clearly produces a stable bifurcated current sheet. Third, magnetic reconnection is not required to produce the bifurcated current sheet. Finally, a bifurcated current sheet is compatible with the Harris current sheet model. This work is the first step in a series of investigations aimed at studying the effects of energized heavy ions on magnetic reconnection.
This work differs

  19. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  20. 3-D neutron transport benchmarks

    International Nuclear Information System (INIS)

    Takeda, T.; Ikeda, H.

    1991-03-01

    A set of 3-D neutron transport benchmark problems, proposed by Osaka University to the NEACRP in 1988, has been calculated by many participants, and the corresponding results are summarized in this report. The results for k-eff, control rod worth, and region-averaged fluxes for the four proposed core models, calculated using various 3-D transport codes, are compared and discussed. The calculational methods used were: Monte Carlo, discrete ordinates (Sn), spherical harmonics (Pn), nodal transport, and others. The solutions of the four core models are quite useful as benchmarks for checking the validity of 3-D neutron transport codes.

  1. Strategic behaviour under regulatory benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Jamasb, T. [Cambridge Univ. (United Kingdom). Dept. of Applied Economics; Nillesen, P. [NUON NV (Netherlands); Pollitt, M. [Cambridge Univ. (United Kingdom). Judge Inst. of Management

    2004-09-01

    In order to improve the efficiency of electricity distribution networks, some regulators have adopted incentive regulation schemes that rely on performance benchmarking. Although regulatory benchmarking can influence the 'regulation game', the subject has received limited attention. This paper discusses how strategic behaviour can result in inefficient behaviour by firms. We then use the Data Envelopment Analysis (DEA) method with US utility data to examine the implications of illustrative cases of strategic behaviour reported by regulators. The results show that gaming can have significant effects on the measured performance and profitability of firms. (author)
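As a toy illustration of the frontier-based efficiency measurement underlying DEA: full DEA solves a linear program per firm over multiple inputs and outputs, but in the degenerate single-input, single-output case it reduces to a best-ratio comparison, which is enough to show how a misreported input distorts scores. The utility data are invented:

```python
# Single-input/single-output efficiency frontier, the degenerate case of
# DEA (the general method requires a linear program per firm). All data
# are invented for illustration.

utilities = {            # name: (input cost, output delivered)
    "A": (100.0, 500.0),
    "B": (120.0, 480.0),
    "C": (90.0, 540.0),
}

ratios = {name: out / inp for name, (inp, out) in utilities.items()}
best = max(ratios.values())
scores = {name: r / best for name, r in ratios.items()}  # 1.0 = on frontier

for name in sorted(scores):
    print(f"{name}: efficiency {scores[name]:.2f}")
```

A firm that strategically inflates its reported input lowers its own ratio, but a firm that deflates it shifts the frontier and depresses every competitor's measured score, which is the kind of gaming effect the paper examines.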

  2. Atomic Energy Research benchmark activity

    International Nuclear Information System (INIS)

    Makai, M.

    1998-01-01

    The test problems utilized in the validation and verification of computer programs in Atomic Energy Research are collected together here. This is the first step towards issuing a volume in which tests for VVER reactors are collected, along with reference solutions and a number of solutions. The benchmarks do not include the ZR-6 experiments, because these have been published, along with a number of comparisons, in the final reports of TIC. The present collection focuses on operational and mathematical benchmarks which cover almost the entire range of reactor calculations. (Author)

  3. Benchmarking: a method for continuous quality improvement in health.

    Science.gov (United States)

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-05-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted.

  4. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Soil and Litter Invertebrates and Heterotrophic Processes

    Energy Technology Data Exchange (ETDEWEB)

    Will, M.E.

    1994-01-01

    This report presents a standard method for deriving benchmarks for the purpose of 'contaminant screening', performed by comparing measured ambient concentrations of chemicals with the derived benchmark concentrations. The work was performed under Work Breakdown Structure 1.4.12.2.3.04.07.02 (Activity Data Sheet 8304). In addition, this report presents sets of data concerning the effects of chemicals in soil on invertebrates and soil microbial processes, benchmarks for chemicals potentially associated with United States Department of Energy sites, and literature describing the experiments from which data were drawn for benchmark derivation.

  5. Analysis of the impact of correlated benchmark experiments on the validation of codes for criticality safety analysis

    International Nuclear Information System (INIS)

    Bock, M.; Stuke, M.; Behler, M.

    2013-01-01

    The validation of a code for criticality safety analysis requires the recalculation of benchmark experiments. The selected benchmark experiments are chosen such that they have properties similar to the application case that has to be assessed. A common source of benchmark experiments is the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' (ICSBEP Handbook) compiled by the 'International Criticality Safety Benchmark Evaluation Project' (ICSBEP). In order to take full advantage of the information provided by the individual benchmark descriptions for the application case, the recommended procedure is to perform an uncertainty analysis. The latter is based on the uncertainties of experimental results included in most of the benchmark descriptions, and can be performed by means of the Monte Carlo sampling technique. The consideration of uncertainties is also being introduced in the supplementary sheet of DIN 25478 'Application of computer codes in the assessment of criticality safety'. However, for a correct treatment of uncertainties, taking into account the individual uncertainties of the benchmark experiments alone is insufficient. In addition, correlations between benchmark experiments have to be handled correctly. Such correlations can arise, for example, when different cases of a benchmark experiment share the same components, such as fuel pins or fissile solutions. Thus, manufacturing tolerances of these components (e.g. the diameter of the fuel pellets) have to be considered in a consistent manner in all cases of the benchmark experiment. At the 2012 meeting of the Expert Group on 'Uncertainty Analysis for Criticality Safety Assessment' (UACSA) of the OECD/NEA, a benchmark proposal was outlined that aimed at determining the impact of benchmark correlations on the estimation of the computational bias of the neutron multiplication factor (k_eff). The analysis presented here is based on this proposal. (orig.)

  6. BigDataBench: a Big Data Benchmark Suite from Internet Services

    OpenAIRE

    Wang, Lei; Zhan, Jianfeng; Luo, Chunjie; Zhu, Yuqing; Yang, Qiang; He, Yongqiang; Gao, Wanling; Jia, Zhen; Shi, Yingjie; Zhang, Shujie; Zheng, Chen; Lu, Gang; Zhan, Kent; Li, Xiaona; Qiu, Bizhu

    2014-01-01

    As architecture, systems, and data management communities pay greater attention to innovative big data systems and architectures, the pressure of benchmarking and evaluating these systems rises. Considering the broad use of big data systems, big data benchmarks must include diversity of data and workloads. Most of the state-of-the-art big data benchmarking efforts target evaluating specific types of applications or system software stacks, and hence they are not qualified for serving the purpo...

  7. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.; Tyhurst, Janis

    2015-01-01

    This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes comparison of these websites against a list of criterion and presents a list of services that are most commonly deployed by the selected websites. In addition to that, the investigators proposed a list of services that could be provided via the KAUST library website.

  8. Disintegration of liquid sheets

    Science.gov (United States)

    Mansour, Adel; Chigier, Norman

    1990-01-01

    The development, stability, and disintegration of liquid sheets issuing from a two-dimensional air-assisted nozzle is studied. Detailed measurements of mean drop size and velocity are made using a phase Doppler particle analyzer. Without air flow the liquid sheet converges toward the axis as a result of surface tension forces. With airflow a quasi-two-dimensional expanding spray is formed. The air flow causes small variations in sheet thickness to develop into major disturbances with the result that disruption starts before the formation of the main break-up region. In the two-dimensional variable geometry air-blast atomizer, it is shown that the air flow is responsible for the formation of large, ordered, and small chaotic 'cell' structures.

  9. Safety advice sheets

    CERN Multimedia

    HSE Unit

    2013-01-01

    You never know when you might be faced with questions such as: when/how should I dispose of a gas canister? Where can I find an inspection report? How should I handle/store/dispose of a chemical substance…?   The SI section of the DGS/SEE Group is primarily responsible for safety inspections, evaluating the safety conditions of equipment items, premises and facilities. On top of this core task, it also regularly issues “Safety Advice Sheets” on various topics, designed to be of assistance to users but also to recall and reinforce safety rules and procedures. These clear and concise sheets, complete with illustrations, are easy to display in the appropriate areas. The following safety advice sheets have been issued so far: Other sheets will be published shortly. Suggestions are welcome and should be sent to the SI section of the DGS/SEE Group. Please send enquiries to general-safety-visits.service@cern.ch.

  10. The International Criticality Safety Benchmark Evaluation Project

    International Nuclear Information System (INIS)

    Briggs, B. J.; Dean, V. F.; Pesic, M. P.

    2001-01-01

    experimenters or individuals who are familiar with the experimenters or the experimental facility; (3) compile the data into a standardized format; (4) perform calculations of each experiment with standard criticality safety codes, and (5) formally document the work into a single source of verified benchmark critical data. The work of the ICSBEP is documented as an OECD handbook, in 7 volumes, entitled, 'International Handbook of Evaluated Criticality Safety Benchmark Experiments'. This handbook is available on CD-ROM or on the Internet (http://icsbep.inel.gov/icsbep). Over 150 scientists from around the world have combined their efforts to produce this Handbook. The 2000 publication of the handbook will span over 19,000 pages and contain benchmark specifications for approximately 284 evaluations containing 2352 critical configurations. The handbook is currently in use in 45 different countries by criticality safety analysts to perform necessary validation of their calculation techniques and it is expected to be a valuable tool for decades to come. As a result of the efforts of the ICSBEP: (1) a large portion of the tedious, redundant, and very costly research and processing of criticality safety experimental data has been eliminated; (2) the necessary step in criticality safety analyses of validating computer codes with benchmark data is greatly streamlined; (3) gaps in data are being highlighted; (4) lost data are being retrieved; (5) deficiencies and errors in cross section processing codes and neutronic codes are being identified, and (6) over a half-century of valuable criticality safety data are being preserved. (author)

  11. H.B. Robinson-2 pressure vessel benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Remec, I.; Kam, F.B.K.

    1998-02-01

    The H. B. Robinson Unit 2 Pressure Vessel Benchmark (HBR-2 benchmark) is described and analyzed in this report. Analysis of the HBR-2 benchmark can be used as partial fulfillment of the requirements for the qualification of the methodology for calculating neutron fluence in pressure vessels, as required by the U.S. Nuclear Regulatory Commission Regulatory Guide DG-1053, Calculational and Dosimetry Methods for Determining Pressure Vessel Neutron Fluence. Section 1 of this report describes the HBR-2 benchmark and provides all the dimensions, material compositions, and neutron source data necessary for the analysis. The measured quantities, to be compared with the calculated values, are the specific activities at the end of fuel cycle 9. The characteristic feature of the HBR-2 benchmark is that it provides measurements on both sides of the pressure vessel: in the surveillance capsule attached to the thermal shield and in the reactor cavity. In section 2, the analysis of the HBR-2 benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed with three multigroup libraries based on ENDF/B-VI: BUGLE-93, SAILOR-95 and BUGLE-96. The average ratio of the calculated-to-measured specific activities (C/M) for the six dosimeters in the surveillance capsule was 0.90 ± 0.04 for all three libraries. The average C/Ms for the cavity dosimeters (without the neptunium dosimeter) were 0.89 ± 0.10, 0.91 ± 0.10, and 0.90 ± 0.09 for the BUGLE-93, SAILOR-95 and BUGLE-96 libraries, respectively. It is expected that agreement of calculations with measurements similar to that obtained in this research should typically be observed when the discrete-ordinates method and ENDF/B-VI libraries are used for the HBR-2 benchmark analysis.

  12. Ice Sheets & Ice Cores

    DEFF Research Database (Denmark)

    Mikkelsen, Troels Bøgeholm

    Since the discovery of the Ice Ages it has been evident that Earth’s climate is liable to undergo dramatic changes. The previous climatic period, known as the Last Glacial, saw large oscillations in the extent of ice sheets covering the Northern hemisphere. Understanding these oscillations… The first part concerns time series analysis of ice core data obtained from the Greenland Ice Sheet. We analyze parts of the time series where DO-events occur using the so-called transfer operator and compare the results with time series from a simple model capable of switching by either undergoing…

  13. Energy information sheets

    Energy Technology Data Exchange (ETDEWEB)

    1993-12-02

    The National Energy Information Center (NEIC), as part of its mission, provides energy information and referral assistance to Federal, State, and local governments, the academic community, business and industrial organizations, and the general public. Written for the general public, the EIA publication Energy Information Sheets was developed to provide information on various aspects of fuel production, prices, consumption and capability. The information contained herein pertains to energy data as of December 1991. Additional information on related subject matter can be found in other EIA publications as referenced at the end of each sheet.

  14. PMLB: a large benchmark suite for machine learning evaluation and comparison.

    Science.gov (United States)

    Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H

    2017-01-01

    The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.

  15. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Wegner, P.; Wettig, T.

    2003-09-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC. (orig.)

  16. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Stueben, H.; Wegner, P.; Wettig, T.; Wittig, H.

    2004-01-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC

  17. Collisionless current sheet equilibria

    Science.gov (United States)

    Neukirch, T.; Wilson, F.; Allanson, O.

    2018-01-01

    Current sheets are important for the structure and dynamics of many plasma systems. In space and astrophysical plasmas they play a crucial role in activity processes, for example by facilitating the release of magnetic energy via processes such as magnetic reconnection. In this contribution we will focus on collisionless plasma systems. A sensible first step in any investigation of physical processes involving current sheets is to find appropriate equilibrium solutions. The theory of collisionless plasma equilibria is well established, but over the past few years there has been a renewed interest in finding equilibrium distribution functions for collisionless current sheets with particular properties, for example for cases where the current density is parallel to the magnetic field (force-free current sheets). This interest is due to a combination of scientific curiosity and potential applications to space and astrophysical plasmas. In this paper we will give an overview of some of the recent developments, discuss their potential applications and address a number of open questions.

  18. Cholera Fact Sheet

    Science.gov (United States)

    … that includes feedback at the local level and information-sharing at the global level. Cholera cases are …

  19. Pseudomonas - Fact Sheet

    OpenAIRE

    Public Health Agency

    2012-01-01

    Fact sheet on Pseudomonas, including: What is Pseudomonas? What infections does it cause? Who is susceptible to Pseudomonas infection? How will I know if I have a Pseudomonas infection? How can Pseudomonas be prevented from spreading? How can I protect myself from Pseudomonas? How is Pseudomonas infection treated?

  20. NTPR Fact Sheets

    Science.gov (United States)


  1. Production (information sheets)

    NARCIS (Netherlands)

    2007-01-01

    Documentation sheets: Geo energy 2 Integrated System Approach Petroleum Production (ISAPP) The value of smartness 4 Reservoir permeability estimation from production data 6 Coupled modeling for reservoir application 8 Toward an integrated near-wellbore model 10 TNO conceptual framework for "E&P

  2. Hibernia fact sheet

    International Nuclear Information System (INIS)

    Anon.

    1994-01-01

    This fact sheet gives details of the Hibernia oil field including its location, discovery date, oil company's interests in the project, the recoverable reserves of the two reservoirs, the production system used, capital costs of the project, and overall targets for Canadian benefit. Significant dates for the Hibernia project are listed. (UK)

  3. Ethanol Basics (Fact Sheet)

    Energy Technology Data Exchange (ETDEWEB)

    2015-01-01

    Ethanol is a widely-used, domestically-produced renewable fuel made from corn and other plant materials. More than 96% of gasoline sold in the United States contains ethanol. Learn more about this alternative fuel in the Ethanol Basics Fact Sheet, produced by the U.S. Department of Energy's Clean Cities program.

  4. BONFIRE: benchmarking computers and computer networks

    OpenAIRE

    Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker

    2011-01-01

    The benchmarking concept is not new in the field of computing or computer networking. With “benchmarking tools”, one usually refers to a program or set of programs, used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Servi...

  5. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used in services across the National Health Service (NHS) using various benchmarking programs. Clinical photography services do not have a program in place and services have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  6. Benchmarking urban energy efficiency in the UK

    International Nuclear Information System (INIS)

    Keirstead, James

    2013-01-01

    This study asks what is the ‘best’ way to measure urban energy efficiency. There has been recent interest in identifying efficient cities so that best practices can be shared, a process known as benchmarking. Previous studies have used relatively simple metrics that provide limited insight into the complexity of urban energy efficiency and arguably fail to provide a ‘fair’ measure of urban performance. Using a data set of 198 urban UK local administrative units, three methods are compared: ratio measures, regression residuals, and data envelopment analysis. The results show that each method has its own strengths and weaknesses regarding the ease of interpretation, the ability to identify outliers, and the provision of consistent rankings. Efficient areas are diverse but are notably found in low-income areas of large conurbations such as London, whereas industrial areas are consistently ranked as inefficient. The results highlight the shortcomings of the underlying production-based energy accounts. Ideally urban energy efficiency benchmarks would be built on consumption-based accounts, but interim recommendations are made regarding the use of efficiency measures that improve upon current practice and facilitate wider conversations about what it means for a specific city to be energy-efficient within an interconnected economy. - Highlights: • Benchmarking is a potentially valuable method for improving urban energy performance. • Three different measures of urban energy efficiency are presented for UK cities. • Most efficient areas are diverse but include low-income areas of large conurbations. • Least efficient areas perform industrial activities of national importance. • Improve current practice with grouped per capita metrics or regression residuals

  7. Thermal reactor benchmark tests on JENDL-2

    International Nuclear Information System (INIS)

    Takano, Hideki; Tsuchihashi, Keichiro; Yamane, Tsuyoshi; Akino, Fujiyoshi; Ishiguro, Yukio; Ido, Masaru.

    1983-11-01

    A group constant library for the thermal reactor standard nuclear design code system SRAC was produced by using the evaluated nuclear data JENDL-2. Furthermore, the group constants for 235U were calculated also from ENDF/B-V. Thermal reactor benchmark calculations were performed using the produced group constant library. The selected benchmark cores are two water-moderated lattices (TRX-1 and 2), two heavy water-moderated cores (DCA and ETA-1), two graphite-moderated cores (SHE-8 and 13) and eight critical experiments for criticality safety. The effective multiplication factors and lattice cell parameters were calculated and compared with the experimental values. The results are summarized as follows. (1) Effective multiplication factors: The results by JENDL-2 are considerably improved in comparison with those by ENDF/B-IV. The best agreement is obtained by using JENDL-2 and ENDF/B-V (only 235U) data. (2) Lattice cell parameters: For rho-28 (the ratio of epithermal to thermal 238U captures) and C* (the ratio of 238U captures to 235U fissions), the values calculated by JENDL-2 are in good agreement with the experimental values. The delta-28 values (the ratio of 238U to 235U fissions) are overestimated, as found also for the fast reactor benchmarks. The rho-02 values (the ratio of epithermal to thermal 232Th captures) calculated by JENDL-2 or ENDF/B-IV are considerably underestimated. The functions of the SRAC system have continued to be extended according to the needs of its users. A brief description will be given, in Appendix B, of the extended parts of the SRAC system together with the input specification. (author)

  8. Regional restoration benchmarks for Acropora cervicornis

    Science.gov (United States)

    Schopmeyer, Stephanie A.; Lirman, Diego; Bartels, Erich; Gilliam, David S.; Goergen, Elizabeth A.; Griffin, Sean P.; Johnson, Meaghan E.; Lustic, Caitlin; Maxwell, Kerry; Walter, Cory S.

    2017-12-01

    Coral gardening plays an important role in the recovery of depleted populations of threatened Acropora cervicornis in the Caribbean. Over the past decade, high survival coupled with fast growth of in situ nursery corals have allowed practitioners to create healthy and genotypically diverse nursery stocks. Currently, thousands of corals are propagated and outplanted onto degraded reefs on a yearly basis, representing a substantial increase in the abundance, biomass, and overall footprint of A. cervicornis. Here, we combined an extensive dataset collected by restoration practitioners to document early (1-2 yr) restoration success metrics in Florida and Puerto Rico, USA. By reporting region-specific data on the impacts of fragment collection on donor colonies, survivorship and productivity of nursery corals, and survivorship and productivity of outplanted corals during normal conditions, we provide the basis for a stop-light indicator framework for new or existing restoration programs to evaluate their performance. We show that current restoration methods are very effective, that no excess damage is caused to donor colonies, and that once outplanted, corals behave just as wild colonies. We also provide science-based benchmarks that can be used by programs to evaluate successes and challenges of their efforts, and to make modifications where needed. We propose that up to 10% of the biomass can be collected from healthy, large A. cervicornis donor colonies for nursery propagation. We also propose the following benchmarks for the first year of activities for A. cervicornis restoration: (1) >75% live tissue cover on donor colonies; (2) >80% survivorship of nursery corals; and (3) >70% survivorship of outplanted corals. Finally, we report productivity means of 4.4 cm yr⁻¹ for nursery corals and 4.8 cm yr⁻¹ for outplants as a frame of reference for ranking performance within programs. Such benchmarks, and potential subsequent adaptive actions, are needed to fully assess the

  9. Calculation of the fifth atomic energy research dynamic benchmark with APROS

    International Nuclear Information System (INIS)

    Puska, Eija Karita; Kontio, Harri

    1998-01-01

    The hand-out presents the model used for calculation of the fifth atomic energy research dynamic benchmark with the APROS code. In the calculation of the fifth atomic energy research dynamic benchmark the three-dimensional neutronics model of APROS was used. The core was divided axially into 20 nodes according to the specifications of the benchmark, and each set of six identical fuel assemblies was placed into one one-dimensional thermal hydraulic channel. The five-equation thermal hydraulic model was used in the benchmark. The plant process and automation were described with a generic WWER-440 plant model created by IVO Power Engineering Ltd., Finland. (Author)

  10. How Benchmarking and Higher Education Came Together

    Science.gov (United States)

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  11. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking...

  12. 16 CFR 460.13 - Fact sheets.

    Science.gov (United States)

    2010-01-01

    ... INSULATION § 460.13 Fact sheets. If you are a manufacturer, you must give retailers and installers fact... uses. (b) A heading: “This is ____ insulation.” Fill in the blank with the type and form of your... compressed during installation.” (e) After the chart and any statement dealing with the specific type of...

  13. Laboratory Powder Metallurgy Makes Tough Aluminum Sheet

    Science.gov (United States)

    Royster, D. M.; Thomas, J. R.; Singleton, O. R.

    1993-01-01

    Aluminum alloy sheet exhibits high tensile and Kahn tear strengths. Rapid solidification of aluminum alloys in powder form and subsequent consolidation and fabrication processes used to tailor parts made of these alloys to satisfy such specific aerospace design requirements as high strength and toughness.

  14. beta-sheet preferences from first principles

    DEFF Research Database (Denmark)

    Rossmeisl, Jan; Bækgaard, Iben Sig Buur; Gregersen, Misha Marie

    2003-01-01

    The natural amino acids have different preferences for occurring in specific types of secondary protein structure. Simulations are performed on periodic model β-sheets of 14 different amino acids, at the level of density functional theory, employing the generalized gradient approximation. We find ...

  15. THE LANGUAGE LABORATORY--WORK SHEET.

    Science.gov (United States)

    CROSBIE, KEITH

    DESIGNED FOR TEACHERS AND ADMINISTRATORS, THIS WORK SHEET PROVIDES GENERAL AND SPECIFIC INFORMATION ABOUT THE PHILOSOPHY, TYPES, AND USES OF LANGUAGE LABORATORIES IN SECONDARY SCHOOL LANGUAGE PROGRAMS. THE FIRST SECTION DISCUSSES THE ADVANTAGES OF USING THE LABORATORY EFFECTIVELY TO REINFORCE AND CONSOLIDATE CLASSROOM LEARNING, AND MENTIONS SOME…

  16. Guideline for benchmarking thermal treatment systems for low-level mixed waste

    International Nuclear Information System (INIS)

    Hoffman, D.P.; Gibson, L.V. Jr.; Hermes, W.H.; Bastian, R.E.; Davis, W.T.

    1994-01-01

    A process for benchmarking low-level mixed waste (LLMW) treatment technologies has been developed. When used in conjunction with the identification and preparation of surrogate waste mixtures, and with defined quality assurance and quality control procedures, the benchmarking process will effectively streamline the selection of treatment technologies being considered by the US Department of Energy (DOE) for LLMW cleanup and management. Following the quantitative template provided in the benchmarking process will greatly increase the technical information available for the decision-making process. The additional technical information will remove a large part of the uncertainty in the selection of treatment technologies. It is anticipated that the use of the benchmarking process will minimize technology development costs and overall treatment costs. In addition, the benchmarking process will enhance development of the most promising LLMW treatment processes and aid in transferring the technology to the private sector. To instill inherent quality, the benchmarking process is based on defined criteria and a structured evaluation format, which are independent of any specific conventional treatment or emerging process technology. Five categories of benchmarking criteria have been developed for the evaluation: operation/design; personnel health and safety; economics; product quality; and environmental quality. This benchmarking document gives specific guidance on what information should be included and how it should be presented. A standard format for reporting is included in Appendix A and B of this document. Special considerations for LLMW are presented and included in each of the benchmarking categories

  17. Rubella - Fact Sheet for Parents

    Science.gov (United States)

    ... and 4 through 6 years Fact Sheet for Parents Color [2 pages] Español: Rubéola The best way ... according to the recommended schedule. Fact Sheets for Parents Diseases and the Vaccines that Prevent Them Chickenpox ...

  18. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) Top management marketing commitment; (2) An understanding of the fundamentals of marketing and business development; and (3) An aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  19. The development of code benchmarks

    International Nuclear Information System (INIS)

    Glass, R.E.

    1986-01-01

    Sandia National Laboratories has undertaken a code benchmarking effort to define a series of cask-like problems having both numerical solutions and experimental data. The development of the benchmarks includes: (1) model problem definition, (2) code intercomparison, and (3) experimental verification. The first two steps are complete, and a series of experiments is planned. The experiments will examine the elastic/plastic behavior of cylinders for both the end and side impacts resulting from a nine-meter drop. The cylinders will be made from stainless steel and aluminum to give a range of plastic deformations. This paper presents the results of analyses simulating the models' behavior using material properties for stainless steel and aluminum.

  20. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question we, in a total of 1728 benchmarking experiments, rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
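    The forward-selection approach favored in this abstract can be sketched in a few lines. The code below is an illustrative stand-in only: it greedily adds the feature that most reduces validation error of an ordinary least-squares fit (the paper used random forest models and MARS, not least squares), and the toy data and function names are invented for the example.

```python
import numpy as np

def forward_select(X, y, max_feats=3, seed=0):
    """Greedy forward selection: repeatedly add the feature that most
    reduces validation MSE of a least-squares fit (illustrative proxy
    for the more expensive model fits used in QSAR benchmarking)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    idx = rng.permutation(n)
    tr, va = idx[: n // 2], idx[n // 2 :]

    def val_mse(feats):
        # Fit on the training half, score on the validation half.
        A = X[tr][:, feats]
        coef, *_ = np.linalg.lstsq(A, y[tr], rcond=None)
        resid = y[va] - X[va][:, feats] @ coef
        return float(resid @ resid / len(va))

    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_feats:
        scores = {j: val_mse(selected + [j]) for j in remaining}
        best = min(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data: the response depends only on features 0 and 3 of 10.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=200)
print(forward_select(X, y, max_feats=2))  # selects [0, 3] on this toy data
```

    The same greedy loop works with any scorer, which is how a wrapper method like this plugs in around random forests in practice.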

  1. International handbook of evaluated criticality safety benchmark experiments

    International Nuclear Information System (INIS)

    2010-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Organization for Economic Cooperation and Development - Nuclear Energy Agency (OECD-NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculational techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirements and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross-section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span over 55,000 pages and contain 516 evaluations with benchmark specifications for 4,405 critical, near-critical, or subcritical configurations, 24 criticality alarm placement/shielding configurations with multiple dose points for each, and 200 configurations that have been categorized as fundamental physics measurements relevant to criticality safety applications. Experiments found unacceptable for use as criticality safety benchmark experiments are discussed in these evaluations; however, benchmark specifications are not derived for such experiments (in some cases models are provided in an appendix). Approximately 770 experimental configurations are categorized as unacceptable for use as criticality safety benchmark experiments. Additional evaluations are in progress and will be added in future revisions of the handbook.

  2. Journal Benchmarking for Strategic Publication Management and for Improving Journal Positioning in the World Ranking Systems

    Science.gov (United States)

    Moskovkin, Vladimir M.; Bocharova, Emilia A.; Balashova, Oksana V.

    2014-01-01

    Purpose: The purpose of this paper is to introduce and develop the methodology of journal benchmarking. Design/Methodology/Approach: The journal benchmarking method is understood to be an analytic procedure of continuous monitoring and comparing of the advance of specific journal(s) against that of competing journals in the same subject area,…

  3. Closed-loop neuromorphic benchmarks

    CSIR Research Space (South Africa)

    Stewart, TC

    2015-11-01

    Full Text Available. Closed-loop neuromorphic benchmarks. Terrence C. Stewart, Travis DeWolf, Chris Eliasmith (Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada) and Ashley Kleinhans (Mobile Intelligent Autonomous Systems group, Council for Scientific and Industrial Research, Pretoria, South Africa). Submitted to: Frontiers in Neuroscience. Correspondence: Terrence C. Stewart.

  4. Investible benchmarks & hedge fund liquidity

    OpenAIRE

    Freed, Marc S; McMillan, Ben

    2011-01-01

    A lack of commonly accepted benchmarks for hedge fund performance has permitted hedge fund managers to attribute to skill returns that may actually accrue from market risk factors and illiquidity. Recent innovations in hedge fund replication permit us to estimate the extent of this misattribution. Using an option-based model, we find evidence that the value of liquidity options that investors implicitly grant managers when they invest may account for part or even all of hedge fund returns. C...

  5. Results of the event sequence reliability benchmark exercise

    International Nuclear Information System (INIS)

    Silvestri, E.

    1990-01-01

    The Event Sequence Reliability Benchmark Exercise is the fourth in a series of benchmark exercises on reliability and risk assessment, with specific reference to nuclear power plant applications, and is the logical continuation of the previous benchmark exercises on System Analysis, Common Cause Failure, and Human Factors. The reference plant is the nuclear power plant at Grohnde, Federal Republic of Germany, a 1300 MW PWR plant of KWU design. The specific objective of the Exercise is to model, quantify, and analyze those event sequences, initiated by the occurrence of a loss of offsite power, that involve the steam generator feed. The general aim is to develop a segment of a risk assessment that includes all the specific aspects and models of quantification, such as common cause failure, human factors, and system analysis, developed in the previous reliability benchmark exercises, with the addition of two specific topics: dependences between homologous components belonging to different systems featuring in a given event sequence, and uncertainty quantification. The Exercise thus ends with an overall assessment of the state of the art in risk assessment and of the relative influence of quantification problems in a general risk assessment framework. The Exercise has been carried out in two phases, both requiring modelling and quantification, with the second phase adopting more restrictive rules and fixing certain common data, as emerged necessary from the first phase. Fourteen teams have participated in the Exercise, mostly from EEC countries, with one from Sweden and one from the USA. (author)

  6. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  7. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  8. The Medical Library Association Benchmarking Network: results.

    Science.gov (United States)

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C; Smith, Bernie Todd

    2006-04-01

    This article presents some limited results from the Medical Library Association (MLA) Benchmarking Network survey conducted in 2002. Other uses of the data are also presented. After several years of development and testing, a Web-based survey opened for data input in December 2001. Three hundred eighty-five MLA members entered data on the size of their institutions and the activities of their libraries. The data from 344 hospital libraries were edited and selected for reporting in aggregate tables and on an interactive site in the Members-Only area of MLANET. The data represent a 16% to 23% return rate and have a 95% confidence level. Specific questions can be answered using the reports. The data can be used to review internal processes, perform outcomes benchmarking, retest a hypothesis, refute previous survey findings, or develop library standards. The data can be compared with current surveys or used to look for trends through comparison with past surveys. The impact of this project on MLA will reach into areas of research and advocacy. The data will be useful in the everyday working of small health sciences libraries as well as provide concrete data on the current practices of health sciences libraries.

  9. The Medical Library Association Benchmarking Network: results*

    Science.gov (United States)

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C.; Smith, Bernie Todd

    2006-01-01

    Objective: This article presents some limited results from the Medical Library Association (MLA) Benchmarking Network survey conducted in 2002. Other uses of the data are also presented. Methods: After several years of development and testing, a Web-based survey opened for data input in December 2001. Three hundred eighty-five MLA members entered data on the size of their institutions and the activities of their libraries. The data from 344 hospital libraries were edited and selected for reporting in aggregate tables and on an interactive site in the Members-Only area of MLANET. The data represent a 16% to 23% return rate and have a 95% confidence level. Results: Specific questions can be answered using the reports. The data can be used to review internal processes, perform outcomes benchmarking, retest a hypothesis, refute previous survey findings, or develop library standards. The data can be compared with current surveys or used to look for trends through comparison with past surveys. Conclusions: The impact of this project on MLA will reach into areas of research and advocacy. The data will be useful in the everyday working of small health sciences libraries as well as provide concrete data on the current practices of health sciences libraries. PMID:16636703

  10. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  11. HS06 benchmark for an ARM server

    International Nuclear Information System (INIS)

    Kluth, Stefan

    2014-01-01

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  12. Simplified two and three dimensional HTTR benchmark problems

    International Nuclear Information System (INIS)

    Zhang Zhan; Rahnema, Farzad; Zhang Dingkang; Pounders, Justin M.; Ougouag, Abderrafi M.

    2011-01-01

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of whole-core configurations. In this paper we have created two- and three-dimensional numerical benchmark problems typical of high temperature gas cooled prismatic cores. Additionally, single-cell and single-block benchmark problems are included. These problems were derived from the HTTR start-up experiment. Since the primary utility of the benchmark problems is in code-to-code verification, minor details regarding geometry and material specification of the original experiment have been simplified while retaining the heterogeneity and the major physics properties of the core from a neutronics viewpoint. A six-group material (macroscopic) cross-section library has been generated for the benchmark problems using the lattice depletion code HELIOS. Using this library, Monte Carlo solutions are presented for three configurations (all-rods-in, partially-controlled, and all-rods-out) for both the 2D and 3D problems. These solutions include the core eigenvalues, the block (assembly) averaged fission densities, local peaking factors, the absorption densities in the burnable poison and control rods, and the pin fission density distribution for selected blocks. Also included are the solutions for the single-cell and single-block problems.
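    The core eigenvalues reported by such benchmarks are solutions of the standard k-eigenvalue problem. As a sketch in conventional multigroup diffusion notation (assumed here for illustration, not taken from the benchmark specification), with G = 6 groups as in the HELIOS library above:

```latex
-\nabla \cdot D_g \nabla \phi_g + \Sigma_{r,g}\,\phi_g
  = \sum_{g' \neq g} \Sigma_{s,g' \to g}\,\phi_{g'}
  + \frac{\chi_g}{k_{\mathrm{eff}}} \sum_{g'=1}^{G} \nu\Sigma_{f,g'}\,\phi_{g'},
  \qquad g = 1,\dots,G
```

    Here D_g is the group diffusion coefficient, Σ_r,g the removal cross section, Σ_s,g'→g the group-transfer scattering cross section, χ_g the fission spectrum, and νΣ_f,g' the fission production cross section; k-eff is the largest eigenvalue, and the fission densities and peaking factors quoted in the benchmark derive from the corresponding flux φ_g.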

  13. IT-benchmarking of clinical workflows: concept, implementation, and evaluation.

    Science.gov (United States)

    Thye, Johannes; Straede, Matthias-Christopher; Liebe, Jan-David; Hübner, Ursula

    2014-01-01

    Due to the emerging evidence of health IT as opportunity and risk for clinical workflows, health IT must undergo a continuous measurement of its efficacy and efficiency. IT-benchmarks are a proven means for providing this information. The aim of this study was to enhance the methodology of an existing benchmarking procedure by including, in particular, new indicators of clinical workflows and by proposing new types of visualisation. Drawing on the concept of information logistics, we propose four workflow descriptors that were applied to four clinical processes. General and specific indicators were derived from these descriptors and processes. 199 chief information officers (CIOs) took part in the benchmarking. Their hospitals were assigned to reference groups of similar size and ownership, drawn from a total of 259 hospitals. Stepwise and comprehensive feedback was given to the CIOs. Most participants who evaluated the benchmark rated the procedure as very good, good, or rather good (98.4%). Benchmark information was used by CIOs for getting a general overview, advancing IT, preparing negotiations with board members, and arguing for a new IT project.

  14. Benchmark calculations for VENUS-2 MOX -fueled reactor dosimetry

    International Nuclear Information System (INIS)

    Kim, Jong Kung; Kim, Hong Chul; Shin, Chang Ho; Han, Chi Young; Na, Byung Chan

    2004-01-01

    As part of a Nuclear Energy Agency (NEA) project, the benchmark for dosimetry calculations of the VENUS-2 MOX-fueled reactor was pursued. The goal of this benchmark is to test current state-of-the-art computational methods for calculating neutron flux to reactor components against the measured data of the VENUS-2 MOX-fueled critical experiments. The measured data to be used for this benchmark are the equivalent fission fluxes, which are the reaction rates divided by the U-235 fission-spectrum-averaged cross section of the corresponding dosimeter. The present benchmark is, therefore, defined to calculate reaction rates and corresponding equivalent fission fluxes measured on the core mid-plane at specific positions outside the core of the VENUS-2 MOX-fueled reactor. This is a follow-up to the previously completed UO2-fueled VENUS-1 two-dimensional and VENUS-3 three-dimensional exercises. The use of MOX fuel in LWRs presents different neutron characteristics, and this is the main interest of the current benchmark compared with the previous ones.
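    The equivalent fission flux defined above can be written out explicitly. The symbols below are assumed notation for the stated definition, not reproduced from the benchmark document:

```latex
\phi_{\mathrm{eq}} = \frac{R}{\bar{\sigma}},
\qquad
\bar{\sigma} = \frac{\int_0^{\infty} \sigma(E)\,\chi_{235}(E)\,dE}
                    {\int_0^{\infty} \chi_{235}(E)\,dE}
```

    where R is the measured dosimeter reaction rate, σ(E) the dosimeter's energy-dependent reaction cross section, and χ_235(E) the U-235 fission spectrum; dividing by the spectrum-averaged cross section σ̄ puts dosimeters with different reactions on a common flux scale.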

  15. Benchmarking motion planning algorithms for bin-picking applications

    DEFF Research Database (Denmark)

    Iversen, Thomas Fridolin; Ellekilde, Lars-Peter

    2017-01-01

    Purpose: For robot motion planning there exists a large number of different algorithms, each appropriate for a certain domain, and the right choice of planner depends on the specific use case. The purpose of this paper is to consider the application of bin picking and benchmark a set of motion planning algorithms to identify which are most suited in the given context. Design/methodology/approach: The paper presents a selection of motion planning algorithms and defines benchmarks based on three different bin-picking scenarios. The evaluation is done based on a fixed set of tasks, which are planned and executed on a real and a simulated robot. Findings: The benchmarking shows a clear difference between the planners and generally indicates that algorithms integrating optimization, despite longer planning time, perform better due to a faster execution. Originality/value: The originality of this work lies...

  16. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement of the original benchmark book, which was first published in February 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.

  17. Benchmarks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The National Flood Hazard Layer (NFHL) data incorporates all Digital Flood Insurance Rate Map(DFIRM) databases published by FEMA, and any Letters Of Map Revision...

  18. Film sheet cassette

    International Nuclear Information System (INIS)

    1981-01-01

    A novel film sheet cassette is described for handling CAT photographic films under daylight conditions and facilitating their imaging. A detailed description of the design and operation of the cassette is given, together with appropriate illustrations. The resulting cassette is a low-cost unit which is easily constructed and yet provides a sure light-tight seal for the interior contents of the cassette. The individual resilient fingers on the light-trap permit the ready removal of the slide plate for taking pictures. The stippled, non-electrostatic surface of the pressure plate ensures an air layer and free slidability of the film for insertion and withdrawal of the film sheet. The advantage of the daylight system is that a darkroom need not be used for inserting and removing the film in and out of the cassette, resulting in a considerable time saving. (U.K.)

  19. Clean Cities Fact Sheet

    Energy Technology Data Exchange (ETDEWEB)

    2004-01-01

    This fact sheet explains the Clean Cities Program and provides contact information for all coalitions and regional offices. It answers key questions such as: What is the Clean Cities Program? What are alternative fuels? How does the Clean Cities Program work? What sort of assistance does Clean Cities offer? What has Clean Cities accomplished? What is Clean Cities International? and Where can I find more information?

  20. Information sheets on energy

    International Nuclear Information System (INIS)

    2004-01-01

    These sheets, presented by the CEA, provide information in the energy domain on the following topics: the world energy demand and energy policy in France and in Europe, the role of nuclear power in the energy mix of the future, greenhouse gas emissions and the fight against the greenhouse effect, the cost of carbon dioxide storage, and the hydrogen economy. (A.L.B.)

  1. Biomolecular Science (Fact Sheet)

    Energy Technology Data Exchange (ETDEWEB)

    2012-04-01

    A brief fact sheet about NREL Photobiology and Biomolecular Science. The research goal of NREL's Biomolecular Science is to enable cost-competitive advanced lignocellulosic biofuels production by understanding the science critical for overcoming biomass recalcitrance and developing new product and product intermediate pathways. NREL's Photobiology focuses on understanding the capture of solar energy in photosynthetic systems and its use in converting carbon dioxide and water directly into hydrogen and advanced biofuels.

  2. ICSBEP-2007, International Criticality Safety Benchmark Experiment Handbook

    International Nuclear Information System (INIS)

    Blair Briggs, J.

    2007-01-01

    1 - Description: The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) is now an official activity of the Organization for Economic Cooperation and Development - Nuclear Energy Agency (OECD-NEA). This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculational techniques used to establish minimum subcritical margins for operations with fissile material. The example calculations presented do not constitute a validation of the codes or cross-section data. The work of the ICSBEP is documented as an International Handbook of Evaluated Criticality Safety Benchmark Experiments. Currently, the handbook spans over 42,000 pages and contains 464 evaluations representing 4,092 critical, near-critical, or subcritical configurations, 21 criticality alarm placement/shielding configurations with multiple dose points for each, and 46 configurations that have been categorized as fundamental physics measurements relevant to criticality safety applications. The handbook is intended for use by criticality safety analysts to perform necessary validations of their calculational techniques and is expected to be a valuable tool for decades to come. The ICSBEP Handbook is available on DVD; a DVD may be requested by completing the DVD Request Form on the internet. Access to the handbook on the internet requires a password, which may be requested by completing the Password Request Form. The Web address is: http://icsbep.inel.gov/handbook.shtml 2 - Method of solution: Experiments that are found

  3. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    International Nuclear Information System (INIS)

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-01-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined and discussed. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR-06 are highlighted, and the future of the two projects is discussed.

  4. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    Energy Technology Data Exchange (ETDEWEB)

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-09-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined and discussed. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR’06 are highlighted, and the future of the two projects is discussed.

  5. Sheet pinch devices

    International Nuclear Information System (INIS)

    Anderson, O.A.; Baker, W.R.; Ise, J. Jr.; Kunkel, W.B.; Pyle, R.V.; Stone, J.M.

    1958-01-01

    Three types of sheet-like discharges are being studied at Berkeley. The first of these, which has been given the name 'Triax', consists of a cylindrical plasma sleeve contained between two coaxial conducting cylinders. A theoretical analysis of the stability of the cylindrical sheet plasma predicts the existence of a 'sausage-mode' instability which is, however, expected to grow more slowly than in the case of the unstabilized linear pinch (by the ratio of the radial dimensions). The second pinch device employs a disk-shaped discharge with radial current guided between flat metal plates, this configuration being identical to that of the flat hydromagnetic capacitor without external magnetic field. A significant feature of these configurations is the absence of a plasma edge, i.e., there are no regions of sharply curved magnetic field lines anywhere in these discharges. The importance of this fact for stability is not yet fully investigated theoretically. As a third configuration, a rectangular, flat pinch tube has been constructed, and the behaviour of a flat plasma sheet with edges is being studied experimentally.

  6. Criticality Benchmark Results Using Various MCNP Data Libraries

    International Nuclear Information System (INIS)

    Frankle, Stephanie C.

    1999-01-01

A suite of 86 criticality benchmarks has recently been implemented in MCNP (trademark) as part of the nuclear data validation effort. These benchmarks have been run using two sets of MCNP continuous-energy neutron data: ENDF/B-VI based data through Release 2 (ENDF60) and the ENDF/B-V based data. New ENDF/B-VI evaluations were completed for a number of important nuclides such as the isotopes of H, Be, C, N, O, Fe, Ni, 235,238U, 237Np, and 239,240Pu. When examining the results of these calculations for the five major categories of 233U, intermediate-enriched 235U (IEU), highly enriched 235U (HEU), 239Pu, and mixed-metal assemblies, we find the following: (1) the new evaluations for 9Be, 12C, and 14N show no net effect on k-eff; (2) there is a consistent decrease in k-eff for all of the solution assemblies for ENDF/B-VI due to 1H and 16O, moving k-eff further from the benchmark value for uranium solutions and closer to the benchmark value for plutonium solutions; (3) k-eff decreased for the ENDF/B-VI Fe isotopic data, moving the calculated k-eff further from the benchmark value; (4) k-eff decreased for the ENDF/B-VI Ni isotopic data, moving the calculated k-eff closer to the benchmark value; (5) the W data remained unchanged and tended to calculate slightly higher than the benchmark values; (6) for metal uranium systems, the ENDF/B-VI data for 235U tend to decrease k-eff while the 238U data tend to increase it; the net result depends on the energy spectrum and material specifications of the particular assembly; (7) for more intermediate-energy systems, the changes in the 235,238U evaluations tend to increase k-eff; for the mixed-graphite, normal-uranium-reflected assembly, a large increase in k-eff due to changes in the 238U evaluation moved the calculated k-eff much closer to the benchmark value;
(8) there is little change in k-eff for the uranium solutions due to the new 235,238U evaluations; and (9) there is little change in k-eff
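Library-to-library comparisons of this kind are usually summarized as calculated-over-benchmark (C/E) ratios and eigenvalue differences in pcm (1 pcm = 1e-5 in k-eff). A minimal sketch of that bookkeeping, with invented benchmark names and k-eff values (the helper `ce_summary` is illustrative, not part of MCNP):

```python
# Hypothetical sketch: summarizing a criticality benchmark suite by C/E
# ratios and k-eff deviations in pcm. All names and numbers are invented.

def ce_summary(results):
    """results: list of (benchmark_name, k_calc, k_benchmark) tuples.
    Returns the mean C/E ratio and the largest absolute deviation in pcm."""
    ratios = [k_calc / k_bench for _, k_calc, k_bench in results]
    pcm = [1e5 * (k_calc - k_bench) for _, k_calc, k_bench in results]
    mean_ce = sum(ratios) / len(ratios)
    worst = max(pcm, key=abs)
    return mean_ce, worst

# Illustrative data: two solution benchmarks run with one library
endf_results = [("case1", 1.0012, 1.0000), ("case2", 0.9987, 1.0000)]
mean_ce, worst = ce_summary(endf_results)
```

A full validation effort would carry measurement and Monte Carlo uncertainties alongside each ratio; those are omitted here for brevity.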

  7. Bessel light sheet structured illumination microscopy

    Science.gov (United States)

    Noshirvani Allahabadi, Golchehr

Biomedical researchers using animals to model disease and treatment need fast, deep, noninvasive, and inexpensive multi-channel imaging methods. Traditional fluorescence microscopy meets those criteria only to an extent. Specifically, two-photon and confocal microscopy, the two most commonly used methods, are limited in penetration depth, cost, resolution, and field of view; in addition, two-photon microscopy has limited multi-channel imaging ability. Light sheet microscopy, a fast-developing 3D fluorescence imaging method, offers attractive advantages over traditional two-photon and confocal microscopy. It is much better suited to in vivo 3D time-lapsed imaging, owing to its selective illumination of a tissue layer, superior speed, low light exposure, high penetration depth, and low levels of photobleaching. However, standard light sheet microscopy using Gaussian beam excitation has two main disadvantages: 1) the field of view (FOV) is limited by the depth of focus of the Gaussian beam; 2) light-sheet images can be degraded by scattering, which limits the penetration of the excitation beam and blurs emission images in deep tissue layers. While two-sided sheet illumination, which doubles the field of view by illuminating the sample from opposite sides, offers a potential solution, it adds complexity and cost to the imaging system. We investigate a new technique to address these limitations: Bessel light sheet microscopy in combination with incoherent nonlinear Structured Illumination Microscopy (SIM). Results demonstrate that, at visible wavelengths, Bessel excitation penetrates up to 250 microns deep into scattering media with single-sided illumination. The Bessel light sheet microscope achieves confocal-level resolution: 0.3 micron laterally and 1 micron axially. Incoherent nonlinear SIM further reduces the diffuse background in Bessel light sheet images, resulting in

  8. Exploring Deliberate Practice & the Use of Skill Sheets in the Collegiate Leadership Competition

    Science.gov (United States)

    Allen, Scott J.; Jenkins, Daniel M.; Krizanovic, Bela

    2018-01-01

    Little has been written about the use of skill sheets in leadership education and this paper demonstrates how they have been implemented in one specific context. Used in a number of domains (e.g., karate, cardiopulmonary resuscitation) skill sheets are checklists or rubrics that record skill performance. The use of skill sheets in leadership…

  9. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

Benchmarking was very interesting and provided a wealth of information. (1) We did see potential solutions to some of our "top 10" issues. (2) We have an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes; (2) many of the organizations were interested in future collaboration: sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) desires to participate in our training and to provide feedback on procedures; (2) they welcomed the opportunity to provide feedback on working with NASA.

  10. NEACRP thermal fission product benchmark

    International Nuclear Information System (INIS)

    Halsall, M.J.; Taubman, C.J.

    1989-09-01

    The objective of the thermal fission product benchmark was to compare the range of fission product data in use at the present time. A simple homogeneous problem was set with 200 atoms H/1 atom U235, to be burnt up to 1000 days and then decay for 1000 days. The problem was repeated with 200 atoms H/1 atom Pu239, 20 atoms H/1 atom U235 and 20 atoms H/1 atom Pu239. There were ten participants and the submissions received are detailed in this report. (author)

  11. Benchmarking routine psychological services: a discussion of challenges and methods.

    Science.gov (United States)

    Delgadillo, Jaime; McMillan, Dean; Leach, Chris; Lucock, Mike; Gilbody, Simon; Wood, Nick

    2014-01-01

    Policy developments in recent years have led to important changes in the level of access to evidence-based psychological treatments. Several methods have been used to investigate the effectiveness of these treatments in routine care, with different approaches to outcome definition and data analysis. To present a review of challenges and methods for the evaluation of evidence-based treatments delivered in routine mental healthcare. This is followed by a case example of a benchmarking method applied in primary care. High, average and poor performance benchmarks were calculated through a meta-analysis of published data from services working under the Improving Access to Psychological Therapies (IAPT) Programme in England. Pre-post treatment effect sizes (ES) and confidence intervals were estimated to illustrate a benchmarking method enabling services to evaluate routine clinical outcomes. High, average and poor performance ES for routine IAPT services were estimated to be 0.91, 0.73 and 0.46 for depression (using PHQ-9) and 1.02, 0.78 and 0.52 for anxiety (using GAD-7). Data from one specific IAPT service exemplify how to evaluate and contextualize routine clinical performance against these benchmarks. The main contribution of this report is to summarize key recommendations for the selection of an adequate set of psychometric measures, the operational definition of outcomes, and the statistical evaluation of clinical performance. A benchmarking method is also presented, which may enable a robust evaluation of clinical performance against national benchmarks. Some limitations concerned significant heterogeneity among data sources, and wide variations in ES and data completeness.
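The benchmarking step described above amounts to computing a service's pre-post effect size and placing it against the published thresholds (e.g. 0.91 / 0.46 for depression on the PHQ-9). A minimal sketch with invented patient scores; the pre-treatment-SD denominator used here is one common convention, not necessarily the exact one used in the paper:

```python
# Illustrative sketch: pre-post effect size benchmarking for routine outcomes.
# Patient scores are invented; thresholds are the PHQ-9 benchmarks quoted above.
from statistics import mean, stdev

def pre_post_es(pre, post):
    """Uncontrolled pre-post effect size, using the pre-treatment SD
    as the denominator (pooled-SD variants also exist)."""
    return (mean(pre) - mean(post)) / stdev(pre)

def classify(es, high=0.91, poor=0.46):
    """Place a service's effect size against high/poor performance benchmarks."""
    if es >= high:
        return "high"
    if es <= poor:
        return "poor"
    return "average"

pre = [18, 15, 20, 17, 16, 19]    # PHQ-9 scores at intake (invented)
post = [9, 8, 12, 10, 7, 11]      # PHQ-9 scores at discharge (invented)
es = pre_post_es(pre, post)
```

In practice the comparison should also report confidence intervals and account for data completeness, as the abstract cautions.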

  12. Dense sheet Z-pinches

    International Nuclear Information System (INIS)

    Tetsu, Miyamoto

    1999-01-01

The steady state and quasi-steady processes of infinite- and finite-width sheet z-pinches are studied. The relations corresponding to the Bennett relation and the Pease-Braginskii current of cylindrical fiber z-pinches depend on a geometrical factor in the sheet z-pinches. The finite-width sheet z-pinch is approximated by a segment of an infinite-width sheet z-pinch, if it is wide enough, and corresponds to a number (width/thickness) of fiber z-pinch plasmas whose diameter equals the sheet thickness. If the sheet current equals this number times the fiber current, the plasma created in the sheet z-pinch is as dense as in the fiber z-pinch. The total energy of plasma and magnetic field per unit mass is approximately equal in both pinches. Quasi-static transient processes differ in several aspects from those of the fiber z-pinch: no radiation collapse occurs in the sheet z-pinch, and stability is improved. The fusion criteria and the experimental arrangements needed to produce sheet z-pinches are also discussed. (author)
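For the cylindrical fiber z-pinch that the sheet geometry is compared against, the Bennett relation balances magnetic pinch force against plasma pressure: mu0*I^2/(8*pi) = N*k_B*(Te + Ti), with N the line density. A minimal numerical sketch of that relation (one common form, assuming singly charged ions; the abstract's sheet-pinch analogue carries an extra geometrical factor not reproduced here):

```python
# Sketch of the cylindrical-pinch Bennett relation:
#   mu0 * I^2 / (8*pi) = N * k_B * (Te + Ti)
# where N is the line density (particles per metre). Example values invented.
import math

MU0 = 4e-7 * math.pi        # vacuum permeability, H/m
KB = 1.380649e-23           # Boltzmann constant, J/K

def bennett_current(line_density, te_kelvin, ti_kelvin):
    """Equilibrium current (A) for a cylindrical z-pinch of given line
    density and electron/ion temperatures."""
    return math.sqrt(8 * math.pi * line_density * KB
                     * (te_kelvin + ti_kelvin) / MU0)

# Example: N = 1e19 m^-1 at Te = Ti ~ 1 keV (~1.16e7 K each)
i_b = bennett_current(1e19, 1.16e7, 1.16e7)   # a few hundred kA
```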

  13. Benchmarking: A tool for conducting self-assessment

    International Nuclear Information System (INIS)

    Perkey, D.N.

    1992-01-01

    There is more information on nuclear plant performance available than can reasonably be assimilated and used effectively by plant management or personnel responsible for self-assessment. Also, it is becoming increasingly more important that an effective self-assessment program uses internal parameters not only to evaluate performance, but to incorporate lessons learned from other plants. Because of the quantity of information available, it is important to focus efforts and resources in areas where safety or performance is a concern and where the most improvement can be realized. One of the techniques that is being used to effectively accomplish this is benchmarking. Benchmarking involves the use of various sources of information to self-identify a plant's strengths and weaknesses, identify which plants are strong performers in specific areas, evaluate what makes a top performer, and incorporate the success factors into existing programs. The formality with which benchmarking is being implemented varies widely depending on the objective. It can be as simple as looking at a single indicator, such as systematic assessment of licensee performance (SALP) in engineering and technical support, then surveying the top performers with specific questions. However, a more comprehensive approach may include the performance of a detailed benchmarking study. Both operational and economic indicators may be used in this type of evaluation. Some of the indicators that may be considered and the limitations of each are discussed

  14. Reevaluation of the Jezebel Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-03-10

    Every nuclear engineering student is familiar with Jezebel, the homogeneous bare sphere of plutonium first assembled at Los Alamos in 1954-1955. The actual Jezebel assembly was neither homogeneous, nor bare, nor spherical; nor was it singular – there were hundreds of Jezebel configurations assembled. The Jezebel benchmark has been reevaluated for the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Logbooks, original drawings, mass accountability statements, internal reports, and published reports have been used to model four actual three-dimensional Jezebel assemblies with high fidelity. Because the documentation available today is often inconsistent, three major assumptions were made regarding plutonium part masses and dimensions. The first was that the assembly masses given in Los Alamos report LA-4208 (1969) were correct, and the second was that the original drawing dimension for the polar height of a certain major part was correct. The third assumption was that a change notice indicated on the original drawing was not actually implemented. This talk will describe these assumptions, the alternatives, and the implications. Since the publication of the 2013 ICSBEP Handbook, the actual masses of the major components have turned up. Our assumption regarding the assembly masses was proven correct, but we had the mass distribution incorrect. Work to incorporate the new information is ongoing, and this talk will describe the latest assessment.

  15. Pynamic: the Python Dynamic Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file I/O and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.
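Pynamic itself generates and loads genuine shared libraries at supercomputer scale. As a toy illustration of the same idea, one can generate many small Python modules and time their import; this miniature sketch is not Pynamic, just an assumption-laden stand-in for the dynamic-loading stress it emulates:

```python
# Toy sketch in the spirit of Pynamic: generate many small modules and time
# how long it takes to import them all. Module names and counts are arbitrary.
import importlib
import sys
import tempfile
import time
from pathlib import Path

def generate_modules(directory, count):
    """Write `count` trivial modules (gen_mod_0.py, gen_mod_1.py, ...)."""
    for i in range(count):
        (Path(directory) / f"gen_mod_{i}.py").write_text(
            f"def answer():\n    return {i}\n"
        )

def import_all(directory, count):
    """Import every generated module, returning the modules and elapsed time."""
    sys.path.insert(0, directory)
    try:
        start = time.perf_counter()
        mods = [importlib.import_module(f"gen_mod_{i}") for i in range(count)]
        elapsed = time.perf_counter() - start
    finally:
        sys.path.remove(directory)
    return mods, elapsed

with tempfile.TemporaryDirectory() as tmp:
    generate_modules(tmp, 50)
    mods, elapsed = import_all(tmp, 50)
```

The real benchmark builds compiled shared objects and exercises the loader and debugger at scales of hundreds of libraries, which a pure-Python sketch cannot reproduce.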

  16. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in the literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset, comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  17. Perforation of metal sheets

    DEFF Research Database (Denmark)

    Steenstrup, Jens Erik

The main purposes of this project are: 1. development of a dynamic model for the piercing and perforation process; 2. analyses of the main parameters; 3. establishing demands for process improvements; 4. expansion of the existing parameter limits. The literature survey describes the process influence...... The simulation is focused on the sheet deformation; however, the effect on the tool and press is included. The process model is based on upper bound analysis in order to predict the force progress, hole characteristics, etc. Parameter analyses are divided into two groups, simulation and experimental tests......

  18. Multi-objective optimization under uncertainty for sheet metal forming

    Directory of Open Access Journals (Sweden)

    Lafon Pascal

    2016-01-01

Full Text Available Aleatory uncertainties in material properties, blank thickness, and friction conditions are inherent and irreducible variabilities in sheet metal forming. Optimal design configurations obtained by conventional design optimization methods are not always able to meet the desired targets due to the effect of these uncertainties. This paper proposes a multi-objective robust design optimization that aims to tackle this problem. Results obtained on a U-shaped draw-bending benchmark show that the spring-back effect can be controlled by optimizing the process parameters.

  19. Analysis of a molten salt reactor benchmark

    International Nuclear Information System (INIS)

    Ghosh, Biplab; Bajpai, Anil; Degweker, S.B.

    2013-01-01

    This paper discusses results of our studies of an IAEA molten salt reactor (MSR) benchmark. The benchmark, proposed by Japan, involves burnup calculations of a single lattice cell of a MSR for burning plutonium and other minor actinides. We have analyzed this cell with in-house developed burnup codes BURNTRAN and McBURN. This paper also presents a comparison of the results of our codes and those obtained by the proposers of the benchmark. (author)

  20. Benchmarking i eksternt regnskab og revision

    DEFF Research Database (Denmark)

    Thinggaard, Frank; Kiertzner, Lars

    2001-01-01

continuously in a benchmarking process. This chapter broadly examines where the benchmarking concept can, with some justification, be linked to external financial reporting and auditing. Section 7.1 deals with the external annual report, while Section 7.2 takes up the area of auditing. The final section of the chapter summarizes...... the considerations on benchmarking in connection with both areas....

  1. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  2. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Mark; Brown, Jed; Shalf, John; Straalen, Brian Van; Strohmaier, Erich; Williams, Sam

    2014-05-05

This document provides an overview of the benchmark, HPGMG, for ranking large scale general purpose computers for use on the Top500 list [8]. We provide a rationale for the need for a replacement for the current metric, HPL; some background on the Top500 list and the challenges of developing such a metric; a discussion of our design philosophy and methodology; and an overview of the specification of the benchmark. The primary documentation, with maintained details on the specification, can be found at hpgmg.org, and the wiki and the benchmark code itself can be found in the repository https://bitbucket.org/hpgmg/hpgmg.

  3. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective for the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...... the blade solicitation and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts: this shape is considered as a fixed parameter in the benchmarking...

  4. HPC Benchmark Suite NMx, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  5. High Energy Physics (HEP) benchmark program

    International Nuclear Information System (INIS)

    Yasu, Yoshiji; Ichii, Shingo; Yashiro, Shigeo; Hirayama, Hideo; Kokufuda, Akihiro; Suzuki, Eishin.

    1993-01-01

High Energy Physics (HEP) benchmark programs are indispensable tools for selecting a suitable computer for an HEP application system. Industry-standard benchmark programs cannot be used for this kind of particular selection. The CERN and SSC benchmark suites are well-known HEP benchmark programs for this purpose. The CERN suite includes event reconstruction and event generator programs, while the SSC suite includes event generators. In this paper, we find that the results from these two suites are not consistent, and that the result from the industry benchmark does not agree with either of them. We also describe a comparison of benchmark results from the EGS4 Monte Carlo simulation program with those from the two HEP benchmark suites, and find that the EGS4 result is not consistent with either. The industry-standard SPECmark values on various computer systems are not consistent with the EGS4 results either. Because of these inconsistencies, we point out the necessity of standardizing HEP benchmark suites. An EGS4 benchmark suite should also be developed for users of applications in fields such as medical science, nuclear power plants, nuclear physics, and high energy physics. (author)

  6. Establishing benchmarks and metrics for utilization management.

    Science.gov (United States)

    Melanson, Stacy E F

    2014-01-01

    The changing environment of healthcare reimbursement is rapidly leading to a renewed appreciation of the importance of utilization management in the clinical laboratory. The process of benchmarking of laboratory operations is well established for comparing organizational performance to other hospitals (peers) and for trending data over time through internal benchmarks. However, there are relatively few resources available to assist organizations in benchmarking for laboratory utilization management. This article will review the topic of laboratory benchmarking with a focus on the available literature and services to assist in managing physician requests for laboratory testing. © 2013.

  7. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic...... controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based...... on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision...

  8. Experiments on sheet metal shearing

    OpenAIRE

    Gustafsson, Emil

    2013-01-01

    Within the sheet metal industry, different shear cutting technologies are commonly used in several processing steps, e.g. in cut to length lines, slitting lines, end cropping etc. Shearing has speed and cost advantages over competing cutting methods like laser and plasma cutting, but involves large forces on the equipment and large strains in the sheet material.Numerical models to predict forces and sheared edge geometry for different sheet metal grades and different shear parameter set-ups a...

  9. Thermal Transport Properties of Dry Spun Carbon Nanotube Sheets

    Directory of Open Access Journals (Sweden)

    Heath E. Misak

    2016-01-01

Full Text Available The thermal properties of carbon nanotube (CNT) sheet were explored and compared to copper in this study. The CNT-sheet was made by dry spinning CNTs into a nonwoven sheet. This nonwoven CNT-sheet has anisotropic properties in the in-plane and out-of-plane directions: the in-plane direction has much higher thermal conductivity than the out-of-plane direction. The in-plane thermal conductivity was found by thermal flash analysis, and the out-of-plane thermal conductivity by a hot disk method. The thermal radiative properties were examined and compared to thermal transport theory. The CNT-sheet was heated in vacuum and its temperature was measured with an IR camera. The heat flux of the CNT-sheet was compared to that of copper, and it was found that the CNT-sheet has significantly higher specific heat transfer properties than copper. CNT-sheet is a potential candidate to replace copper in thermal transport applications where weight is a primary concern, such as in the automobile, aircraft, and space industries.
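Flash analysis of the kind mentioned above is conventionally reduced with the Parker relation, alpha = 0.1388 * L^2 / t_half (L the sample thickness, t_half the half-rise time of the rear-face temperature), with conductivity then following as k = alpha * rho * cp. A minimal sketch; the sample values below are illustrative, not measured CNT-sheet data:

```python
# Sketch of the classic Parker flash-method reduction. Input values invented.

def flash_diffusivity(thickness_m, t_half_s):
    """Thermal diffusivity (m^2/s) from the Parker relation
    alpha = 0.1388 * L^2 / t_half."""
    return 0.1388 * thickness_m ** 2 / t_half_s

def conductivity(alpha, density, specific_heat):
    """Thermal conductivity (W/m.K) from diffusivity, density (kg/m^3),
    and specific heat (J/kg.K)."""
    return alpha * density * specific_heat

alpha = flash_diffusivity(1e-3, 0.05)   # 1 mm sample, 50 ms half-rise time
k = conductivity(alpha, 8960, 385)      # copper-like rho and cp, for scale
```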

  10. Benchmarking of nuclear economics tools

    International Nuclear Information System (INIS)

    Moore, Megan; Korinny, Andriy; Shropshire, David; Sadhankar, Ramesh

    2017-01-01

    Highlights: • INPRO and GIF economic tools exhibited good alignment in total capital cost estimation. • Subtle discrepancies in the cost result from differences in financing and the fuel cycle assumptions. • A common set of assumptions was found to reduce the discrepancies to 1% or less. • Opportunities for harmonisation of economic tools exists. - Abstract: Benchmarking of the economics methodologies developed by the Generation IV International Forum (GIF) and the International Atomic Energy Agency’s International Project on Innovative Nuclear Reactors and Fuel Cycles (INPRO), was performed for three Generation IV nuclear energy systems. The Economic Modeling Working Group of GIF developed an Excel based spreadsheet package, G4ECONS (Generation 4 Excel-based Calculation Of Nuclear Systems), to calculate the total capital investment cost (TCIC) and the levelised unit energy cost (LUEC). G4ECONS is sufficiently generic in the sense that it can accept the types of projected input, performance and cost data that are expected to become available for Generation IV systems through various development phases and that it can model both open and closed fuel cycles. The Nuclear Energy System Assessment (NESA) Economic Support Tool (NEST) was developed to enable an economic analysis using the INPRO methodology to easily calculate outputs including the TCIC, LUEC and other financial figures of merit including internal rate of return, return of investment and net present value. NEST is also Excel based and can be used to evaluate nuclear reactor systems using the open fuel cycle, MOX (mixed oxide) fuel recycling and closed cycles. A Super Critical Water-cooled Reactor system with an open fuel cycle and two Fast Reactor systems, one with a break-even fuel cycle and another with a burner fuel cycle, were selected for the benchmarking exercise. Published data on capital and operating costs were used for economics analyses using G4ECONS and NEST tools. Both G4ECONS and
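The LUEC figure of merit computed by both G4ECONS and NEST is, at its core, a ratio of discounted costs to discounted energy. A minimal sketch with invented cash flows (the real tools model fuel-cycle stages, construction schedules, and financing in far more detail):

```python
# Toy levelised-unit-energy-cost (LUEC) sketch. All cash flows are invented.

def luec(annual_costs, annual_energy_mwh, discount_rate):
    """Levelised unit energy cost ($/MWh): PV(costs) / PV(energy)."""
    pv_cost = sum(c / (1 + discount_rate) ** t
                  for t, c in enumerate(annual_costs, start=1))
    pv_energy = sum(e / (1 + discount_rate) ** t
                    for t, e in enumerate(annual_energy_mwh, start=1))
    return pv_cost / pv_energy

# 3-year toy plant with constant costs and output
cost = luec([100e6] * 3, [1.5e6] * 3, 0.05)
```

With constant annual costs and output the discounting cancels, so the toy result is simply 100e6 / 1.5e6 ≈ 66.7 $/MWh; realistic inputs with uneven schedules would not cancel.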

  11. Benchmark models, planes lines and points for future SUSY searches at the LHC

    International Nuclear Information System (INIS)

    AbdusSalam, S.S.; Allanach, B.C.; Dreiner, H.K.

    2012-03-01

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  12. Benchmark models, planes lines and points for future SUSY searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    AbdusSalam, S.S. [The Abdus Salam International Centre for Theoretical Physics, Trieste (Italy); Allanach, B.C. [Cambridge Univ. (United Kingdom). Dept. of Applied Mathematics and Theoretical Physics; Dreiner, H.K. [Bonn Univ. (DE). Bethe Center for Theoretical Physics and Physikalisches Inst.] (and others)

    2012-03-15

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  13. Benchmark Models, Planes, Lines and Points for Future SUSY Searches at the LHC

    CERN Document Server

    AbdusSalam, S S; Dreiner, H K; Ellis, J; Ellwanger, U; Gunion, J; Heinemeyer, S; Krämer, M; Mangano, M L; Olive, K A; Rogerson, S; Roszkowski, L; Schlaffer, M; Weiglein, G

    2011-01-01

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  14. Benchmarking in Thoracic Surgery. Third Edition.

    Science.gov (United States)

    Freixinet Gilart, Jorge; Varela Simó, Gonzalo; Rodríguez Suárez, Pedro; Embún Flor, Raúl; Rivas de Andrés, Juan José; de la Torre Bravos, Mercedes; Molins López-Rodó, Laureano; Pac Ferrer, Joaquín; Izquierdo Elena, José Miguel; Baschwitz, Benno; López de Castro, Pedro E; Fibla Alfara, Juan José; Hernando Trancho, Florentino; Carvajal Carrasco, Ángel; Canalís Arrayás, Emili; Salvatierra Velázquez, Ángel; Canela Cardona, Mercedes; Torres Lanzas, Juan; Moreno Mata, Nicolás

    2016-04-01

    Benchmarking entails continuous comparison of efficacy and quality among products and activities, with the primary objective of achieving excellence. To analyze the results of benchmarking performed in 2013 on clinical practices undertaken in 2012 in 17 Spanish thoracic surgery units. Study data were obtained from the basic minimum data set for hospitalization, registered in 2012. Data from hospital discharge reports were submitted by the participating groups, but staff from the corresponding departments did not intervene in data collection. Study cases all involved hospital discharges recorded in the participating sites. Episodes included were respiratory surgery (Major Diagnostic Category 04, Surgery), and those of the thoracic surgery unit. Cases were labelled using codes from the International Classification of Diseases, 9th revision, Clinical Modification. The refined diagnosis-related groups classification was used to evaluate differences in severity and complexity of cases. General parameters (number of cases, mean stay, complications, readmissions, mortality, and activity) varied widely among the participating groups. Specific interventions (lobectomy, pneumonectomy, atypical resections, and treatment of pneumothorax) also varied widely. As in previous editions, practices among participating groups varied considerably. Some areas for improvement emerge: admission processes need to be standardized to avoid urgent admissions and to improve pre-operative care; hospital discharges should be streamlined and discharge reports improved by including all procedures and complications. Some units have parameters which deviate excessively from the norm, and these sites need to review their processes in depth. Coding of diagnoses and comorbidities is another area where improvement is needed. Copyright © 2015 SEPAR. Published by Elsevier Espana. All rights reserved.

  15. The storm time central plasma sheet

    Directory of Open Access Journals (Sweden)

    R. Schödel

    2002-11-01

Full Text Available The plasma sheet plays a key role during magnetic storms because it is the bottleneck through which large amounts of magnetic flux that have been eroded from the dayside magnetopause have to be returned to the dayside magnetosphere. Using about five years of Geotail data, we studied the average properties of the near- and midtail central plasma sheet (CPS) in the 10–30 RE range during magnetic storms. The earthward flux transport rate is greatly enhanced during the storm main phase, but shows a significant earthward decrease. Hence, since the magnetic flux cannot be circulated at a sufficient rate, this leads to an average dipolarization of the central plasma sheet. An increase of the specific entropy of the CPS ion population by a factor of about two during the storm main phase provides evidence for nonadiabatic heating processes. The direction of flux transport during the main phase is consistent with the possible formation of a near-Earth neutral line beyond ~20 RE. Key words. Magnetospheric physics (plasma convection; plasma sheet; storms and substorms)

  16. Benchmark for evaluation and validation of reactor simulations (BEAVRS)

    Energy Technology Data Exchange (ETDEWEB)

    Horelik, N.; Herman, B.; Forget, B.; Smith, K. [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Avenue, Cambridge, MA 02139 (United States)

    2013-07-01

    Advances in parallel computing have made possible the development of high-fidelity tools for the design and analysis of nuclear reactor cores, and such tools require extensive verification and validation. This paper introduces BEAVRS, a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading patterns, and numerous in-vessel components. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from fifty-eight instrumented assemblies. Initial comparisons between calculations performed with MIT's OpenMC Monte Carlo neutron transport code and measured cycle 1 HZP test data are presented, and these results display an average deviation of approximately 100 pcm for the various critical configurations and control rod worth measurements. Computed HZP radial fission detector flux maps also agree reasonably well with the available measured data. All results indicate that this benchmark will be extremely useful in validation of coupled-physics codes and uncertainty quantification of in-core physics computational predictions. The detailed BEAVRS specification and its associated data package is hosted online at the MIT Computational Reactor Physics Group web site (http://crpg.mit.edu/), where future revisions and refinements to the benchmark specification will be made publicly available. (authors)
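The ~100 pcm deviations quoted above are differences in the effective multiplication factor expressed in per cent mille (1 pcm = 1e-5 in k_eff). A minimal sketch of the conversion, with illustrative k values rather than BEAVRS data:

```python
# Per cent mille conversion: 1 pcm = 1e-5 in k_eff.
# The k values below are illustrative, not BEAVRS results.

def deviation_pcm(k_calc: float, k_meas: float) -> float:
    """Signed deviation of a calculated k_eff from a measured one, in pcm."""
    return (k_calc - k_meas) * 1.0e5

print(round(deviation_pcm(1.00102, 1.00000)))  # 102
```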

  17. Human factors reliability Benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-06-01

    The Joint Research Centre of the European Commission has organized a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organized around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report contains the final summary reports produced by the participants in the exercise

  18. Experimental and computational benchmark tests

    International Nuclear Information System (INIS)

    Gilliam, D.M.; Briesmeister, J.F.

    1994-01-01

A program involving principally NIST, LANL, and ORNL has been in progress for about four years now to establish a series of benchmark measurements and calculations related to the moderation and leakage of 252Cf neutrons from a source surrounded by spherical aqueous moderators of various thicknesses and compositions. The motivation for these studies comes from problems in criticality calculations concerning arrays of multiplying components, where the leakage from one component acts as a source for the other components. This talk compares experimental and calculated values for the fission rates of four nuclides - 235U, 239Pu, 238U, and 237Np - in the leakage spectrum from moderator spheres of diameters 76.2 mm, 101.6 mm, and 127.0 mm, with either pure water or enriched B-10 solutions as the moderator. Very detailed Monte Carlo calculations were done with the MCNP code, using a "light water" S(α,β) scattering kernel
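Once the leakage spectrum is collapsed to energy groups, a foil's fission rate can be written as R = N Σ_g φ_g σ_g. A minimal sketch with invented two-group numbers, not the benchmark data:

```python
import numpy as np

# Group-wise fission rate R = N * sum_g(phi_g * sigma_g).
# Two-group placeholder numbers, not the NIST/LANL/ORNL measurements.
phi = np.array([1.0e8, 5.0e7])        # group fluxes, n/(cm^2 s)
sigma_b = np.array([1.2, 580.0])      # group fission cross-sections, barns
sigma = sigma_b * 1.0e-24             # barns -> cm^2
n_atoms = 2.5e20                      # fissionable atoms in the foil

rate = n_atoms * float(np.dot(phi, sigma))  # fissions per second
print(f"{rate:.3e} fissions/s")
```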

  19. ENVIRONMENTAL BENCHMARKING FOR LOCAL AUTHORITIES

    Directory of Open Access Journals (Sweden)

    Marinela GHEREŞ

    2010-01-01

Full Text Available This paper is an attempt to clarify and present the many definitions of benchmarking. It also attempts to explain the basic steps of benchmarking, to show how this tool can be applied by local authorities, as well as to discuss its potential benefits and limitations. It is our strong belief that if cities use indicators and progressively introduce targets to improve management and related urban life quality, and to measure progress towards more sustainable development, we will also create a new type of competition among cities and foster innovation. This is seen to be important because local authorities' actions play a vital role in responding to the challenges of enhancing the state of the environment, not only in policy-making but also in the provision of services and in the planning process. Local communities therefore need to be aware of their own sustainability performance levels and should be able to engage in exchange of best practices to respond effectively to the eco-economical challenges of the century.

  20. Benchmark results in radiative transfer

    International Nuclear Information System (INIS)

    Garcia, R.D.M.; Siewert, C.E.

    1986-02-01

Several aspects of the FN method are reported, and the method is used to solve accurately some benchmark problems in radiative transfer in the field of atmospheric physics. The method was modified to solve cases of pure scattering, and an improved process was developed for computing the radiation intensity. An algorithm for computing several quantities used in the FN method was developed. An improved scheme to evaluate certain integrals relevant to the method is given, together with a two-term recursion relation that has proved useful for the numerical evaluation of matrix elements basic to the method. The methods used to solve the resulting linear algebraic equations are discussed, and the numerical results are evaluated. (M.C.K.) [pt

  1. Present Status and Extensions of the Monte Carlo Performance Benchmark

    Science.gov (United States)

    Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.

    2014-06-01

The NEA Monte Carlo Performance benchmark started in 2011 with the aim of monitoring, over the years, the ability to perform a full-size Monte Carlo reactor core calculation with detailed power production for each fuel pin, including the axial distribution. This paper gives an overview of the results contributed thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common-type compute nodes. On true supercomputers, however, the speedup of parallel calculations continues to increase up to large numbers of processor cores. More experience is needed with calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark are well suited for further investigations of full-core Monte Carlo calculations, and a need is felt for testing issues other than computational performance, proposals are presented for extending the benchmark to a suite of problems: evaluating fission-source convergence for a system with a high dominance ratio, coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and studying the correctness and effectiveness of burnup calculations. Other contemporary proposals for a full-core calculation with realistic geometry and material composition are also discussed.
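The quoted figure of ~100 billion histories for 1% accuracy follows from the 1/√N scaling of Monte Carlo tally uncertainty. A toy sketch (a plain uniform-sampling tally, not a transport calculation) showing the standard error shrink by roughly √10 when N grows tenfold:

```python
import random

# Toy demonstration of the 1/sqrt(N) law behind the "~100 billion histories
# for 1% accuracy" figure: the standard error of a sampled mean drops by
# about sqrt(10) when the number of samples grows tenfold.

def std_error(n: int, seed: int = 42) -> float:
    """Standard error of the mean of n uniform samples."""
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n)]
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return (var / n) ** 0.5

se_small, se_large = std_error(10_000), std_error(100_000)
print(se_small / se_large)  # roughly sqrt(10) ~ 3.16
```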

  2. Discussion of OECD LWR Uncertainty Analysis in Modelling Benchmark

    International Nuclear Information System (INIS)

    Ivanov, K.; Avramova, M.; Royer, E.; Gillford, J.

    2013-01-01

The demand for best estimate calculations in nuclear reactor design and safety evaluations has increased in recent years. Uncertainty quantification has been highlighted as part of best estimate calculations. The modelling aspects of uncertainty and sensitivity analysis are to be further developed and validated on scientific grounds in support of their performance and application to multi-physics reactor simulations. The Organization for Economic Co-operation and Development (OECD) / Nuclear Energy Agency (NEA) Nuclear Science Committee (NSC) has endorsed the creation of an Expert Group on Uncertainty Analysis in Modelling (EGUAM). Within the framework of EGUAM/NSC activities, the OECD/NEA initiated the Benchmark for Uncertainty Analysis in Modelling for Design, Operation, and Safety Analysis of Light Water Reactors (OECD LWR UAM benchmark). The general objective of the benchmark is to propagate the predictive uncertainties of code results through complex coupled multi-physics and multi-scale simulations. The benchmark is divided into three phases, with Phase I highlighting uncertainty propagation in stand-alone neutronics calculations, while Phases II and III focus on uncertainty analysis of the reactor core and the reactor system, respectively. This paper discusses the progress made in the Phase I calculations, the specifications for Phase II, and the upcoming challenges in defining the Phase III exercises. The challenges of applying uncertainty quantification to complex code systems, in particular to time-dependent coupled-physics models, are the large computational burden and the use of non-linear models (expected due to the physics coupling). (authors)

  3. The role of benchmarking for yardstick competition

    International Nuclear Information System (INIS)

    Burns, Phil; Jenkins, Cloda; Riechmann, Christoph

    2005-01-01

With the increasing interest in yardstick regulation, there is a need to understand the most appropriate method for realigning tariffs at the outset. Benchmarking is the tool used for such realignment and is therefore a necessary first step in the implementation of yardstick competition. A number of concerns have been raised about the application of benchmarking, making some practitioners reluctant to move towards yardstick-based regimes. We assess five of the key concerns often discussed and find that, in general, they are not as great as perceived. The assessment is based on economic principles and on experience with applying benchmarking to regulated sectors, e.g. the electricity and water industries in the UK, the Netherlands, Austria and Germany in recent years. The aim is to demonstrate that clarity on the role of benchmarking reduces concern about its application in different regulatory regimes. We find that benchmarking can be used in regulatory settlements, although the range of benchmarking approaches appropriate for any individual regulatory question will be small. Benchmarking is feasible, as total cost measures and environmental factors are better defined in practice than is commonly appreciated, and collusion is unlikely to occur in environments with more than 2 or 3 firms (where shareholders have a role in monitoring and rewarding performance). Furthermore, any concern about companies under-recovering costs is a matter to be determined through the regulatory settlement and does not affect the case for using benchmarking as part of that settlement. (author)

  4. Benchmarking set for domestic smart grid management

    NARCIS (Netherlands)

    Bosman, M.G.C.; Bakker, Vincent; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    In this paper we propose a benchmark for domestic smart grid management. It consists of an in-depth description of a domestic smart grid, in which local energy consumers, producers and buffers can be controlled. First, from this description a general benchmark framework is derived, which can be used

  5. Medical school benchmarking - from tools to programmes.

    Science.gov (United States)

Wilkinson, Tim J; Hudson, Judith N; McColl, Geoffrey J; Hu, Wendy C Y; Jolly, Brian C; Schuwirth, Lambert W T

    2015-02-01

    Benchmarking among medical schools is essential, but may result in unwanted effects. To apply a conceptual framework to selected benchmarking activities of medical schools. We present an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. The analogy generated a conceptual framework that tested five questions to be considered in relation to benchmarking: what is the purpose? what are the attributes of value? what are the best tools to assess the attributes of value? what happens to the results? and, what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.

  6. Benchmarking in digital circuit design automation

    NARCIS (Netherlands)

    Jozwiak, L.; Gawlowski, D.M.; Slusarczyk, A.S.

    2008-01-01

    This paper focuses on benchmarking, which is the main experimental approach to the design method and EDA-tool analysis, characterization and evaluation. We discuss the importance and difficulties of benchmarking, as well as the recent research effort related to it. To resolve several serious

  7. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

Benchmark two-good utility functions involving a good with zero income elasticity and a good with unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own-price elasticity.

  8. Repeated Results Analysis for Middleware Regression Benchmarking

    Czech Academy of Sciences Publication Activity Database

    Bulej, Lubomír; Kalibera, T.; Tůma, P.

    2005-01-01

    Roč. 60, - (2005), s. 345-358 ISSN 0166-5316 R&D Projects: GA ČR GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords : middleware benchmarking * regression benchmarking * regression testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.756, year: 2005

  9. Benchmarking the energy efficiency of commercial buildings

    International Nuclear Information System (INIS)

    Chung, William; Hui, Y.V.; Lam, Y. Miu

    2006-01-01

    Benchmarking energy-efficiency is an important tool to promote the efficient use of energy in commercial buildings. Benchmarking models are mostly constructed in a simple benchmark table (percentile table) of energy use, which is normalized with floor area and temperature. This paper describes a benchmarking process for energy efficiency by means of multiple regression analysis, where the relationship between energy-use intensities (EUIs) and the explanatory factors (e.g., operating hours) is developed. Using the resulting regression model, these EUIs are then normalized by removing the effect of deviance in the significant explanatory factors. The empirical cumulative distribution of the normalized EUI gives a benchmark table (or percentile table of EUI) for benchmarking an observed EUI. The advantage of this approach is that the benchmark table represents a normalized distribution of EUI, taking into account all the significant explanatory factors that affect energy consumption. An application to supermarkets is presented to illustrate the development and the use of the benchmarking method
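The benchmarking process described above can be sketched as follows; the operating hours and EUI values are invented, and a single explanatory factor stands in for the full multiple regression:

```python
import numpy as np

# Sketch of the EUI benchmarking process: regress EUI on an explanatory
# factor (operating hours), normalize each building's EUI by removing the
# factor's effect, then rank it on the empirical distribution.
# All data below are invented for illustration.

hours = np.array([50.0, 60.0, 70.0, 80.0, 90.0, 100.0])    # weekly operating hours
eui = np.array([180.0, 200.0, 215.0, 240.0, 255.0, 280.0]) # kWh/m^2/year

# Ordinary least squares: eui ~ b0 + b1 * hours
b1, b0 = np.polyfit(hours, eui, 1)

# Normalize: remove the deviation explained by operating hours
eui_norm = eui - b1 * (hours - hours.mean())

def percentile_rank(value: float) -> float:
    """Percentile of a normalized EUI on the empirical distribution."""
    return 100.0 * np.mean(eui_norm <= value)

print(percentile_rank(eui_norm[0]))
```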

  10. Benchmarking, Total Quality Management, and Libraries.

    Science.gov (United States)

    Shaughnessy, Thomas W.

    1993-01-01

    Discussion of the use of Total Quality Management (TQM) in higher education and academic libraries focuses on the identification, collection, and use of reliable data. Methods for measuring quality, including benchmarking, are described; performance measures are considered; and benchmarking techniques are examined. (11 references) (MES)

  11. Soft Costs Fact Sheet

    Energy Technology Data Exchange (ETDEWEB)

    None

    2016-05-01

    This fact sheet is an overview of the systems integration subprogram at the U.S. Department of Energy SunShot Initiative. Soft costs can vary significantly as a result of a fragmented energy marketplace. In the U.S., there are 18,000 jurisdictions and 3,000 utilities with different rules and regulations for how to go solar. The same solar equipment may vary widely in its final installation price due to process and market variations across jurisdictions, creating barriers to rapid industry growth. SunShot supports the development of innovative solutions that enable communities to build their local economies and establish clean energy initiatives that meet their needs, while at the same time creating sustainable solar market conditions.

  12. Photovoltaics Fact Sheet

    Energy Technology Data Exchange (ETDEWEB)

    None

    2016-02-01

    This fact sheet is an overview of the Photovoltaics (PV) subprogram at the U.S. Department of Energy SunShot Initiative. The U.S. Department of Energy (DOE)’s Solar Energy Technologies Office works with industry, academia, national laboratories, and other government agencies to advance solar PV, which is the direct conversion of sunlight into electricity by a semiconductor, in support of the goals of the SunShot Initiative. SunShot supports research and development to aggressively advance PV technology by improving efficiency and reliability and lowering manufacturing costs. SunShot’s PV portfolio spans work from early-stage solar cell research through technology commercialization, including work on materials, processes, and device structure and characterization techniques.

  13. Systems Integration Fact Sheet

    Energy Technology Data Exchange (ETDEWEB)

    None

    2016-06-01

    This fact sheet is an overview of the Systems Integration subprogram at the U.S. Department of Energy SunShot Initiative. The Systems Integration subprogram enables the widespread deployment of safe, reliable, and cost-effective solar energy technologies by addressing the associated technical and non-technical challenges. These include timely and cost-effective interconnection procedures, optimal system planning, accurate prediction of solar resources, monitoring and control of solar power, maintaining grid reliability and stability, and many more. To address the challenges associated with interconnecting and integrating hundreds of gigawatts of solar power onto the electricity grid, the Systems Integration program funds research, development, and demonstration projects in four broad, interrelated focus areas: grid performance and reliability, dispatchability, power electronics, and communications.

  14. Hyperspectral light sheet microscopy

    Science.gov (United States)

    Jahr, Wiebke; Schmid, Benjamin; Schmied, Christopher; Fahrbach, Florian O.; Huisken, Jan

    2015-09-01

    To study the development and interactions of cells and tissues, multiple fluorescent markers need to be imaged efficiently in a single living organism. Instead of acquiring individual colours sequentially with filters, we created a platform based on line-scanning light sheet microscopy to record the entire spectrum for each pixel in a three-dimensional volume. We evaluated data sets with varying spectral sampling and determined the optimal channel width to be around 5 nm. With the help of these data sets, we show that our setup outperforms filter-based approaches with regard to image quality and discrimination of fluorophores. By spectral unmixing we resolved overlapping fluorophores with up to nanometre resolution and removed autofluorescence in zebrafish and fruit fly embryos.
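Spectral unmixing as described reduces, in its simplest linear form, to solving a per-pixel least-squares problem against reference emission spectra. A sketch with synthetic Gaussian spectra, not real fluorophore data:

```python
import numpy as np

# Linear spectral unmixing sketch: a measured per-pixel spectrum is modeled
# as a linear mix of reference emission spectra. The references here are
# synthetic Gaussians, not measured fluorophore spectra.

wavelengths = np.arange(480.0, 620.0, 5.0)  # ~5 nm channels, as found optimal above

def emission(center: float, width: float = 15.0) -> np.ndarray:
    """Normalized synthetic emission spectrum."""
    s = np.exp(-0.5 * ((wavelengths - center) / width) ** 2)
    return s / s.sum()

refs = np.column_stack([emission(510.0), emission(560.0)])  # two reference spectra
true_abundances = np.array([0.7, 0.3])
measured = refs @ true_abundances  # noiseless pixel spectrum

# Solve measured ~ refs @ a in a least-squares sense
a, *_ = np.linalg.lstsq(refs, measured, rcond=None)
print(np.round(a, 3))  # recovers ~[0.7, 0.3]
```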

  15. Benchmarking of refinery emissions performance : Executive summary

    International Nuclear Information System (INIS)

    2003-07-01

    This study was undertaken to collect emissions performance data for Canadian and comparable American refineries. The objective was to examine parameters that affect refinery air emissions performance and develop methods or correlations to normalize emissions performance. Another objective was to correlate and compare the performance of Canadian refineries to comparable American refineries. For the purpose of this study, benchmarking involved the determination of levels of emission performance that are being achieved for generic groups of facilities. A total of 20 facilities were included in the benchmarking analysis, and 74 American refinery emission correlations were developed. The recommended benchmarks, and the application of those correlations for comparison between Canadian and American refinery performance, were discussed. The benchmarks were: sulfur oxides, nitrogen oxides, carbon monoxide, particulate, volatile organic compounds, ammonia and benzene. For each refinery in Canada, benchmark emissions were developed. Several factors can explain differences in Canadian and American refinery emission performance. 4 tabs., 7 figs

  16. Settlement during vibratory sheet piling

    NARCIS (Netherlands)

    Meijers, P.

    2007-01-01

    During vibratory sheet piling quite often the soil near the sheet pile wall will settle. In many cases this is not a problem. For situations with houses, pipelines, roads or railroads at relative short distance these settlements may not be acceptable. The purpose of the research described in this

  17. Ghera: A Repository of Android App Vulnerability Benchmarks

    OpenAIRE

    Mitra, Joydeep; Ranganath, Venkatesh-Prasad

    2017-01-01

    Security of mobile apps affects the security of their users. This has fueled the development of techniques to automatically detect vulnerabilities in mobile apps and help developers secure their apps; specifically, in the context of Android platform due to openness and ubiquitousness of the platform. Despite a slew of research efforts in this space, there is no comprehensive repository of up-to-date and lean benchmarks that contain most of the known Android app vulnerabilities and, consequent...

  18. Vver-1000 Mox core computational benchmark

    International Nuclear Information System (INIS)

    2006-01-01

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities include benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  19. Plasma dynamics in current sheets

    International Nuclear Information System (INIS)

    Bogdanov, S.Yu.; Drejden, G.V.; Kirij, N.P.; AN SSSR, Leningrad

    1992-01-01

Plasma dynamics in successive stages of current sheet evolution is investigated on the basis of an analysis of the time-spatial variations of the electron density and the electrodynamic force fields. Current sheet formation is realized in a two-dimensional magnetic field with a zero line under the action of relatively small initial disturbances (linear regimes). It is established that within the formed sheet a dense (Ne ≈ 10^16 cm^-3), hot (Ti ≥ 100 eV, Z̄i ≥ 2) plasma is concentrated, the pressure of which is balanced by the magnetic field pressure. Under the action of electrodynamic forces, both plasma compression within the sheet and plasma acceleration along the sheet surface from the middle to the narrow side edges are realized

  20. Developing and modeling of the 'Laguna Verde' BWR CRDA benchmark

    International Nuclear Information System (INIS)

    Solis-Rodarte, J.; Fu, H.; Ivanov, K.N.; Matsui, Y.; Hotta, A.

    2002-01-01

Reactivity initiated accidents (RIA) and design basis transients are among the most important aspects of nuclear power reactor safety. These events are re-evaluated whenever core alterations (modifications) are made as part of the nuclear safety analysis performed for a new design. These modifications usually include, but are not limited to, power upgrades, longer cycles, new fuel assembly and control rod designs, etc. The results obtained are compared with pre-established bounding analysis values to see if the new core design fulfills the safety constraints imposed on the design. The control rod drop accident (CRDA) is the design basis transient for reactivity events in BWR technology. The CRDA is a very localized event, depending on the insertion position of the dropped control rod and the fuel assemblies surrounding it. A numerical benchmark was developed based on the CRDA RIA design basis accident to further assess the performance of coupled 3D neutron kinetics/thermal-hydraulics codes. The CRDA in a BWR is a mostly neutronics-driven event. This benchmark is based on a real operating nuclear power plant - unit 1 of the Laguna Verde (LV1) nuclear power plant (NPP). The definition of the benchmark is presented briefly together with the benchmark specifications. Some of the cross-sections were modified in order to make the maximum control rod worth greater than one dollar. The transient is initiated at steady state by dropping the control rod with maximum worth at full speed. The Laguna Verde (LV1) BWR CRDA transient benchmark is calculated using two coupled codes: TRAC-BF1/NEM and TRAC-BF1/ENTREE. Neutron kinetics and thermal-hydraulics models were developed for both codes. A comparison of the obtained results is presented, along with some discussion of the sensitivity of the results to certain modeling assumptions
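The "one dollar" threshold mentioned for the maximum control rod worth refers to reactivity measured in units of the effective delayed-neutron fraction. A minimal sketch, with illustrative k_eff and beta_eff values (not Laguna Verde data):

```python
# Static rod worth in dollars: reactivity change divided by beta_eff.
# BETA_EFF and the k_eff values are illustrative assumptions.
BETA_EFF = 0.0055  # assumed BWR-like effective delayed-neutron fraction

def rho(k: float) -> float:
    """Static reactivity from an effective multiplication factor."""
    return (k - 1.0) / k

def rod_worth_dollars(k_rod_inserted: float, k_rod_dropped: float) -> float:
    """Worth of a dropped rod in dollars; above 1 $ the insertion can
    drive the core prompt-critical."""
    return (rho(k_rod_dropped) - rho(k_rod_inserted)) / BETA_EFF

print(rod_worth_dollars(0.993, 1.000))  # worth greater than one dollar
```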

  1. DIAGNOSIS OF FINANCIAL POSITION BY BALANCE SHEET ANALYSIS - CASE STUDY

    OpenAIRE

    Hada Teodor; Marginean Radu

    2013-01-01

This study aims to elucidate and to exemplify an important technique for assessing economic entities, namely the fundamental analysis of the balance sheet, in several significant aspects. The analysis of the financial data reported in the balance sheet is, for an economic entity, the basis of a principled diagnosis made by determining specific indicators of economic and financial analysis. This analysis aims to provide an insight into the company's financial position. The stated aim of this s...

  2. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Xuhui [National Renewable Energy Laboratory (NREL), Golden, CO (United States). Transportation and Hydrogen Systems Center

    2017-10-19

In FY16, the thermal performance of the 2014 Honda Accord Hybrid power electronics thermal management system was benchmarked. Both experiments and numerical simulation were utilized to thoroughly study the thermal resistances and the temperature distribution in the power module. Experimental results obtained from the water-ethylene glycol tests provided the junction-to-liquid thermal resistance. The finite element analysis (FEA) and computational fluid dynamics (CFD) models were found to yield a good match with experimental results. Both experimental and modeling results demonstrate that the passive stack is the dominant thermal resistance for both the motor and power electronics systems. The 2014 Accord power electronics system yields steady-state thermal resistance values of around 42–50 mm²·K/W, depending on the flow rate. At a typical flow rate of 10 liters per minute, the thermal resistance of the Accord system was found to be about 44 percent lower than that of the 2012 Nissan LEAF system benchmarked in FY15. The main reason for the difference is that the Accord power module used a metalized-ceramic substrate and eliminated the thermal interface material layers. FEA models were developed to study the transient performance of the 2012 Nissan LEAF, the 2014 Accord, and two other systems that feature conventional power module designs. The simulation results indicate that the 2012 LEAF power module has the lowest thermal impedance at time scales below one second. This is probably due to moving low-thermal-conductivity materials further away from the heat source and enhancing the heat-spreading effect of the copper-molybdenum plate close to the insulated gate bipolar transistors. When approaching steady state, the Honda system shows lower thermal impedance. Measurement results for the 2015 BMW i3 power electronic system indicate that the i3 insulated gate bipolar transistor module has significantly lower junction

  3. Vitamin and Mineral Supplement Fact Sheets

    Science.gov (United States)

Fact sheets from the NIH Office of Dietary Supplements, including Vitamin and Mineral Fact Sheets, Botanical Supplement Fact Sheets, and background information on dietary and botanical supplements.

  4. SeSBench - An initiative to benchmark reactive transport models for environmental subsurface processes

    Science.gov (United States)

    Jacques, Diederik

    2017-04-01

As soil functions are governed by a multitude of interacting hydrological, geochemical and biological processes, simulation tools coupling mathematical models of these interacting processes are needed. Coupled reactive transport models are a typical example of such tools, mainly focusing on hydrological and geochemical coupling (see e.g. Steefel et al., 2015). The mathematical and numerical complexity of the tool itself, or of a specific conceptual model, can increase rapidly. Numerical verification of such models is therefore a prerequisite for guaranteeing reliability and confidence, and for qualifying simulation tools and approaches for further model application. In 2011, a first SeSBench (Subsurface Environmental Simulation Benchmarking) workshop was held in Berkeley (USA), followed by four more. The objective is to benchmark subsurface environmental simulation models and methods, with a current focus on reactive transport processes. The outcome was a special issue in Computational Geosciences (2015, issue 3 - Reactive transport benchmarks for subsurface environmental simulation) with a collection of 11 benchmarks. Benchmarks proposed by the workshop participants should be relevant for environmental or geo-engineering applications (the latter mostly related to radioactive waste disposal issues), excluding benchmarks defined for purely mathematical reasons. Another important feature is the tiered approach within a benchmark: a single principal problem is defined together with different sub-problems, which typically benchmark individual or simplified processes (e.g. inert solute transport, a simplified geochemical conceptual model) or geometries (e.g. batch or one-dimensional, homogeneous). Finally, three codes should be involved in a benchmark. The SeSBench initiative contributes to confidence building for applying reactive transport codes. Furthermore, it illustrates the use of those type of models for different
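One of the tiered sub-problems named above, inert solute transport, can be sketched as a 1-D diffusion benchmark: an explicit finite-difference solution verified against the analytical erfc profile. All parameters are illustrative, not taken from any SeSBench case:

```python
import math

# 1-D inert solute diffusion into a half-space: explicit finite differences
# checked against the analytical erfc similarity profile.
# All parameters are illustrative placeholders.
D = 1.0e-9             # diffusion coefficient, m^2/s
dx, dt = 0.001, 100.0  # grid spacing (m) and time step (s)
steps = 500
r = D * dt / dx ** 2
assert r <= 0.5        # stability limit of the explicit scheme

c = [0.0] * 100        # initial concentration profile
c[0] = 1.0             # fixed-concentration inlet boundary
for _ in range(steps):
    new = c[:]
    for i in range(1, len(c) - 1):
        new[i] = c[i] + r * (c[i + 1] - 2.0 * c[i] + c[i - 1])
    new[0] = 1.0
    c = new

t = steps * dt
analytical = [math.erfc(i * dx / (2.0 * math.sqrt(D * t))) for i in range(len(c))]
err = max(abs(a - b) for a, b in zip(c, analytical))
print(f"max abs deviation from erfc profile: {err:.3f}")
```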

  5. Deliverable 1.2 Specification of industrial benchmark tests

    DEFF Research Database (Denmark)

    Arentoft, Mogens; Ravn, Bjarne Gottlieb

    Technical report for the Growth project: IMPRESS, Improvement of precision in forming by simultaneous modelling of deflections in workpiece-die-press system - Output from WP1: Numerical simulation of deflections in workpiece-die-press system.

  6. Validation of gadolinium burnout using PWR benchmark specification

    Energy Technology Data Exchange (ETDEWEB)

    Oettingen, Mikołaj, E-mail: moettin@agh.edu.pl; Cetnar, Jerzy, E-mail: cetnar@mail.ftj.agh.edu.pl

    2014-07-01

    Graphical abstract: - Highlights: • We present a methodology for validation of gadolinium burnout in a PWR. • We model a 17 × 17 PWR fuel assembly using the MCB code. • We demonstrate C/E ratios of measured and calculated concentrations of Gd isotopes. • The C/E for Gd154, Gd156, Gd157, Gd158 and Gd160 shows good agreement, within ±10%. • The C/E for Gd152 and Gd155 shows poor agreement, outside ±10%. - Abstract: The paper presents a comparative analysis of measured and calculated concentrations of gadolinium isotopes in spent nuclear fuel from the Japanese Ohi-2 PWR. The irradiation of the 17 × 17 fuel assembly containing pure uranium and gadolinia-bearing fuel pins was numerically reconstructed using the Monte Carlo Continuous Energy Burnup Code (MCB). The reference concentrations of gadolinium isotopes were measured in the early 1990s at the Japan Atomic Energy Research Institute. The measured concentrations appear never to have been used for validation of gadolinium burnout. In our study we fill this gap and assess the quality of both the applied numerical methodology and the experimental data. Additionally, we show the time evolution of the infinite neutron multiplication factor K{sub inf}, FIMA burnup, U235 and Gd155–Gd158. Gadolinium-based materials are commonly used in thermal reactors as burnable absorbers due to the large neutron absorption cross-sections of Gd155 and Gd157.
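
    The C/E screening described above is simple arithmetic. As a minimal illustrative sketch (the concentrations below are made-up placeholders, not the paper's data), the ±10% agreement test could look like:

    ```python
    # Hypothetical illustration of C/E (calculated-over-experimental) screening:
    # an isotope "agrees" if its ratio lies within +/-10% of unity.
    # All numbers here are placeholders for illustration only.

    def ce_ratio(calculated, experimental):
        """Return the C/E ratio for one isotope."""
        return calculated / experimental

    def within_band(ratio, band=0.10):
        """True if the ratio agrees with experiment to within +/-band."""
        return abs(ratio - 1.0) <= band

    # placeholder concentrations (arbitrary units): (calculated, experimental)
    isotopes = {
        "Gd155": (1.08, 1.00),
        "Gd157": (0.97, 1.00),
    }

    for name, (calc, expt) in isotopes.items():
        r = ce_ratio(calc, expt)
        print(f"{name}: C/E = {r:.2f}, within +/-10%: {within_band(r)}")
    ```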

  7. What Randomized Benchmarking Actually Measures

    International Nuclear Information System (INIS)

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; Sarovar, Mohan; Blume-Kohout, Robin

    2017-01-01

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observed survival probabilities versus circuit length yields a single error metric, r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set: it depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r), but, as far as we can tell, it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
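
    The decay-fitting step the abstract refers to can be sketched in a few lines. This is a simplified illustration with synthetic, noiseless data and the offset B assumed known; a real RB analysis fits A, B and p jointly from noisy measurements:

    ```python
    import math

    # Sketch of extracting the RB number r from survival probabilities that
    # follow S(m) = A * p**m + B.  Synthetic data with a known decay p_true;
    # A and B are assumed known here for simplicity.

    A, B, p_true = 0.5, 0.5, 0.99
    lengths = list(range(1, 201, 10))
    survival = [A * p_true**m + B for m in lengths]

    # With B known, ln(S - B) is linear in m with slope ln(p), so a
    # least-squares line through (m, ln(S - B)) recovers p.
    xs = lengths
    ys = [math.log(s - B) for s in survival]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    p_est = math.exp(slope)

    # For a single qubit (dimension d = 2), r = (1 - p) * (d - 1) / d.
    r = (1 - p_est) / 2
    print(f"p = {p_est:.4f}, r = {r:.5f}")
    ```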

  8. Benchmarking Commercial Conformer Ensemble Generators.

    Science.gov (United States)

    Friedrich, Nils-Ole; de Bruyn Kops, Christina; Flachsenberg, Florian; Sommer, Kai; Rarey, Matthias; Kirchmair, Johannes

    2017-11-27

    We assess and compare the performance of eight commercial conformer ensemble generators (ConfGen, ConfGenX, cxcalc, iCon, MOE LowModeMD, MOE Stochastic, MOE Conformation Import, and OMEGA) and one leading free algorithm, the distance geometry algorithm implemented in RDKit. The comparative study is based on a new version of the Platinum Diverse Dataset, a high-quality benchmarking dataset of 2859 protein-bound ligand conformations extracted from the PDB. Differences in the performance of commercial algorithms are much smaller than those observed for free algorithms in our previous study (J. Chem. Inf. 2017, 57, 529-539). For commercial algorithms, the median minimum root-mean-square deviations measured between protein-bound ligand conformations and ensembles of a maximum of 250 conformers are between 0.46 and 0.61 Å. Commercial conformer ensemble generators are characterized by their high robustness, with at least 99% of all input molecules successfully processed and few or even no substantial geometrical errors detectable in their output conformations. The RDKit distance geometry algorithm (with minimization enabled) appears to be a good free alternative since its performance is comparable to that of the midranked commercial algorithms. Based on a statistical analysis, we elaborate on which algorithms to use and how to parametrize them for best performance in different application scenarios.
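
    The ensemble statistic reported above (minimum RMSD between a protein-bound conformation and a generated ensemble) can be sketched on toy coordinates. This illustration uses made-up atom positions and omits the rigid-body superposition a real comparison would perform first:

    ```python
    import math

    # Minimum-RMSD-over-ensemble sketch on placeholder 3D coordinates.
    # No alignment step is performed; positions below are invented.

    def rmsd(conf_a, conf_b):
        """Root-mean-square deviation between two equally sized atom lists."""
        n = len(conf_a)
        sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                 for (ax, ay, az), (bx, by, bz) in zip(conf_a, conf_b))
        return math.sqrt(sq / n)

    # hypothetical reference (bound) conformation and a two-member ensemble
    reference = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
    ensemble = [
        [(0.0, 0.0, 0.0), (1.4, 0.1, 0.0)],
        [(0.2, 0.2, 0.0), (1.9, 0.0, 0.0)],
    ]
    best = min(rmsd(reference, c) for c in ensemble)
    print(f"minimum RMSD: {best:.3f}")
    ```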

  9. Benchmark tests of JENDL-1

    International Nuclear Information System (INIS)

    Kikuchi, Yasuyuki; Hasegawa, Akira; Takano, Hideki; Kamei, Takanobu; Hojuyama, Takeshi; Sasaki, Makoto; Seki, Yuji; Zukeran, Atsushi; Otake, Iwao.

    1982-02-01

    Various benchmark tests were made on JENDL-1. At the first stage, various core-center characteristics were tested for many critical assemblies with a one-dimensional model. At the second stage, the applicability of JENDL-1 was further tested on more sophisticated problems for the MOZART and ZPPR-3 assemblies with a two-dimensional model. It was proved that JENDL-1 predicted various quantities of fast reactors satisfactorily as a whole. However, the following problems were pointed out: 1) There is a discrepancy of 0.9% in the k sub(eff)-values between the Pu- and U-cores. 2) The fission rate ratio of 239 Pu to 235 U is underestimated by 3%. 3) The Doppler reactivity coefficients are overestimated by about 10%. 4) The control rod worths are underestimated by 4%. 5) The fission rates of 235 U and 239 Pu are underestimated considerably in the outer core and radial blanket regions. 6) The negative sodium void reactivities are overestimated when the sodium is removed from the outer core. Most of the problems of JENDL-1 seem to be related to the neutron leakage and the neutron spectrum. Further study showed that most of these problems came from too-small diffusion coefficients and too-large elastic removal cross sections above 100 keV, probably caused by overestimation of the total and elastic scattering cross sections for structural materials in the unresolved resonance region up to several MeV. (author)

  10. Human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-08-01

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional test and maintenance (TPM) procedures, with the aim of assessing the probability of test-induced failures, the probability of failures remaining unrevealed, and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient, with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used; to obtain insight into the causes and magnitude of the variability observed in the results; to try to identify preferred human reliability assessment approaches; and to get an understanding of the current state of the art in the field, identifying the limitations that are still inherent to the different approaches.

  11. Revaluering benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    … and questioning the concept objectively. This paper addresses the underlying nature of benchmarking, and accounts for the importance of focusing attention on the sociological impacts benchmarking has in organizations. To understand these sociological impacts, benchmarking research needs to transcend … the perception of benchmarking systems as secondary and derivative and instead study benchmarking as constitutive of social relations and as an irredeemably social phenomenon. I have attempted to do so in this paper by treating benchmarking using a calculative practice perspective, and describing how …

  12. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome data in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which could then become the basis for selecting performance benchmarks. Databases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazard and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Databases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  13. IAEA sodium void reactivity benchmark calculations

    International Nuclear Information System (INIS)

    Hill, R.N.; Finck, P.J.

    1992-01-01

    In this paper, the IAEA 1992 ''Benchmark Calculation of Sodium Void Reactivity Effect in Fast Reactor Core'' problem is evaluated. The proposed design is a large, axially heterogeneous, oxide-fueled fast reactor, as described in Section 2; the core utilizes a sodium plenum above the core to enhance leakage effects. The calculation methods used in this benchmark evaluation are described in Section 3. In Section 4, the calculated core performance results for the benchmark reactor model are presented; and in Section 5, the influence of steel and interstitial sodium heterogeneity effects is estimated.

  14. Benchmarking gate-based quantum computers

    Science.gov (United States)

    Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans

    2017-11-01

    With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
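
    The identity-circuit idea can be illustrated with a toy classical simulation: a hypothetical bit-flip error probability per gate stands in for real gate noise, and the survival probability (the fraction of shots still returning |0>) decays with circuit length. This is a sketch of the concept only, not the paper's benchmarking procedure:

    ```python
    import random

    # Toy model: circuits that should act as the identity on one qubit,
    # where each noisy gate flips the classical bit with probability `error`.
    # The error model and numbers are assumptions for illustration.

    def survival(n_gates, error, shots=20000, rng=random.Random(1)):
        """Fraction of shots that still return |0> after n_gates noisy gates."""
        hits = 0
        for _ in range(shots):
            bit = 0
            for _ in range(n_gates):
                if rng.random() < error:  # noise flips the bit
                    bit ^= 1
            hits += (bit == 0)
        return hits / shots

    # survival probability decreases as the identity circuit gets longer
    for n in (0, 10, 50):
        print(n, survival(n, error=0.01))
    ```

    Analytically this model gives survival (1 + (1 - 2e)^n) / 2, so the decay directly exposes the per-gate error rate e.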

  15. Superfund fact sheet: The remedial program. Fact sheet

    International Nuclear Information System (INIS)

    1992-09-01

    The fact sheet describes what various actions the EPA can take to clean up hazardous wastes sites. Explanations of how the criteria for environmental and public health risk assessment are determined and the role of state and local governments in site remediation are given. The fact sheet is one in a series providing reference information about Superfund issues and is intended for readers with no formal scientific training

  16. Development of an ICSBEP Benchmark Evaluation, Nearly 20 Years of Experience

    International Nuclear Information System (INIS)

    Briggs, J. Blair; Bess, John D.

    2011-01-01

    The basic structure of all ICSBEP benchmark evaluations is essentially the same and includes (1) a detailed description of the experiment; (2) an evaluation of the experiment, including an exhaustive effort to quantify the effects of uncertainties on measured quantities; (3) a concise presentation of benchmark-model specifications; (4) sample calculation results; and (5) a summary of experimental references. Computer code input listings and other relevant information are generally preserved in appendixes. Details of an ICSBEP evaluation are presented.

  17. Semi-Analytical Benchmarks for MCNP6

    Energy Technology Data Exchange (ETDEWEB)

    Grechanuk, Pavel Aleksandrovi [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-11-07

    Code verification is an extremely important process that involves proving or disproving the validity of code algorithms by comparing them against analytical results of the underlying physics or mathematical theory on which the code is based. Monte Carlo codes such as MCNP6 must undergo verification and testing upon every release to ensure that the codes are properly simulating nature. Specifically, MCNP6 has multiple sets of problems with known analytic solutions that are used for code verification. Monte Carlo codes primarily specify either current boundary sources or a volumetric fixed source, either of which can be very complicated functions of space, energy, direction and time. Thus, most of the challenges with modeling analytic benchmark problems in Monte Carlo codes come from identifying the correct source definition to properly simulate the correct boundary conditions. The problems included in this suite all deal with mono-energetic neutron transport without energy loss, in a homogeneous material. The variables that differ between the problems are source type (isotropic/beam), medium dimensionality (infinite/semi-infinite), etc.

  18. Benchmark for license plate character segmentation

    Science.gov (United States)

    Gonçalves, Gabriel Resende; da Silva, Sirlene Pio Gomes; Menotti, David; Shwartz, William Robson

    2016-09-01

    Automatic license plate recognition (ALPR) has been the focus of much research in the past years. In general, ALPR is divided into the following problems: detection of on-track vehicles, license plate detection, segmentation of license plate characters, and optical character recognition (OCR). Even though commercial solutions are available for controlled acquisition conditions, e.g., the entrance of a parking lot, ALPR is still an open problem when dealing with data acquired from uncontrolled environments, such as roads and highways, when relying only on imaging sensors. Due to the multiple orientations and scales of the license plates captured by the camera, a very challenging task of ALPR is the license plate character segmentation (LPCS) step, because its effectiveness must be (near) optimal to achieve a high recognition rate by the OCR. To tackle the LPCS problem, this work proposes a benchmark composed of a dataset designed to focus specifically on the character segmentation step of ALPR, within an evaluation protocol. Furthermore, we propose the Jaccard-centroid coefficient, an evaluation measure more suitable than the Jaccard coefficient regarding the location of the bounding box within the ground-truth annotation. The dataset is composed of 2000 Brazilian license plates consisting of 14000 alphanumeric symbols and their corresponding bounding-box annotations. We also present a straightforward approach to perform LPCS efficiently. Finally, we provide an experimental evaluation of the dataset based on five LPCS approaches and demonstrate the importance of character segmentation for achieving an accurate OCR.
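
    For context, the plain Jaccard coefficient that the proposed Jaccard-centroid measure refines is the standard intersection-over-union of two axis-aligned bounding boxes. A minimal sketch (the centroid refinement itself is not reproduced here):

    ```python
    # Jaccard (IoU) coefficient for axis-aligned boxes given as (x1, y1, x2, y2).

    def jaccard(box_a, box_b):
        ax1, ay1, ax2, ay2 = box_a
        bx1, by1, bx2, by2 = box_b
        # intersection rectangle (clamped to zero if the boxes do not overlap)
        ix1, iy1 = max(ax1, bx1), max(ay1, by1)
        ix2, iy2 = min(ax2, bx2), min(ay2, by2)
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (ax2 - ax1) * (ay2 - ay1)
        area_b = (bx2 - bx1) * (by2 - by1)
        union = area_a + area_b - inter
        return inter / union if union else 0.0

    print(jaccard((0, 0, 10, 10), (5, 0, 15, 10)))  # partially overlapping boxes
    ```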

  19. RIA Fuel Codes Benchmark - Volume 1

    International Nuclear Information System (INIS)

    Marchand, Olivier; Georgenthum, Vincent; Petit, Marc; Udagawa, Yutaka; Nagase, Fumihisa; Sugiyama, Tomoyuki; Arffman, Asko; Cherubini, Marco; Dostal, Martin; Klouzal, Jan; Geelhood, Kenneth; Gorzel, Andreas; Holt, Lars; Jernkvist, Lars Olof; Khvostov, Grigori; Maertens, Dietmar; Spykman, Gerold; Nakajima, Tetsuo; Nechaeva, Olga; Panka, Istvan; Rey Gayo, Jose M.; Sagrado Garcia, Inmaculada C.; Shin, An-Dong; Sonnenburg, Heinz Guenther; Umidova, Zeynab; Zhang, Jinzhao; Voglewede, John

    2013-01-01

    Reactivity-initiated accident (RIA) fuel rod codes have been developed for a significant period of time and they all have shown their ability to reproduce some experimental results with a certain degree of adequacy. However, they sometimes rely on different specific modelling assumptions the influence of which on the final results of the calculations is difficult to evaluate. The NEA Working Group on Fuel Safety (WGFS) is tasked with advancing the understanding of fuel safety issues by assessing the technical basis for current safety criteria and their applicability to high burnup and to new fuel designs and materials. The group aims at facilitating international convergence in this area, including the review of experimental approaches as well as the interpretation and use of experimental data relevant for safety. As a contribution to this task, WGFS conducted a RIA code benchmark based on RIA tests performed in the Nuclear Safety Research Reactor in Tokai, Japan and tests performed or planned in CABRI reactor in Cadarache, France. Emphasis was on assessment of different modelling options for RIA fuel rod codes in terms of reproducing experimental results as well as extrapolating to typical reactor conditions. This report provides a summary of the results of this task. (authors)

  20. Statistical benchmark for BosonSampling

    International Nuclear Information System (INIS)

    Walschaers, Mattia; Mayer, Klaus; Buchleitner, Andreas; Kuipers, Jack; Urbina, Juan-Diego; Richter, Klaus; Tichy, Malte Christopher

    2016-01-01

    Boson samplers—set-ups that generate complex many-particle output states through the transmission of elementary many-particle input states across a multitude of mutually coupled modes—promise the efficient quantum simulation of a classically intractable computational task, and challenge the extended Church–Turing thesis, one of the fundamental dogmas of computer science. However, as in all experimental quantum simulations of truly complex systems, one crucial problem remains: how to certify that a given experimental measurement record unambiguously results from enforcing the claimed dynamics, on bosons, fermions or distinguishable particles? Here we offer a statistical solution to the certification problem, identifying an unambiguous statistical signature of many-body quantum interference upon transmission across a multimode, random scattering device. We show that statistical analysis of only partial information on the output state allows one to characterise the imparted dynamics through particle-type-specific features of the emerging interference patterns. The relevant statistical quantifiers are classically computable, define a falsifiable benchmark for BosonSampling, and reveal distinctive features of many-particle quantum dynamics, which go much beyond mere bunching or anti-bunching effects. (fast track communication)

  1. Benchmarking strategies for measuring the quality of healthcare: problems and prospects.

    Science.gov (United States)

    Lovaglio, Pietro Giorgio

    2012-01-01

    Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after introducing readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. In particular, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed.

  2. Benchmarking Strategies for Measuring the Quality of Healthcare: Problems and Prospects

    Science.gov (United States)

    Lovaglio, Pietro Giorgio

    2012-01-01

    Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after introducing readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. In particular, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed. PMID:22666140

  3. Financing gas plants using off balance sheet structures

    International Nuclear Information System (INIS)

    Best, R.J.; Malcolm, V.

    1999-01-01

    A means by which to finance oil and gas facilities using off balance sheet structures was presented. Off balance sheet facility financing means the sale by an oil and gas producer of a processing and/or transportation facility to a financial intermediary, who under a Management Agreement, appoints the producer as the operator of the facility. The financial intermediary charges a fixed processing fee to the producer and all the benefits and upside of ownership are retained by the producer. This paper deals specifically with a flexible off balance sheet facility financing structure that can be used to make effective use of discretionary capital which is committed to gas processing and to the construction of new gas processing facilities. Off balance sheet financing is an attractive alternative method of ownership that frees up capital that is locked into the facilities while allowing the producer to retain strategic control of the processing facility

  4. How benchmarking can improve patient nutrition.

    Science.gov (United States)

    Ellis, Jane

    Benchmarking is a tool that originated in business to enable organisations to compare their services with industry-wide best practice. Early last year the Department of Health published The Essence of Care, a benchmarking toolkit adapted for use in health care. It focuses on eight elements of care that are crucial to patients' experiences. Nurses and other health care professionals at a London NHS trust have begun a trust-wide benchmarking project. The aim is to improve patients' experiences of health care by sharing and comparing information, and by identifying examples of good practice and areas for improvement. The project began with two of the eight elements of The Essence of Care, with the intention of covering the rest later. This article describes the benchmarking process for nutrition and some of the consequent improvements in care.

  5. Benchmarking and validation activities within JEFF project

    Directory of Open Access Journals (Sweden)

    Cabellos O.

    2017-01-01

    The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.

  6. Measuring Distribution Performance? Benchmarking Warrants Your Attention

    Energy Technology Data Exchange (ETDEWEB)

    Ericson, Sean J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Alvarez, Paul [The Wired Group

    2018-04-13

    Identifying, designing, and measuring performance metrics is critical to securing customer value, but can be a difficult task. This article examines the use of benchmarks based on publicly available performance data to set challenging, yet fair, metrics and targets.

  7. Numerical methods: Analytical benchmarking in transport theory

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1988-01-01

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computers and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered.

  8. Benchmarking Linked Open Data Management Systems

    NARCIS (Netherlands)

    R. Angles Rojas (Renzo); M.-D. Pham (Minh-Duc); P.A. Boncz (Peter)

    2014-01-01

    With inherent support for storing and analysing highly interconnected data, graph and RDF databases appear to be natural solutions for developing Linked Open Data applications. However, current benchmarks for these database technologies do not fully attain the desirable characteristics …

  9. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    … controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based …

  10. Second benchmark problem for WIPP structural computations

    International Nuclear Information System (INIS)

    Krieg, R.D.; Morgan, H.S.; Hunter, T.O.

    1980-12-01

    This report describes the second benchmark problem for comparison of the structural codes used in the WIPP project. The first benchmark problem consisted of heated and unheated drifts at a depth of 790 m, whereas this problem considers a shallower level (650 m) more typical of the repository horizon. But more important, the first problem considered a homogeneous salt configuration, whereas this problem considers a configuration with 27 distinct geologic layers, including 10 clay layers - 4 of which are to be modeled as possible slip planes. The inclusion of layering introduces complications in structural and thermal calculations that were not present in the first benchmark problem. These additional complications will be handled differently by the various codes used to compute drift closure rates. This second benchmark problem will assess these codes by evaluating the treatment of these complications

  11. Reactor fuel depletion benchmark of TINDER

    International Nuclear Information System (INIS)

    Martin, W.J.; Oliveira, C.R.E. de; Hecht, A.A.

    2014-01-01

    Highlights: • A reactor burnup benchmark of TINDER, coupling MCNP6 to CINDER2008, was performed. • TINDER is a poor candidate for fuel depletion calculations using its current libraries. • Data library modification is necessary if fuel depletion is desired from TINDER. - Abstract: Accurate burnup calculations are key to proper nuclear reactor design, fuel cycle modeling, and disposal estimations. The TINDER code, originally designed for activation analyses, has been modified to handle full burnup calculations, including the widely used predictor–corrector feature. In order to properly characterize the performance of TINDER for this application, a benchmark calculation was performed. Although the results followed the trends of past benchmarked codes for a UO2 PWR fuel sample from the Takahama-3 reactor, there were obvious deficiencies in the final result, likely in the nuclear data library that was used. Isotopic comparisons versus experiment and past code benchmarks are given, as well as hypothesized areas of deficiency and future work.

  12. Benchmarking and validation activities within JEFF project

    Science.gov (United States)

    Cabellos, O.; Alvarez-Velarde, F.; Angelone, M.; Diez, C. J.; Dyrda, J.; Fiorito, L.; Fischer, U.; Fleming, M.; Haeck, W.; Hill, I.; Ichou, R.; Kim, D. H.; Klix, A.; Kodeli, I.; Leconte, P.; Michel-Sendis, F.; Nunnenmann, E.; Pecchia, M.; Peneliau, Y.; Plompen, A.; Rochman, D.; Romojaro, P.; Stankovskiy, A.; Sublet, J. Ch.; Tamagno, P.; Marck, S. van der

    2017-09-01

    The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus, requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.

  13. 2012 Swimming Season Fact Sheets

    Science.gov (United States)

    To help beachgoers make informed decisions about swimming at U.S. beaches, EPA annually publishes state-by-state data about beach closings and advisories for the previous year's swimming season. These fact sheets summarize that information by state.

  14. State Fact Sheets on COPD

    Science.gov (United States)

    State-by-state fact sheets from the CDC providing data and statistics on chronic obstructive pulmonary disease (COPD).

  15. BENCHMARKING ORTEC ISOTOPIC MEASUREMENTS AND CALCULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Dewberry, R; Raymond Sigg, R; Vito Casella, V; Nitin Bhatt, N

    2008-09-29

    This report describes a set of compiled benchmark tests conducted to probe and demonstrate the extensive utility of the Ortec ISOTOPIC {gamma}-ray analysis computer program. The ISOTOPIC program performs analyses of {gamma}-ray spectra applied to specific acquisition configurations in order to apply finite-geometry correction factors and sample-matrix-container photon absorption correction factors. The analysis program provides an extensive set of preset acquisition configurations to which the user can add relevant parameters in order to build the geometry and absorption correction factors that the program determines analytically from nuclear {gamma}-ray absorption and scatter data. The Analytical Development Section field nuclear measurement group of the Savannah River National Laboratory uses the Ortec ISOTOPIC analysis program extensively for analyses of solid waste and process holdup applied to passive {gamma}-ray acquisitions. Frequently the results of these {gamma}-ray acquisitions and analyses are used to determine compliance with facility criticality safety guidelines. Another use of the results is to designate 55-gallon drum solid waste as qualified TRU waste or as low-level waste. Other examples of the application of the ISOTOPIC analysis technique to passive {gamma}-ray acquisitions include analyses of standard waste box items and unique solid waste configurations. In many passive {gamma}-ray acquisition circumstances the container and sample have sufficient density that the calculated energy-dependent transmission correction factors have intrinsic uncertainties in the range 15%-100%. This is frequently the case when assaying 55-gallon drums of solid waste with masses of up to 400 kg and when assaying solid waste in extensive unique containers. Often an accurate assay of the transuranic content of these containers is not required, but rather a defensible designation as >100 nCi/g (TRU waste) or <100 nCi/g (low-level solid waste) is required.

  16. Australian Government Balance Sheet Management

    OpenAIRE

    Wilson Au-Yeung; Jason McDonald; Amanda Sayegh

    2006-01-01

    Since almost eliminating net debt, the Australian Government's attention has turned to the financing of broader balance sheet liabilities, such as public sector superannuation. Australia will be developing a significant financial asset portfolio in the 'Future Fund' to smooth the financing of expenses through time. This raises the significant policy question of how best to manage the government balance sheet to reduce risk. This paper provides a framework for optimal balance sheet...

  17. Energy information sheets, July 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-07-01

    The National Energy Information Center (NEIC), as part of its mission, provides energy information and referral assistance to Federal, State, and local governments, the academic community, business and industrial organizations, and the public. The Energy Information Sheets was developed to provide general information on various aspects of fuel production, prices, consumption, and capability. Additional information on related subject matter can be found in other Energy Information Administration (EIA) publications as referenced at the end of each sheet.

  18. Energy information sheets, September 1996

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The National Energy Information Center (NEIC), as part of its mission, provides energy information and referral assistance to Federal, State, and local governments, the academic community, business and industrial organizations, and the public. The Energy Information Sheets was developed to provide general information on various aspects of fuel production, prices, consumption, and capability. Additional information on related subject matter can be found in other Energy Information Administration (EIA) publications as referenced at the end of each sheet.

  19. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes...... attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes....

  20. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Full Text Available Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills in simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure the performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performance and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics for measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should focus on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties.

  1. EPRI depletion benchmark calculations using PARAGON

    International Nuclear Information System (INIS)

    Kucukboyaci, Vefa N.

    2015-01-01

    Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • Benchmarks cover a wide range of enrichments, burnups, cooling times, and burnable absorbers, and different depletion and storage conditions. • Results from the PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII based data reduce the excess conservatism and bring the predictions closer to the benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse's burnup credit methodology uses PARAGON™ (Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivity has been used to create 11 benchmark cases for burnups of 10, 20, 30, 40, 50, and 60 GWd/MTU and 3 cooling times (100 h, 5 years, and 15 years). These benchmark cases are analyzed with PARAGON and the SCALE package, and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to confirm that the 5% decrement approach is conservative for determining depletion uncertainty.
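    The reactivity decrements compared in this benchmark are differences in reactivity between fresh and depleted fuel. As a hedged illustration of the underlying arithmetic only (the k-eff values below are invented placeholders, not results from PARAGON, SCALE, or the EPRI cases):

```python
# Illustrative arithmetic only: k-eff values are invented, not benchmark results.
def reactivity_pcm(k_eff):
    """Reactivity rho = (k - 1) / k, expressed in pcm (units of 1e-5)."""
    return (k_eff - 1.0) / k_eff * 1.0e5

def depletion_decrement_pcm(k_fresh, k_depleted):
    """Loss of reactivity in going from fresh to depleted fuel."""
    return reactivity_pcm(k_fresh) - reactivity_pcm(k_depleted)

decrement = depletion_decrement_pcm(k_fresh=1.15, k_depleted=1.02)
```

    A flat-uncertainty approach of the kind set forth in the Kopp memo would then apply a 5% margin to such decrements rather than an isotope-by-isotope bias.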

  2. Benchmarking infrastructure for mutation text mining.

    Science.gov (United States)

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
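    The SPARQL-computed metrics the abstract mentions reduce to standard precision/recall arithmetic over sets of annotations. A minimal Python sketch of that arithmetic (the document and mutation identifiers below are invented for illustration; the actual infrastructure evaluates these via SPARQL over RDF-encoded corpora):

```python
# Hypothetical gold vs. predicted (document, mutation) annotation pairs.
def precision_recall_f1(gold, predicted):
    tp = len(gold & predicted)                       # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

gold = {("doc1", "E545K"), ("doc1", "H1047R"), ("doc2", "V600E")}
pred = {("doc1", "E545K"), ("doc2", "V600E"), ("doc2", "T790M")}
p, r, f = precision_recall_f1(gold, pred)   # 2 of 3 correct in each direction
```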

  3. Ad hoc committee on reactor physics benchmarks

    International Nuclear Information System (INIS)

    Diamond, D.J.; Mosteller, R.D.; Gehin, J.C.

    1996-01-01

    In the spring of 1994, an ad hoc committee on reactor physics benchmarks was formed under the leadership of two American Nuclear Society (ANS) organizations. The ANS-19 Standards Subcommittee of the Reactor Physics Division and the Computational Benchmark Problem Committee of the Mathematics and Computation Division had both seen a need for additional benchmarks to help validate computer codes used for light water reactor (LWR) neutronics calculations. Although individual organizations had employed various means to validate the reactor physics methods that they used for fuel management, operations, and safety, additional work in code development and refinement is under way, and to increase accuracy, there is a need for a corresponding increase in validation. Both organizations thought that there was a need to promulgate benchmarks based on measured data to supplement the LWR computational benchmarks that have been published in the past. By having an organized benchmark activity, the participants also gain by being able to discuss their problems and achievements with others traveling the same route

  4. Benchmarking for controllere: metoder, teknikker og muligheder

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels Erik; Dietrichson, Lars Grubbe

    2008-01-01

    Benchmarking features in many ways in the management practice of both private and public organisations. In management accounting, benchmark-based indicators (or key figures) are used, for example when setting targets in performance contracts or to specify the desired level of certain indicators in a Balanced Scorecard or similar performance-management models. The article explains the concept of benchmarking by presenting and discussing its various facets, and describes four different applications of benchmarking to show the breadth of the concept and the importance of clarifying the purpose of a benchmarking project. It then treats the difference between results benchmarking and process benchmarking, followed by the use of internal versus external benchmarking and the use of benchmarking in budgeting and budget follow-up.

  5. Pool critical assembly pressure vessel facility benchmark

    International Nuclear Information System (INIS)

    Remec, I.; Kam, F.B.K.

    1997-07-01

    This pool critical assembly (PCA) pressure vessel wall facility benchmark (PCA benchmark) is described and analyzed in this report. Analysis of the PCA benchmark can be used for partial fulfillment of the requirements for the qualification of the methodology for pressure vessel neutron fluence calculations, as required by the US Nuclear Regulatory Commission regulatory guide DG-1053. Section 1 of this report describes the PCA benchmark and provides all data necessary for the benchmark analysis. The measured quantities, to be compared with the calculated values, are the equivalent fission fluxes. In Section 2 the analysis of the PCA benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed for three ENDF/B-VI-based multigroup libraries: BUGLE-93, SAILOR-95, and BUGLE-96. An excellent agreement of the calculated (C) and measured (M) equivalent fission fluxes was obtained. The arithmetic average C/M for all the dosimeters (total of 31) was 0.93 ± 0.03 and 0.92 ± 0.03 for the SAILOR-95 and BUGLE-96 libraries, respectively. The average C/M ratio, obtained with the BUGLE-93 library, for the 28 measurements was 0.93 ± 0.03 (the neptunium measurements in the water and air regions were overpredicted and excluded from the average). No systematic decrease in the C/M ratios with increasing distance from the core was observed for any of the libraries used
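    The benchmark's headline numbers are arithmetic averages of calculated-to-measured (C/M) equivalent fission flux ratios with their standard deviations. A small sketch of that summary statistic (the ratios below are invented stand-ins, not the DORT dosimeter results):

```python
from statistics import mean, stdev

# Invented C/M ratios standing in for the dosimeter-by-dosimeter results.
cm_ratios = [0.91, 0.95, 0.93, 0.90, 0.94, 0.92, 0.96, 0.93]
avg = mean(cm_ratios)
spread = stdev(cm_ratios)
print(f"average C/M = {avg:.2f} +/- {spread:.2f}")
```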

  6. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600

  7. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: to pilot test benchmarking as an EM cost improvement tool; to identify areas for cost improvement and recommend actions to address them; and to provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison, and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences, and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  8. Benchmarking local healthcare-associated infections: Available benchmarks and interpretation challenges

    Directory of Open Access Journals (Sweden)

    Aiman El-Saed

    2013-10-01

    Full Text Available Summary: Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infections (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restriction. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when considering such a benchmark. The GCC center for infection control has taken some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons. Keywords: Benchmarking, Comparison, Surveillance, Healthcare-associated infections

  9. Spread-sheet application to classify radioactive material for shipment

    International Nuclear Information System (INIS)

    Brown, A.N.

    1998-01-01

    A spread-sheet application has been developed at the Idaho National Engineering and Environmental Laboratory to aid the shipper in classifying nuclide mixtures of normal-form radioactive materials. The results generated by this spread-sheet are used to confirm the proper US DOT classification when offering radioactive material packages for transport. The user must input to the spread-sheet the mass of the material being classified, the physical form (liquid or not), and the activity of each regulated nuclide. The spread-sheet uses these inputs to calculate two general values: (1) the specific activity of the material and (2) a summation calculation of the nuclide content. The specific activity is used to determine whether the material exceeds the DOT minimal threshold for a radioactive material. If the material is calculated to be radioactive, the specific activity is also used to determine whether the material meets the activity requirement for one of the three low specific activity designations (LSA-I, LSA-II, LSA-III, or not LSA). Again, if the material is calculated to be radioactive, the summation calculation is then used to determine which activity category the material meets (Limited Quantity, Type A, Type B, or Highway Route Controlled Quantity). This spread-sheet has proven to be an invaluable aid for shippers of radioactive materials at the Idaho National Engineering and Environmental Laboratory. (authors)
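    The decision flow the abstract describes can be sketched as follows. This is a hedged illustration of the spread-sheet's logic only: the nuclide activities and all threshold values are placeholder numbers, not the actual DOT (49 CFR 173) limit tables.

```python
# All limits and activities below are hypothetical placeholders.
def classify(mass_g, activities_bq, exempt_conc_bq_per_g, a2_limits_bq):
    """activities_bq and the two limit tables are dicts keyed by nuclide."""
    # Step 1: specific-activity test -- is this radioactive material at all?
    radioactive = any(
        act / mass_g > exempt_conc_bq_per_g[nuc]
        for nuc, act in activities_bq.items())
    if not radioactive:
        return "not regulated as radioactive material"
    # Step 2: summation rule -- sum each nuclide's fraction of its limit.
    fraction_sum = sum(
        act / a2_limits_bq[nuc] for nuc, act in activities_bq.items())
    return "Type A quantity" if fraction_sum <= 1.0 else "exceeds Type A"

result = classify(
    mass_g=1000.0,
    activities_bq={"Cs-137": 5.0e7, "Co-60": 1.0e7},
    exempt_conc_bq_per_g={"Cs-137": 10.0, "Co-60": 10.0},
    a2_limits_bq={"Cs-137": 6.0e11, "Co-60": 4.0e11},
)
```

    The real spread-sheet additionally distinguishes the LSA-I/II/III designations and the Type B and Highway Route Controlled categories, each of which is a further threshold comparison of the same two computed quantities.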

  10. Lesson learned from the SARNET wall condensation benchmarks

    International Nuclear Information System (INIS)

    Ambrosini, W.; Forgione, N.; Merli, F.; Oriolo, F.; Paci, S.; Kljenak, I.; Kostka, P.; Vyskocil, L.; Travis, J.R.; Lehmkuhl, J.; Kelm, S.; Chin, Y.-S.; Bucci, M.

    2014-01-01

    Highlights: • The results of the benchmarking activity on wall condensation are reported. • The work was performed in the frame of SARNET. • General modelling techniques for condensation are discussed. • Results of the University of Pisa and of other benchmark participants are discussed. • The lesson learned is drawn. - Abstract: The prediction of condensation in the presence of noncondensable gases has received continuing attention in the frame of the Severe Accident Research Network of Excellence, both in the first (2004–2008) and in the second (2009–2013) EC integrated projects. Among the different reasons for considering this basic phenomenon so relevant, addressed by classical treatments dating from the first decades of the last century, is the interest in developing updated CFD models for reactor containment analysis, which requires validating the available modelling techniques at a different level. In the frame of SARNET, benchmarking activities were undertaken, taking advantage of the work performed at different institutions in setting up and developing models for steam condensation in conditions of interest for nuclear reactor containment. Four steps were performed in the activity, involving: (1) an idealized problem freely inspired by the actual conditions occurring in an experimental facility, CONAN, installed at the University of Pisa; (2) a first comparison with experimental data purposely collected with the CONAN facility; (3) a second comparison with data available from experimental campaigns performed in the same apparatus before the inclusion of the activities in SARNET; (4) a third exercise involving data obtained at lower mixture velocity than in previous campaigns, aimed at providing conditions closer to those addressed in reactor containment analyses. The last step of the benchmarking activity required changing the configuration of the experimental apparatus to achieve the lower flow rates involved in the new test specifications.

  11. Benchmarking passive transfer of immunity and growth in dairy calves.

    Science.gov (United States)

    Atkinson, D J; von Keyserlingk, M A G; Weary, D M

    2017-05-01

    Poor health and growth in young dairy calves can have lasting effects on their development and future production. This study benchmarked calf-rearing outcomes in a cohort of Canadian dairy farms, reported these findings back to producers and their veterinarians, and documented the results. A total of 18 Holstein dairy farms were recruited, all in British Columbia. Blood samples were collected from calves aged 1 to 7 d. We estimated serum total protein levels using digital refractometry, and failure of passive transfer (FPT) was defined as values below 5.2 g/dL. We estimated average daily gain (ADG) for preweaned heifers (1 to 70 d old) using heart-girth tape measurements, and analyzed early (≤35 d) and late (>35 d) growth separately. At first assessment, the average farm FPT rate was 16%. Overall, ADG was 0.68 kg/d, with early and late growth rates of 0.51 and 0.90 kg/d, respectively. Following delivery of the benchmark reports, all participants volunteered to undergo a second assessment. The majority (83%) made at least 1 change in their colostrum-management or milk-feeding practices, including increased colostrum at first feeding, reduced time to first colostrum, and increased initial and maximum daily milk allowances. The farms that made these changes experienced improved outcomes. On the 11 farms that made changes to improve colostrum feeding, the rate of FPT declined from 21 ± 10% before benchmarking to 11 ± 10% after making the changes. On the 10 farms that made changes to improve calf growth, ADG improved from 0.66 ± 0.09 kg/d before benchmarking to 0.72 ± 0.08 kg/d after making the management changes. Increases in ADG were greatest in the early milk-feeding period, averaging 0.13 kg/d higher than pre-benchmarking values for calves ≤35 d of age. Benchmarking specific outcomes associated with calf rearing can motivate producer engagement in calf care, leading to improved outcomes for calves on farms that apply relevant management changes.
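    The two benchmarked outcomes reduce to simple calculations over herd measurements. A minimal sketch with invented sample values (the 5.2 g/dL cutoff is the one used in the study; all other numbers are illustrative):

```python
FPT_CUTOFF_G_PER_DL = 5.2   # serum total protein threshold from the study

def fpt_rate(serum_total_protein_g_dl):
    """Fraction of sampled calves with failure of passive transfer."""
    below = sum(1 for stp in serum_total_protein_g_dl
                if stp < FPT_CUTOFF_G_PER_DL)
    return below / len(serum_total_protein_g_dl)

def average_daily_gain(weight_start_kg, weight_end_kg, days):
    """ADG in kg/d between two heart-girth weight estimates."""
    return (weight_end_kg - weight_start_kg) / days

herd = [5.6, 4.9, 6.1, 5.0, 5.8]                 # hypothetical g/dL readings
rate = fpt_rate(herd)                            # 2 of 5 calves below cutoff
gain = average_daily_gain(45.0, 92.6, days=70)   # kg/d over a 70-d period
```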

  12. Benchmarking - a validation of UTDefect

    International Nuclear Information System (INIS)

    Niklasson, Jonas; Bostroem, Anders; Wirdelius, Haakan

    2006-06-01

    New and stronger demands on reliability of used NDE/NDT procedures and methods have stimulated the development of simulation tools of NDT. Modelling of ultrasonic non-destructive testing is useful for a number of reasons, e.g. physical understanding, parametric studies and in the qualification of procedures and personnel. The traditional way of qualifying a procedure is to generate a technical justification by employing experimental verification of the chosen technique. The manufacturing of test pieces is often very expensive and time consuming. It also tends to introduce a number of possible misalignments between the actual NDT situation and the proposed experimental simulation. The UTDefect computer code (SUNDT/simSUNDT) has been developed, together with the Dept. of Mechanics at Chalmers Univ. of Technology, during a decade and simulates the entire ultrasonic testing situation. A thorough validated model has the ability to be an alternative and a complement to the experimental work in order to reduce the extensive cost. The validation can be accomplished by comparisons with other models, but ultimately by comparisons with experiments. This project addresses the last alternative but provides an opportunity to, in a later stage, compare with other software when all data are made public and available. The comparison has been with experimental data from an international benchmark study initiated by the World Federation of NDE Centers. The experiments have been conducted with planar and spherically focused immersion transducers. The defects considered are side-drilled holes, flat-bottomed holes, and a spherical cavity. The data from the experiments are a reference signal used for calibration (the signal from the front surface of the test block at normal incidence) and the raw output from the scattering experiment. In all, more than forty cases have been compared. 
The agreement between UTDefect and the experiments was in general good (deviation less than 2 dB).

  13. FDTD modeling of thin impedance sheets

    Science.gov (United States)

    Luebbers, Raymond J.; Kunz, Karl S.

    1991-01-01

    Thin sheets of resistive or dielectric material are commonly encountered in radar cross section calculations. Analysis of such sheets is simplified by using sheet impedances. In this paper it is shown that sheet impedances can be modeled easily and accurately using Finite Difference Time Domain (FDTD) methods.

  14. Springback study in aluminum alloys based on the Demeri Benchmark Test: influence of material model

    International Nuclear Information System (INIS)

    Greze, R.; Laurent, H.; Manach, P. Y.

    2007-01-01

    Springback is a serious problem in sheet metal forming. Its origin lies in the elastic recovery of the material after a deep drawing operation. Springback modifies the final shape of the part when it is removed from the die after forming. This study deals with springback in an Al5754-O aluminum alloy. An experimental test similar to the Demeri Benchmark Test has been developed. The experimentally measured springback is compared to springback predicted by simulation using Abaqus software. Several material models are analyzed, all using isotropic hardening of the Voce type and yield criteria such as von Mises and Hill48.
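    The Voce-type isotropic hardening shared by the material models describes flow stress saturating with plastic strain. A hedged sketch of one common Voce form (the parameter values are illustrative, not the fitted Al5754-O constants):

```python
import math

def voce_stress(eps_p, sig_0, sig_sat, c_rate):
    """Voce law: sigma = sig_sat - (sig_sat - sig_0) * exp(-c_rate * eps_p)."""
    return sig_sat - (sig_sat - sig_0) * math.exp(-c_rate * eps_p)

# Flow stress (MPa) rises from the initial yield stress toward saturation.
curve = [voce_stress(e, sig_0=95.0, sig_sat=250.0, c_rate=9.0)
         for e in (0.0, 0.05, 0.2)]
```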

  15. NREL Benchmarks the Installed Cost of Residential Solar Photovoltaics with Energy Storage for the First Time

    Energy Technology Data Exchange (ETDEWEB)

    2017-06-13

    Fact sheet summarizing technical report TP-7A40-67474. New National Renewable Energy Laboratory research fills a gap in the existing knowledge about barriers to PV-plus-storage systems by providing detailed component- and system-level installed cost benchmarks for systems in the first quarter of 2016. The report is meant to help technology manufacturers, installers, and other stakeholders identify cost-reduction opportunities and inform decision makers about regulatory, policy, and market characteristics that impede PV-plus-storage deployment.

  16. Behavior of protruding lateral plane graphene sheets in liquid dodecane: molecular dynamics simulations

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Shenghui; Sun, Shuangqing, E-mail: sunshuangqing@upc.edu.cn; Li, Chunling [China University of Petroleum (East China), College of Science (China); Pittman, Charles U. [Mississippi State University, Department of Chemistry (United States); Lacy, Thomas E. [Mississippi State University, Department of Aerospace Engineering (United States); Hu, Songqing, E-mail: songqinghu@upc.edu.cn [China University of Petroleum (East China), College of Science (China); Gwaltney, Steven R. [Mississippi State University, Department of Chemistry (United States)

    2016-11-15

    Molecular dynamics simulations are used to investigate the behavior of two parallel graphene sheets fixed on one edge (lateral plane) in liquid dodecane. The interactions of these sheets and dodecane molecules are studied with different starting inter-sheet distances. The structure of the dodecane solvent is also analyzed. The results show that when the distance between the two graphene sheets is short (less than 6.8 Å), the sheets will expel the dodecane molecules between them and stack together. However, when the distance between two sheets is large (greater than 10.2 Å), the two sheets do not come together, and the dodecane molecules will form ordered layers in the interlayer spacing. The equilibrium distance between the graphene sheets can only take on specific discrete values (3.4, 7.8, and 12.1 Å), because only an integer number of dodecane layers forms between the two sheets. Once the graphene sheets are in contact, they remain in contact; the sheets do not separate to allow dodecane into the interlayer spacing.
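The discrete equilibrium distances reported above (3.4, 7.8 and 12.1 Å) are consistent with a contact spacing plus an integer number of dodecane layers. A minimal sketch of that arithmetic; the per-layer thickness is inferred from the reported values, not stated in the record.

```python
# Reported equilibrium inter-sheet distances (angstroms) from the simulations.
observed = [3.4, 7.8, 12.1]

# Spacing added per confined dodecane layer, inferred from consecutive differences.
layer_steps = [b - a for a, b in zip(observed, observed[1:])]
mean_layer = sum(layer_steps) / len(layer_steps)  # roughly 4.35 A per layer

# Model: graphene contact spacing plus an integer number of dodecane layers.
predicted = [round(observed[0] + n * mean_layer, 1) for n in range(3)]
```

The near-constant step between successive equilibrium distances is the signature of layering: only whole solvent layers fit between the sheets.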

  17. International Handbook of Evaluated Criticality Safety Benchmark Experiments - ICSBEP (DVD), Version 2013

    International Nuclear Information System (INIS)

    2013-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various nuclear critical experiment facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculational techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirements and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross-section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span nearly 66,000 pages and contain 558 evaluations with benchmark specifications for 4,798 critical, near-critical or subcritical configurations, 24 criticality alarm placement/shielding configurations with multiple dose points for each, and 200 configurations that have been categorised as fundamental physics measurements relevant to criticality safety applications. New to the Handbook are benchmark specifications for Critical, Bare, HEU(93.2)-Metal Sphere experiments, referred to as ORSphere, that were performed by a team of experimenters at Oak Ridge National Laboratory in the early 1970s. A photograph of this assembly is shown on the front cover.

  18. The Development of a Benchmark Tool for NoSQL Databases

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2013-07-01

    Full Text Available The aim of this article is to describe a proposed benchmark methodology and software application targeted at measuring the performance of both SQL and NoSQL databases. These represent results obtained during PhD research (actually part of a larger application intended for NoSQL database management). A reason for aiming at this particular subject is the near-complete lack of benchmarking tools for NoSQL databases, except for YCSB [1] and a benchmark tool made specifically to compare Redis to RavenDB. While there are several well-known benchmarking systems for classical relational databases (starting with the canonical TPC-C, TPC-E and TPC-H), on the NoSQL side of the database world such tools are mostly missing and seriously needed.
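The core of a throughput measurement of the kind such a tool performs can be sketched as below. The `bench` helper and the dict-backed store are hypothetical stand-ins for real database driver calls, not part of the application described in the record.

```python
import time

def bench(op, n):
    """Time n repetitions of a database operation and return operations/second.

    'op' stands in for any driver call (insert, read, update, ...); a real
    benchmark tool would point it at SQL and NoSQL engines alike.
    """
    start = time.perf_counter()
    for i in range(n):
        op(i)
    elapsed = time.perf_counter() - start
    return n / elapsed if elapsed > 0 else float("inf")

# Stand-in workload: an in-memory dict acting as a trivial key-value store.
store = {}
ops_per_sec = bench(lambda i: store.__setitem__(i, str(i)), 10_000)
```

A real harness would add warm-up runs, multiple repetitions and latency percentiles, but the measure-and-normalize loop is the common core.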

  19. BUGLE-93 (ENDF/B-VI) cross-section library data testing using shielding benchmarks

    International Nuclear Information System (INIS)

    Hunter, H.T.; Slater, C.O.; White, J.E.

    1994-01-01

    Several integral shielding benchmarks were selected to perform data testing for new multigroup cross-section libraries compiled from ENDF/B-VI data for light water reactor (LWR) shielding and dosimetry. The new multigroup libraries, BUGLE-93 and VITAMIN-B6, were studied to establish their reliability and response to the benchmark measurements through the use of the radiation transport codes ANISN and DORT. Direct comparisons of BUGLE-93 and VITAMIN-B6 to BUGLE-80 (ENDF/B-IV) and VITAMIN-E (ENDF/B-V) were also performed. Some benchmarks involved the nuclides used in LWR shielding and dosimetry applications, and some were sensitive to specific nuclear data, e.g. iron, owing to its dominant use in nuclear reactor systems and its complex set of cross-section resonances. Five shielding benchmarks (four experimental and one calculational) are described and results are presented.

  20. Benchmarking of hospital information systems – a comparative analysis of German-speaking benchmarking clusters

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning the costs, performance and efficiency of their information systems and information management against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme covers both the general conditions and the examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their forms of cooperation. The benchmarking clusters also deal with different benchmarking subjects. Assessing the costs and quality of application systems, physical data-processing systems, organizational structures of information management and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and the quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  1. Raising Quality and Achievement. A College Guide to Benchmarking.

    Science.gov (United States)

    Owen, Jane

    This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…

  2. Benchmarks: The Development of a New Approach to Student Evaluation.

    Science.gov (United States)

    Larter, Sylvia

    The Toronto Board of Education Benchmarks are libraries of reference materials that demonstrate student achievement at various levels. Each library contains video benchmarks, print benchmarks, a staff handbook, and summary and introductory documents. This book is about the development and the history of the benchmark program. It has taken over 3…

  3. On Jovian plasma sheet structure

    International Nuclear Information System (INIS)

    Khurana, K.K.; Kivelson, M.G.

    1989-01-01

    The authors evaluate several models of Jovian plasma sheet structure by determining how well they organize several aspects of the observed Voyager 2 magnetic field characteristics as a function of Jovicentric radial distance. It is shown that in the local time sector of the Voyager 2 outbound pass (near 0300 LT) the published hinged-magnetodisc models with wave (i.e., models corrected for finite wave velocity effects) are more successful than the published magnetic anomaly model in predicting locations of current sheet crossings. They also consider the boundary between the plasma sheet and the magnetotail lobe, which is expected to vary slowly with radial distance. They use this boundary location as a further test of the models of the magnetotail. They show that the compressional MHD waves have much smaller amplitude in the lobes than in the plasma sheet and use this criterion to refine the identification of the plasma-sheet-lobe boundary. When the locations of crossings into and out of the lobes are examined, it becomes evident that the magnetic anomaly model yields a flaring plasma sheet with a halfwidth of ∼3 R_J at a radial distance of 20 R_J and ∼12 R_J at a radial distance of 100 R_J. The hinged-magnetodisc models with wave, on the other hand, predict a halfwidth of ∼3.5 R_J independent of distance beyond 20 R_J. New optimized versions of the two models locate both the current sheet crossings and lobe encounters equally successfully. The optimized hinged-magnetodisc model suggests that the wave velocity decreases with increasing radial distance. The optimized magnetic anomaly model yields lower velocity contrast than the model of Vasyliunas and Dessler (1981).

  4. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no-slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  5. Present status and extensions of the Monte Carlo performance benchmark

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.; Petrovic, B.; Martin, W.R.

    2013-01-01

    The NEA Monte Carlo Performance benchmark started in 2011, aiming to monitor over the years the ability to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the results contributed thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common-type computer nodes. However, on true supercomputers the speedup of parallel calculations keeps increasing up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited for further investigations of full-core Monte Carlo calculations, and a need is felt for testing issues other than computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems: for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and for studying the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition are discussed. (authors)
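The quoted figure of about 100 billion histories for ~1% statistical accuracy reflects the standard 1/√N scaling of Monte Carlo uncertainty. A minimal sketch of the extrapolation; the function name and the scenario are illustrative, not taken from the benchmark specification.

```python
def histories_for_accuracy(target_rel_err, ref_histories, ref_rel_err):
    """Monte Carlo statistical error scales as 1/sqrt(N), so
    N_target = N_ref * (err_ref / err_target) ** 2."""
    return ref_histories * (ref_rel_err / target_rel_err) ** 2

# If 1e11 histories give ~1% relative error in a small fuel zone (the figure
# quoted in the abstract), halving the error to 0.5% needs four times as many.
n_half = histories_for_accuracy(0.005, 1e11, 0.01)
```

The quadratic cost of tighter statistics is why full-core pin-by-pin Monte Carlo depends so strongly on parallel scalability.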

  6. Energy efficiency benchmarking of energy-intensive industries in Taiwan

    International Nuclear Information System (INIS)

    Chan, David Yih-Liang; Huang, Chi-Feng; Lin, Wei-Chun; Hong, Gui-Bing

    2014-01-01

    Highlights: • An analytical tool was applied to estimate the energy-efficiency indicators of energy-intensive industries in Taiwan. • The carbon dioxide emission intensity of selected energy-intensive industries is also evaluated in this study. • The obtained energy-efficiency indicators can serve as a base case for comparison with other regions of the world. • The analysis results can serve as a benchmark for the selected energy-intensive industries. - Abstract: Taiwan imports approximately 97.9% of its primary energy, as rapid economic development has significantly increased energy and electricity demand. Increased energy efficiency is necessary for industry to comply with energy-efficiency indicators and benchmarking. Benchmarking is applied in this work as an analytical tool to estimate the energy-efficiency indicators of major energy-intensive industries in Taiwan and then compare them to other regions of the world. In addition, the carbon dioxide emission intensities of the iron and steel, chemical, cement, textile, and pulp and paper industries are evaluated in this study. In the iron and steel industry, the energy improvement potential of the blast furnace-basic oxygen furnace (BF-BOF) route relative to BPT (best practice technology) is about 28%. Between 2007 and 2011, the average specific energy consumption (SEC) of styrene monomer (SM), purified terephthalic acid (PTA) and low-density polyethylene (LDPE) was 9.6 GJ/ton, 5.3 GJ/ton and 9.1 GJ/ton, respectively. The energy efficiency of pulping would be improved by 33% if BAT (best available technology) were applied. The analysis results can serve as a benchmark for these industries and as a base case for stimulating changes aimed at more efficient energy utilization.
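Improvement potentials of the kind cited above (28% for BF-BOF vs BPT, 33% for pulping vs BAT) follow from comparing actual specific energy consumption to a best-practice reference. A minimal sketch of that comparison; the SEC values used below are hypothetical, since the abstract does not give them per route.

```python
def improvement_potential(sec_actual, sec_best_practice):
    """Fraction by which energy use could drop if best-practice (or best
    available) technology replaced current practice: 1 - SEC_best / SEC_actual."""
    return 1.0 - sec_best_practice / sec_actual

# Hypothetical example: if best practice needed 18 GJ/ton where current
# practice uses 25 GJ/ton, the improvement potential would be 28%.
example = improvement_potential(25.0, 18.0)
```

The same ratio applied to emission factors gives the corresponding CO2-intensity reduction potential.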

  7. Automobile sheet metal part production with incremental sheet forming

    Directory of Open Access Journals (Sweden)

    İsmail DURGUN

    2016-02-01

    Full Text Available Nowadays, the effect of global warming is increasing drastically, which has led to growing interest in energy efficiency and sustainable production methods. As a result of these adverse conditions, national and international project platforms, OEMs (Original Equipment Manufacturers) and SMEs (Small and Mid-size Manufacturers) perform many studies or improve existing methodologies within the scope of advanced manufacturing techniques. In this study, the advanced, sustainable manufacturing method "Incremental Sheet Metal Forming (ISF)" was used for the sheet metal forming process. A vehicle fender was manufactured with and without a die, using different toolpath strategies and die sets. At the end of the study, the results were investigated with respect to the method and parameters used. Keywords: Incremental sheet metal forming, Metal forming

  8. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    Change in construction is high on the agenda for the Danish government, and a comprehensive effort is being made to improve quality and efficiency. This has led to a governmental initiative to bring benchmarking into the Danish construction sector. This paper is an appraisal of benchmarking as it is presently carried out in the Danish construction sector. Many different perceptions of benchmarking, and of the nature of the construction sector, lead to uncertainty in how to perceive and use benchmarking, and hence to uncertainty in understanding its effects. This paper addresses two perceptions of benchmarking: public benchmarking and best practice benchmarking. These two types of benchmarking are used to characterize and discuss the Danish benchmarking system and to examine which effects, possibilities and challenges follow in the wake of using this kind of benchmarking.

  9. BALANCE-SHEET vs. ARBITRAGE CDOs

    Directory of Open Access Journals (Sweden)

    SILVIU EDUARD DINCA

    2016-02-01

    Full Text Available During the past few years, in the post-crisis aftermath, global asset managers have been constantly searching for new ways to optimize their investment portfolios, while financial and banking institutions around the world have been exploring new alternatives to better secure their financing and refinancing demands while enhancing their risk management capabilities. We exhibit herewith a comparison between balance-sheet and arbitrage CDO securitizations as financial-markets-based funding, investment and risk-mitigation techniques, highlighting certain key structuring and implementation specifics of each.

  10. Ice_Sheets_CCI: Essential Climate Variables for the Greenland Ice Sheet

    Science.gov (United States)

    Forsberg, R.; Sørensen, L. S.; Khan, A.; Aas, C.; Evansberget, D.; Adalsteinsdottir, G.; Mottram, R.; Andersen, S. B.; Ahlstrøm, A.; Dall, J.; Kusk, A.; Merryman, J.; Hvidberg, C.; Khvorostovsky, K.; Nagler, T.; Rott, H.; Scharrer, M.; Shepard, A.; Ticconi, F.; Engdahl, M.

    2012-04-01

    As part of the ESA Climate Change Initiative (www.esa-cci.org) a long-term project "ice_sheets_cci" started January 1, 2012, in addition to the existing 11 projects already generating Essential Climate Variables (ECV) for the Global Climate Observing System (GCOS). The "ice_sheets_cci" goal is to generate a consistent, long-term and timely set of key climate parameters for the Greenland ice sheet, to maximize the impact of European satellite data on climate research, from missions such as ERS, Envisat and the future Sentinel satellites. The climate parameters to be provided, at first in a research context, and in the longer perspective by a routine production system, would be grids of Greenland ice sheet elevation changes from radar altimetry, ice velocity from repeat-pass SAR data, as well as time series of marine-terminating glacier calving front locations and grounding lines for floating-front glaciers. The ice_sheets_cci project will involve a broad interaction of the relevant cryosphere and climate communities, first through user consultations and specifications, and later in 2012 optional participation in "best" algorithm selection activities, where prototype climate parameter variables for selected regions and time frames will be produced and validated using an objective set of criteria ("Round-Robin intercomparison"). This comparative algorithm selection activity will be completely open, and we invite all interested scientific groups with relevant experience to participate. The results of the "Round Robin" exercise will form the algorithmic basis for the future ECV production system. First prototype results will be generated and validated by early 2014. The poster will show the planned outline of the project and some early prototype results.

  11. Hanford Site Treated Effluent Disposal Facility process flow sheet

    International Nuclear Information System (INIS)

    Bendixsen, R.B.

    1993-04-01

    This report presents a novel method of using precipitation, destruction and recycle factors to prepare a process flow sheet. The 300 Area Treated Effluent Disposal Facility (TEDF) will treat process sewer waste water from the 300 Area of the Hanford Site, located near Richland, Washington, and discharge a permittable effluent flow into the Columbia River. When completed and operating, the TEDF effluent water flow will meet or exceed water quality standards for the 300 Area process sewer effluents. A preliminary safety analysis document (PSAD), a preconstruction requirement, needed a process flow sheet detailing the concentrations of radionuclides, inorganics and organics throughout the process, including the effluents, and providing estimates of stream flow quantities, activities, composition, and properties (i.e. temperature, pressure, specific gravity, pH and heat transfer rates). As the facility begins to operate, data from process samples can be used to provide better estimates of the factors, the factors can be entered into the flow sheet and the flow sheet will estimate more accurate steady state concentrations for the components. This report shows how the factors were developed and how they were used in developing a flow sheet to estimate component concentrations for the process flows. The report concludes with how TEDF sample data can improve the ability of the flow sheet to accurately predict concentrations of components in the process
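A minimal sketch of how precipitation, destruction and recycle factors of the kind described above might propagate a component concentration through a treatment stage. The report's actual factor definitions are not reproduced in the record, so the functions, the independence assumption between factors, and all numbers below are hypothetical.

```python
def stage_outlet(inlet_conc, precipitation_f, destruction_f):
    """Concentration leaving a treatment stage, assuming the factors act as
    independent fractional removals (a simplification of the flow sheet method)."""
    return inlet_conc * (1.0 - precipitation_f) * (1.0 - destruction_f)

def steady_state_with_recycle(feed_conc, removal_f, recycle_f):
    """Steady-state concentration at the stage inlet when a fraction of the
    treated stream is recycled: c = feed + recycle_f * (1 - removal_f) * c."""
    return feed_conc / (1.0 - recycle_f * (1.0 - removal_f))

# Hypothetical component: 100 ug/L feed, 90% removed by precipitation,
# no destruction, 20% of the treated flow recycled to the head of the plant.
out = stage_outlet(100.0, 0.9, 0.0)
ss = steady_state_with_recycle(100.0, 0.9, 0.2)
```

As the report notes, sample data from the operating facility would replace assumed factors like these, tightening the predicted steady-state concentrations.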

  12. OECD/DOE/CEA VVER-1000 Coolant Transient Benchmark. Summary Record of the First Workshop (V1000-CT1)

    International Nuclear Information System (INIS)

    2003-01-01

    The first workshop for the VVER-1000 Coolant Transient (V1000CT) Benchmark was hosted by the Commissariat a l'Energie Atomique, Centre d'Etudes de Saclay, France. The V1000CT benchmark defines standard problems for validation of coupled three-dimensional (3-D) neutron-kinetics/system thermal-hydraulics codes for application to Soviet-designed VVER-1000 reactors, using actual plant data without any scaling. The overall objective is to assess computer codes used in the safety analysis of VVER power plants, specifically for their use in reactivity transient simulations in a VVER-1000. The V1000CT benchmark consists of two phases: V1000CT-1, simulation of the switching on of one main coolant pump (MCP) while the other three MCPs are in operation; and V1000CT-2, calculation of coolant mixing tests and a Main Steam Line Break (MSLB) scenario. Further background information on this benchmark can be found at the OECD/NEA benchmark web site. The purpose of the first workshop was to review the benchmark activities after the Starter Meeting held last year in Dresden, Germany: to discuss the participants' feedback and the modifications introduced in the Benchmark Specifications for Phase 1; to present and discuss modelling issues and preliminary results from the three exercises of Phase 1; to discuss the modelling issues of Exercise 1 of Phase 2; and to define the work plan and schedule in order to complete the two phases

  13. The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example

    Science.gov (United States)

    Steyn, H. J.

    2015-01-01

    Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…

  14. Uranium mining sites - Thematic sheets

    International Nuclear Information System (INIS)

    2009-01-01

    A first sheet presents comments, data and key numbers about uranium extraction in France: a general overview of uranium mining sites, the status of waste rock and tailings after exploitation, and site rehabilitation. The second sheet addresses the sources of exposure to ionizing radiation associated with former uranium mining sites: a discussion of the identification of these sources, whether due to the mining activities themselves, to tailings, or to the transfer of radioactive substances towards water and the contamination of sediments, and a description of the practice and assessment of radiological monitoring of mining sites. A third sheet addresses the radiological exposure of the public to waste rock, and dose assessment according to exposure scenarios: the main exposure pathways to be considered and the exposure scenarios studied (walking on backfilled paths and grounds, staying in buildings built on waste rock, keeping mineralogical samples at home). The fourth sheet addresses IRSN research programmes on uranium and radon: epidemiological studies (performed on mine workers, on French and European cohorts; French and European studies on the risk of lung cancer associated with radon in housing) and the study of the biological effects of chronic exposure. The last sheet addresses studies and expert assessments performed by the IRSN on former uranium mining sites in France: studies commissioned by public authorities, radioactivity monitoring studies performed by the IRSN at mining sites, and the participation of the IRSN in actions to promote openness to civil society.

  15. Criteria of benchmark selection for efficient flexible multibody system formalisms

    Directory of Open Access Journals (Sweden)

    Valášek M.

    2007-10-01

    Full Text Available The paper deals with the selection process of benchmarks for testing and comparing efficient flexible multibody formalisms. The existing benchmarks are briefly summarized. The purposes of benchmark selection are investigated. The result of this analysis is the formulation of criteria for benchmark selection for flexible multibody formalisms. Based on these criteria, an initial set of suitable benchmarks is described. In addition, the evaluation measures are revised and extended.

  16. NUMISHEET 2016: 10th International Conference and Workshop on Numerical Simulation of 3D Sheet Metal Forming Processes

    International Nuclear Information System (INIS)

    2016-01-01

    The NUMISHEET conference series has been established as a world-class forum through which new intellectual ideas and technologies in the area of sheet metal forming simulation are exchanged. Previous NUMISHEET conferences have made enormous contributions to industry and academia with regard to the development of new methods and ideas for the numerical simulation of sheet metal forming processes. Previous NUMISHEET conferences were held in: Zurich (Switzerland, 1991), Isehara (Japan, 1993), Dearborn (USA, 1996), Besancon (France, 1999), Jeju Island (South Korea, 2002), Detroit (USA, 2005), Interlaken (Switzerland, 2008), Seoul (South Korea, 2011) and Melbourne (Australia, 2014). The NUMISHEET 2016 conference will be held in Bristol, UK. It features technical, keynote and plenary sessions and mini-symposia in diverse sheet metal forming areas, including the recently introduced incremental sheet forming and electromagnetic forming, as well as prominent new numerical methods such as IsoGeometric Analysis and meshless methods for sheet analysis. NUMISHEET 2016 will have eight academic plenary lectures delivered by worldwide recognised experts in the areas of sheet metal forming, material modelling and numerical methods in general. NUMISHEET 2016 will also have three industrial plenary lectures, delivered by three companies with strong businesses in sheet metal forming processes: AutoForm, Crown Technology and Jaguar Land Rover. One of the most distinguishing features of the NUMISHEET conference series is the industrial benchmark sessions, during which numerical simulations of industrially sheet-formed parts are compared with experimental results from industry. The benchmark sessions provide an extraordinary opportunity for networking, for the exchange of technologies related to sheet metal forming and for the numerical validation of sheet metal forming codes/software.
Three benchmark studies have been organised in NUMISHEET 2016: BM1) 'Benchmark

  17. Toxicological benchmarks for wildlife: 1994 Revision

    International Nuclear Information System (INIS)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II.

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report
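The first-tier screening comparison described above is, in effect, a hazard-quotient test: a measured media concentration is divided by the corresponding toxicological benchmark. A minimal sketch, with hypothetical concentrations and benchmark values; the HQ ≥ 1 flagging convention is the standard screening practice, not a detail taken from this report.

```python
def screen(concentration, benchmark):
    """Tier-1 screening: compare a measured media concentration to its
    toxicological benchmark. HQ >= 1 flags the contaminant for the baseline
    (tier-2) ecological risk assessment; HQ < 1 is presumed nonhazardous."""
    hq = concentration / benchmark
    return hq, hq >= 1.0

# Hypothetical values: 0.5 mg/kg measured vs a 2.0 mg/kg wildlife benchmark.
hq, flagged = screen(0.5, 2.0)
```

In the report's framework, a flagged contaminant is not declared hazardous; it simply becomes one line of evidence in the baseline assessment.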

  18. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report.

  19. Benchmarks and statistics of entanglement dynamics

    International Nuclear Information System (INIS)

    Tiersch, Markus

    2009-01-01

    In the present thesis we investigate how the quantum entanglement of multicomponent systems evolves under realistic conditions. More specifically, we focus on open quantum systems coupled to the (uncontrolled) degrees of freedom of an environment. We identify key quantities that describe the entanglement dynamics, and provide efficient tools for its calculation. For quantum systems of high dimension, entanglement dynamics can be characterized with high precision. In the first part of this work, we derive evolution equations for entanglement. These formulas determine the entanglement after a given time in terms of a product of two distinct quantities: the initial amount of entanglement and a factor that merely contains the parameters that characterize the dynamics. The latter is given by the entanglement evolution of an initially maximally entangled state. A maximally entangled state thus benchmarks the dynamics, and hence allows for the immediate calculation or - under more general conditions - estimation of the change in entanglement. Thereafter, a statistical analysis supports that the derived (in-)equalities describe the entanglement dynamics of the majority of weakly mixed and thus experimentally highly relevant states with high precision. The second part of this work approaches entanglement dynamics from a topological perspective. This allows for a quantitative description with a minimum amount of assumptions about Hilbert space (sub-)structure and environment coupling. In particular, we investigate the limit of increasing system size and density of states, i.e. the macroscopic limit. In this limit, a universal behaviour of entanglement emerges following a "reference trajectory", similar to the central role of the entanglement dynamics of a maximally entangled state found in the first part of the present work. (orig.)

  20. Benchmarks and statistics of entanglement dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Tiersch, Markus

    2009-09-04

    In the present thesis we investigate how the quantum entanglement of multicomponent systems evolves under realistic conditions. More specifically, we focus on open quantum systems coupled to the (uncontrolled) degrees of freedom of an environment. We identify key quantities that describe the entanglement dynamics, and provide efficient tools for its calculation. For quantum systems of high dimension, entanglement dynamics can be characterized with high precision. In the first part of this work, we derive evolution equations for entanglement. These formulas determine the entanglement after a given time in terms of a product of two distinct quantities: the initial amount of entanglement and a factor that merely contains the parameters that characterize the dynamics. The latter is given by the entanglement evolution of an initially maximally entangled state. A maximally entangled state thus benchmarks the dynamics, and hence allows for the immediate calculation or - under more general conditions - estimation of the change in entanglement. Thereafter, a statistical analysis supports that the derived (in-)equalities describe the entanglement dynamics of the majority of weakly mixed and thus experimentally highly relevant states with high precision. The second part of this work approaches entanglement dynamics from a topological perspective. This allows for a quantitative description with a minimum amount of assumptions about Hilbert space (sub-)structure and environment coupling. In particular, we investigate the limit of increasing system size and density of states, i.e. the macroscopic limit. In this limit, a universal behaviour of entanglement emerges following a "reference trajectory", similar to the central role of the entanglement dynamics of a maximally entangled state found in the first part of the present work. (orig.)

  1. BN-600 hybrid core benchmark analyses

    International Nuclear Information System (INIS)

    Kim, Y.I.; Stanculescu, A.; Finck, P.; Hill, R.N.; Grimm, K.N.

    2003-01-01

    Benchmark analyses for the hybrid BN-600 reactor, which contains three uranium enrichment zones and one plutonium zone in the core, have been performed within the framework of an IAEA-sponsored Coordinated Research Project. The results for several relevant reactivity parameters obtained by the participants with their own state-of-the-art basic data and codes were compared in terms of calculational uncertainty, and their effects on the unprotected loss-of-flow (ULOF) transient behavior of the hybrid BN-600 core were evaluated. The comparison of the diffusion and transport results obtained for the homogeneous representation generally shows good agreement for most parameters between the RZ and HEX-Z models. The burnup effect and the heterogeneity effect on most reactivity parameters also show good agreement for the HEX-Z diffusion and transport theory results. A large difference noticed for the sodium and steel density coefficients is mainly due to differences in the spatial coefficient predictions for non-fuelled regions. The burnup reactivity loss was evaluated to be 0.025 (4.3 $) within ∼ 5.0% standard deviation. The heterogeneity effect on most reactivity coefficients was estimated to be small. The heterogeneity treatment reduced the control rod worth by 2.3%. The heterogeneity effect on the k-eff and control rod worth appeared to differ strongly depending on the heterogeneity treatment method. A substantial spread noticed for several reactivity coefficients did not have a significant impact on the transient behavior prediction. This result is attributable to compensating effects between several reactivity effects and the specific design of the partially MOX fuelled hybrid core. (author)

  2. Root-growth-inhibiting sheet

    Science.gov (United States)

    Burton, F.G.; Cataldo, D.A.; Cline, J.F.; Skiens, W.E.; Van Voris, P.

    1993-01-26

    In accordance with this invention, a porous sheet material is provided at intervals with bodies of a polymer which contain a 2,6-dinitroaniline. The sheet material is made porous to permit free passage of water. It may be either a perforated sheet or a woven or non-woven textile material. A particularly desirable embodiment is a non-woven fabric of non-biodegradable material. This type of material is known as a "geotextile" and is used for weed control, prevention of erosion on slopes, and other landscaping purposes. In order to obtain a root repelling property, a dinitroaniline is blended with a polymer which is attached to the geotextile or other porous material.

  3. Optimal swimming of a sheet.

    Science.gov (United States)

    Montenegro-Johnson, Thomas D; Lauga, Eric

    2014-06-01

    Propulsion at microscopic scales is often achieved through propagating traveling waves along hairlike organelles called flagella. Taylor's two-dimensional swimming sheet model is frequently used to provide insight into problems of flagellar propulsion. We derive numerically the large-amplitude wave form of the two-dimensional swimming sheet that yields optimum hydrodynamic efficiency: the ratio of the squared swimming speed to the rate-of-working of the sheet against the fluid. Using the boundary element method, we show that the optimal wave form is a front-back symmetric regularized cusp that is 25% more efficient than the optimal sine wave. This optimal two-dimensional shape is smooth, qualitatively different from the kinked form of Lighthill's optimal three-dimensional flagellum, not predicted by small-amplitude theory, and different from the smooth circular-arc-like shape of active elastic filaments.
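
    For orientation, the quantities optimised in this abstract can be made explicit. For Taylor's sine wave y = b sin k(x − ct), the classical small-amplitude swimming speed, and the efficiency measure described above (swimming speed squared over rate of working, made dimensionless with the viscosity; normalisation conventions vary between papers), read:

```latex
% Taylor's swimming speed for a small-amplitude sine wave y = b \sin k(x - ct):
U = \tfrac{1}{2}\, b^{2} k^{2} c \left( 1 - \tfrac{19}{16}\, b^{2} k^{2} + \dots \right),
\qquad
\eta \;\propto\; \frac{\mu\, U^{2}}{\dot{W}},
% where \dot{W} is the rate of working of the sheet against the fluid and
% \mu is the dynamic viscosity. The optimal large-amplitude wave form found
% numerically in the paper improves \eta by 25% over the best sine wave.
```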

  4. Root-growth-inhibiting sheet

    Science.gov (United States)

    Burton, Frederick G.; Cataldo, Dominic A.; Cline, John F.; Skiens, W. Eugene; Van Voris, Peter

    1993-01-01

    In accordance with this invention, a porous sheet material is provided at intervals with bodies of a polymer which contain a 2,6-dinitroaniline. The sheet material is made porous to permit free passage of water. It may be either a perforated sheet or a woven or non-woven textile material. A particularly desirable embodiment is a non-woven fabric of non-biodegradable material. This type of material is known as a "geotextile" and is used for weed control, prevention of erosion on slopes, and other landscaping purposes. In order to obtain a root repelling property, a dinitroaniline is blended with a polymer which is attached to the geotextile or other porous material.

  5. ZZ WPPR, Pu Recycling Benchmark Results

    International Nuclear Information System (INIS)

    Lutz, D.; Mattes, M.; Delpech, Marc; Juanola, Marc

    2002-01-01

    Description of program or function: The NEA NSC Working Party on Physics of Plutonium Recycling has commissioned a series of benchmarks covering: - Plutonium recycling in pressurized-water reactors; - Void reactivity effect in pressurized-water reactors; - Fast Plutonium-burner reactors: beginning of life; - Plutonium recycling in fast reactors; - Multiple recycling in advanced pressurized-water reactors. The results have been published (see references). ZZ-WPPR-1-A/B contains graphs and tables relative to the PWR MOX pin cell benchmark, representing typical fuel for plutonium recycling, one corresponding to a first cycle, the other to a fifth cycle. These computer readable files contain the complete set of results, while the printed report contains only a subset. ZZ-WPPR-2-CYC1 contains the results from cycle 1 of the multiple recycling benchmarks

  6. Interior beam searchlight semi-analytical benchmark

    International Nuclear Information System (INIS)

    Ganapol, Barry D.; Kornreich, Drew E.

    2008-01-01

    Multidimensional semi-analytical benchmarks to provide highly accurate standards to assess routine numerical particle transport algorithms are few and far between. Because of the well-established 1D theory for the analytical solution of the transport equation, it is sometimes possible to 'bootstrap' a 1D solution to generate a more comprehensive solution representation. Here, we consider the searchlight problem (SLP) as a multidimensional benchmark. A variation of the usual SLP is the interior beam SLP (IBSLP) where a beam source lies beneath the surface of a half space and emits directly towards the free surface. We consider the establishment of a new semi-analytical benchmark based on a new FN formulation. This problem is important in radiative transfer experimental analysis to determine cloud absorption and scattering properties. (authors)

  7. The national hydrologic bench-mark network

    Science.gov (United States)

    Cobb, Ernest D.; Biesecker, J.E.

    1971-01-01

    The United States is undergoing a dramatic growth of population and demands on its natural resources. The effects are widespread and often produce significant alterations of the environment. The hydrologic bench-mark network was established to provide data on stream basins which are little affected by these changes. The network is made up of selected stream basins which are not expected to be significantly altered by man. Data obtained from these basins can be used to document natural changes in hydrologic characteristics with time, to provide a better understanding of the hydrologic structure of natural basins, and to provide a comparative base for studying the effects of man on the hydrologic environment. There are 57 bench-mark basins in 37 States. These basins are in areas having a wide variety of climate and topography. The bench-mark basins and the types of data collected in the basins are described.

  8. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stays confidential; the banks, as clients, learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much debt. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping...
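
    The benchmarking model itself, separate from the cryptography, is a linear-programming efficiency analysis. In the degenerate case of a single input and a single output, each unit's score reduces to its output-to-input ratio measured against the best peer. The following plaintext toy illustrates only that reduction; the data and the one-input/one-output simplification are invented, and the real prototype solves the full linear program inside the SPDZ protocol:

```python
def efficiency_scores(units):
    """DEA-style scores for (input, output) pairs; 1.0 marks the frontier.

    With one input and one output, the benchmarking linear program
    collapses to comparing each ratio against the best peer ratio.
    """
    best = max(out / inp for inp, out in units)
    return [round((out / inp) / best, 3) for inp, out in units]

# Hypothetical (debt, earnings) figures for four farm customers.
farms = [(100.0, 20.0), (80.0, 24.0), (120.0, 18.0), (50.0, 15.0)]
scores = efficiency_scores(farms)
# The most efficient customers score 1.0; the rest are measured against them.
```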

  9. Benchmark referencing of neutron dosimetry measurements

    International Nuclear Information System (INIS)

    Eisenhauer, C.M.; Grundl, J.A.; Gilliam, D.M.; McGarry, E.D.; Spiegel, V.

    1980-01-01

    The concept of benchmark referencing involves interpretation of dosimetry measurements in applied neutron fields in terms of similar measurements in benchmark fields whose neutron spectra and intensity are well known. The main advantage of benchmark referencing is that it minimizes or eliminates many types of experimental uncertainties such as those associated with absolute detection efficiencies and cross sections. In this paper we consider the cavity external to the pressure vessel of a power reactor as an example of an applied field. The pressure vessel cavity is an accessible location for exploratory dosimetry measurements aimed at understanding embrittlement of pressure vessel steel. Comparisons with calculated predictions of neutron fluence and spectra in the cavity provide a valuable check of the computational methods used to estimate pressure vessel safety margins for pressure vessel lifetimes

  10. MIPS bacterial genomes functional annotation benchmark dataset.

    Science.gov (United States)

    Tetko, Igor V; Brauner, Barbara; Dunger-Kaltenbach, Irmtraud; Frishman, Goar; Montrone, Corinna; Fobo, Gisela; Ruepp, Andreas; Antonov, Alexey V; Surmeli, Dimitrij; Mewes, Hans-Werner

    2005-05-15

    Any development of new methods for automatic functional annotation of proteins according to their sequences requires high-quality benchmark data as well as tedious preparatory work to generate the sequence parameters required as input data for the machine learning methods. Different program settings and incompatible protocols make a comparison of the analyzed methods difficult. The MIPS Bacterial Functional Annotation Benchmark dataset (MIPS-BFAB) is a new, high-quality resource comprising four bacterial genomes manually annotated according to the MIPS functional catalogue (FunCat). These resources include precalculated sequence parameters, such as sequence similarity scores, InterPro domain composition and other parameters that could be used to develop and benchmark methods for functional annotation of bacterial protein sequences. These data are provided in XML format and can be used by scientists who are not necessarily experts in genome annotation. MIPS-BFAB is available at http://mips.gsf.de/proj/bfab

  11. Energy benchmarking of South Australian WWTPs.

    Science.gov (United States)

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic of increasing importance for water utilities in times of rising energy costs and pressure to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and optimisation potential of a WWTP is a difficult task, as most plants vary greatly in size, process layout and other influencing factors. To overcome these limitations it is necessary to compare energy efficiency against a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment.
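
    Benchmarks of this kind are typically expressed as specific energy consumption per population equivalent (PE) and year, which each plant compares against target values for its size class. A minimal sketch of that comparison step; the threshold numbers below are placeholders for illustration, not the actual German benchmark values:

```python
def specific_energy(kwh_per_year, population_equivalents):
    """Specific energy consumption in kWh/(PE*a)."""
    return kwh_per_year / population_equivalents

def rate(spec, target=30.0, guide=45.0):
    """Classify a plant against (placeholder) target and guide values."""
    if spec <= target:
        return "meets target"
    if spec <= guide:
        return "meets guide value"
    return "optimisation potential"

# A hypothetical plant: 1.8 GWh/a serving 50,000 PE -> 36 kWh/(PE*a).
plant = specific_energy(1_800_000, 50_000)
verdict = rate(plant)
```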

  12. Benchmarking criticality safety calculations with subcritical experiments

    International Nuclear Information System (INIS)

    Mihalczo, J.T.

    1984-06-01

    Calculation of the neutron multiplication factor at delayed criticality may be necessary for benchmarking calculations but it may not be sufficient. The use of subcritical experiments to benchmark criticality safety calculations could result in substantial savings in fuel material costs for experiments. In some cases subcritical configurations could be used to benchmark calculations where sufficient fuel to achieve delayed criticality is not available. By performing a variety of measurements with subcritical configurations, much detailed information can be obtained which can be compared directly with calculations. This paper discusses several measurements that can be performed with subcritical assemblies and presents examples that include comparisons between calculation and experiment where possible. Where not, examples from critical experiments have been used but the measurement methods could also be used for subcritical experiments

  13. Ice sheet hydrology - a review

    Energy Technology Data Exchange (ETDEWEB)

    Jansson, Peter; Naeslund, Jens-Ove [Dept. of Physical Geography and Quaternary Geology, Stockholm Univ., Stockholm (Sweden); Rodhe, Lars [Geological Survey of Sweden, Uppsala (Sweden)

    2007-03-15

    This report summarizes the theoretical knowledge on water flow in and beneath glaciers and ice sheets and how these theories are applied in models to simulate the hydrology of ice sheets. The purpose is to present the state of knowledge and, perhaps more importantly, identify the gaps in our understanding of ice sheet hydrology. Many general concepts in hydrology and hydraulics are applicable to water flow in glaciers. However, the unique situation of having the liquid phase flowing in conduits of the solid phase of the same material, water, is not a commonly occurring phenomenon. This situation means that the heat exchange between the phases and the resulting phase changes also have to be accounted for in the analysis. The fact that the solidus in the pressure-temperature dependent phase diagram of water has a negative slope provides further complications. Ice can thus melt or freeze from both temperature and pressure variations or variations in both. In order to provide details of the current understanding of water flow in conjunction with deforming ice and to provide understanding for the development of ideas and models, emphasis has been put on the mathematical treatments, which are reproduced in detail. Qualitative results corroborating theory or, perhaps more often, questioning the simplifications made in theory, are also given. The overarching problem with our knowledge of glacier hydrology is the gap between the local theories of processes and the general flow of water in glaciers and ice sheets. Water is often channelized in non-stationary conduits through the ice, features which due to their minute size relative to the size of glaciers and ice sheets are difficult to incorporate in spatially larger models. Since the dynamic response of ice sheets to global warming is becoming a key issue in, e.g. sea-level change studies, the problems of the coupling between the hydrology of an ice sheet and its dynamics is steadily gaining interest. New work is emerging.

  14. Modelling the Antarctic Ice Sheet

    DEFF Research Database (Denmark)

    Pedersen, Jens Olaf Pepke; Holm, A.

    2015-01-01

    The Antarctic ice sheet is a major player in the Earth’s climate system and is by far the largest depository of fresh water on the planet. Ice stored in the Antarctic ice sheet (AIS) contains enough water to raise sea level by about 58 m, and ice loss from Antarctica contributed significantly to sea level high stands during past interglacial periods. A number of AIS models have been developed and applied to try to understand the workings of the AIS and to form a robust basis for future projections of the AIS contribution to sea level change. The recent DCESS (Danish Center for Earth System...

  15. Ice sheet hydrology - a review

    International Nuclear Information System (INIS)

    Jansson, Peter; Naeslund, Jens-Ove; Rodhe, Lars

    2007-03-01

    This report summarizes the theoretical knowledge on water flow in and beneath glaciers and ice sheets and how these theories are applied in models to simulate the hydrology of ice sheets. The purpose is to present the state of knowledge and, perhaps more importantly, identify the gaps in our understanding of ice sheet hydrology. Many general concepts in hydrology and hydraulics are applicable to water flow in glaciers. However, the unique situation of having the liquid phase flowing in conduits of the solid phase of the same material, water, is not a commonly occurring phenomenon. This situation means that the heat exchange between the phases and the resulting phase changes also have to be accounted for in the analysis. The fact that the solidus in the pressure-temperature dependent phase diagram of water has a negative slope provides further complications. Ice can thus melt or freeze from both temperature and pressure variations or variations in both. In order to provide details of the current understanding of water flow in conjunction with deforming ice and to provide understanding for the development of ideas and models, emphasis has been put on the mathematical treatments, which are reproduced in detail. Qualitative results corroborating theory or, perhaps more often, questioning the simplifications made in theory, are also given. The overarching problem with our knowledge of glacier hydrology is the gap between the local theories of processes and the general flow of water in glaciers and ice sheets. Water is often channelized in non-stationary conduits through the ice, features which due to their minute size relative to the size of glaciers and ice sheets are difficult to incorporate in spatially larger models. Since the dynamic response of ice sheets to global warming is becoming a key issue in, e.g. sea-level change studies, the problems of the coupling between the hydrology of an ice sheet and its dynamics is steadily gaining interest. New work is emerging.

  16. Sheet Beam Klystron Instability Analysis

    International Nuclear Information System (INIS)

    Bane, K.

    2009-01-01

    Using the principle of energy balance we develop a 2D theory for calculating growth rates of instability in a two-cavity model of a sheet beam klystron. An important ingredient is a TE-like mode in the gap that also gives a longitudinal kick to the beam. When compared with a self-consistent particle-in-cell calculation, with sheet beam klystron-type parameters, agreement is quite good up to half the design current, 65 A; at full current, however, other, current-dependent effects come in and the results deviate significantly

  17. The social balance sheet 2004

    OpenAIRE

    Ph. Delhez; P. Heuse

    2005-01-01

    Each year, in the fourth quarter’s Economic Review, the National Bank examines the provisional results of the social balance sheets. As not all social balance sheets for 2004 are yet available, the study is based on a limited population of enterprises, compiled according to the principle of a constant sample. This population is made up of 38,530 enterprises employing around 1,331,000 workers in 2004. The main results of the analysis, in terms of employment, working hours, labour cost and tra...

  18. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and including "external" gardening demands is investigated. This includes the impacts (in isolation and combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are made throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn for the robustness of the proposed system.
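
    The band-rating idea can be sketched as a lookup from per-capita consumption to a performance band; the litre thresholds and band labels below are illustrative, not those proposed in the paper:

```python
# Illustrative band boundaries: (upper limit in litres/person/day, band).
BANDS = [
    (80.0, "A"), (100.0, "B"), (120.0, "C"),
    (140.0, "D"), (160.0, "E"),
]

def water_band(litres_per_person_per_day):
    """Map per-capita demand to a benchmark band; above all limits is 'F'."""
    for limit, band in BANDS:
        if litres_per_person_per_day <= limit:
            return band
    return "F"

# A household that cuts demand (e.g. by harvesting rainwater) moves up a band:
baseline = water_band(130.0)   # falls in the 120-140 band
with_rwh = water_band(95.0)    # falls in the 80-100 band
```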

  19. The OECD/NRC BWR full-size fine-mesh bundle tests benchmark (BFBT)-general description

    International Nuclear Information System (INIS)

    Sartori, Enrico; Hochreiter, L.E.; Ivanov, Kostadin; Utsuno, Hideaki

    2004-01-01

    The need to refine models for best-estimate calculations based on good-quality experimental data has been expressed in many recent meetings in the field of nuclear applications. Such needs should not be limited to currently available macroscopic approaches but should be extended to next-generation approaches that focus on more microscopic processes. One of the most valuable databases identified for thermal-hydraulics modelling was developed by the Nuclear Power Engineering Corporation (NUPEC). Part of this database will be made available for an international benchmark exercise. This fine-mesh, high-quality data encourages advancement in the insufficiently developed field of two-phase flow theory. Considering that the present theoretical approach is relatively immature, the benchmark specification is designed so that it will systematically assess and compare the participants' numerical models on the prediction of detailed void distributions and critical powers. The development of truly mechanistic models for critical power prediction is currently underway. These innovative models should include elementary processes such as void distributions, droplet deposition, liquid film entrainment, etc. The benchmark problem includes both macroscopic and microscopic measurement data. In this context, the sub-channel grade void fraction data are regarded as the macroscopic data, and the digitized computer graphic images are the microscopic data. The proposed benchmark consists of two parts (phases), each consisting of different exercises. Phase 1, void distribution benchmark: Exercise 1, steady-state sub-channel grade benchmark; Exercise 2, steady-state microscopic grade benchmark; Exercise 3, transient macroscopic grade benchmark. Phase 2, critical power benchmark: Exercise 1, steady-state benchmark; Exercise 2, transient benchmark. (author)

  20. Toxicological benchmarks for wildlife: 1996 Revision

    International Nuclear Information System (INIS)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II.

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets
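
    The core calculation behind wildlife benchmarks of this kind is an allometric extrapolation of a test-species NOAEL to the body weight of each wildlife species. A sketch of that step, assuming a quarter-power body-weight scaling rule; the exponent and the example figures are illustrative rather than taken from the report:

```python
def wildlife_noael(test_noael, test_bw, wildlife_bw, exponent=0.25):
    """Scale a test-species NOAEL (mg/kg/d) to a wildlife species.

    Assumes the allometric rule NOAEL_w = NOAEL_t * (bw_t / bw_w) ** exponent,
    with body weights in kg; exponent and inputs here are illustrative.
    """
    return test_noael * (test_bw / wildlife_bw) ** exponent

# Hypothetical: scale a 10 mg/kg/d rat NOAEL (0.35 kg body weight)
# to a 1.0 kg wildlife species; the heavier animal gets a lower benchmark.
benchmark = wildlife_noael(10.0, 0.35, 1.0)
```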

  1. Benchmarking af kommunernes førtidspensionspraksis (Benchmarking of municipal disability pension practice)

    DEFF Research Database (Denmark)

    Gregersen, Ole

    Each year, the National Social Appeals Board (Den Sociale Ankestyrelse) publishes statistics on decisions in disability pension cases. In connection with the annual statistics, results are published from a benchmarking model in which the number of pensions granted in each municipality is compared with the number that would be expected if the municipality had followed the same decision practice as the "average municipality", correcting for the social structure of the municipality. The benchmarking model used so far is documented in Ole Gregersen (1994): Kommunernes Pensionspraksis, Servicerapport, Socialforskningsinstituttet. This note documents a...

  2. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Sawan, M.E.; Cheng, E.T.

    1985-01-01

    Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li17Pb83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the tritium breeding ratio (TBR) to group structure and weighting spectrum increases as the thickness and Li enrichment decrease, with up to 20% discrepancies for thin natural Li17Pb83 blankets

  3. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Sawan, M.L.; Cheng, E.T.

    1986-01-01

    Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li 17 Pb 83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the tritium breeding ratio to group structure and weighting spectrum increases as the thickness and Li enrichment decrease with up to 20% discrepancies for thin natural Li 17 Pb 83 blankets. (author)

  4. Toxicological benchmarks for wildlife: 1996 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  5. Reactor group constants and benchmark test

    Energy Technology Data Exchange (ETDEWEB)

    Takano, Hideki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-08-01

    The evaluated nuclear data files such as JENDL, ENDF/B-VI and JEF-2 are validated by analyzing critical mock-up experiments for various types of reactors and assessing their applicability for nuclear characteristics such as criticality, reaction rates, reactivities, etc. This is called benchmark testing. In the nuclear calculations, the diffusion and transport codes use the group constant library which is generated by processing the nuclear data files. In this paper, the calculation methods of the reactor group constants and benchmark test are described. Finally, a new group constants scheme is proposed. (author)

  6. Benchmark problem suite for reactor physics study of LWR next generation fuels

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Ikehara, Tadashi; Ito, Takuya; Saji, Etsuro

    2002-01-01

    This paper proposes a benchmark problem suite for studying the physics of next-generation fuels of light water reactors. The target discharge burnup of the next-generation fuel was set to 70 GWd/t considering the increasing trend in discharge burnup of light water reactor fuels. Both UO2 and MOX fuels are included in the benchmark specifications. The benchmark problem consists of three different geometries: fuel pin cell, PWR fuel assembly and BWR fuel assembly. In the pin cell problem, detailed nuclear characteristics such as burnup dependence of nuclide-wise reactivity were included in the required calculation results to facilitate the study of reactor physics. In the assembly benchmark problems, important parameters for in-core fuel management such as local peaking factors and reactivity coefficients were included in the required results. The benchmark problems provide comprehensive test problems for next-generation light water reactor fuels with extended high burnup. Furthermore, since the pin cell, the PWR assembly and the BWR assembly problems are independent, analysis of the entire benchmark suite is not necessary: e.g., the set of pin cell and PWR fuel assembly problems will be suitable for those in charge of PWR in-core fuel management, and the set of pin cell and BWR fuel assembly problems for those in charge of BWR in-core fuel management. (author)

  7. Tuning the mechanical properties of vertical graphene sheets through atomic layer deposition

    International Nuclear Information System (INIS)

    Davami, Keivan; Jiang, Yijie; Cortes, John; Lin, Chen; Turner, Kevin T; Bargatin, Igor; Shaygan, Mehrdad

    2016-01-01

    We report the fabrication and characterization of graphene nanostructures with mechanical properties that are tuned by conformal deposition of alumina. Vertical graphene (VG) sheets, also called carbon nanowalls (CNWs), were grown on copper foil substrates using a radio-frequency plasma-enhanced chemical vapor deposition (RF-PECVD) technique and conformally coated with different thicknesses of alumina (Al2O3) using atomic layer deposition (ALD). Nanoindentation was used to characterize the mechanical properties of pristine and alumina-coated VG sheets. Results show a significant increase in the effective Young’s modulus of the VG sheets with increasing thickness of deposited alumina. Deposition of only a 5 nm thick alumina layer on the VG sheets nearly triples the effective Young’s modulus of the VG structures. Both energy absorption and strain recovery were lower in VG sheets coated with alumina than in pure VG sheets (for the same peak force). This may be attributed to the increase in bending stiffness of the VG sheets and the creation of connections between the sheets after ALD deposition. These results demonstrate that the mechanical properties of VG sheets can be tuned over a wide range through conformal atomic layer deposition, facilitating the use of VG sheets in applications where specific mechanical properties are needed. (paper)

  8. Annual Energy Balance Sheets 2001-2002

    International Nuclear Information System (INIS)

    2004-01-01

    During the year 2002 the primary supply of energy reached 629 TWh, which is 7.7 TWh less than in 2001. The decrease originates mainly from the reduced electricity production from water power. The electricity production in nuclear power plants also decreased, by 4.5 TWh. Supplied energy for final consumption, on the other hand, shows a slight rise of 1.8 TWh. The year 2002 was warmer than a 'normal' year, which consequently brings lower energy needs. Compared with 2001, however, 2002 was not warmer, and a net electricity import of 5.4 TWh covered the energy needs. Energy use increased by 3.3 TWh between 2001 and 2002. The industry sector shows the largest rise, 2.9 TWh or nearly 2 per cent. Within that sector, energy from biomass fuel rose by 6.7 per cent. The household sector decreased its energy use by 2.7 per cent, with oil and electricity showing the largest decreases. The proportionately high electricity price probably had a slowing-down effect on electricity use. The balance sheets of energy sources show the total supply and consumption of energy sources expressed in original units, i.e. units recorded in the primary statistics - mainly commercial units. The production of derived energy commodities is recorded on the supply side of the balance sheets of energy sources, which is not the case in the energy balance sheets. The balance sheets of energy sources also include specifications of input-output and energy consumption in energy conversion industries. The energy balance sheets are based on primary data recorded in the balance sheets of energy sources, here expressed in a common energy unit, TJ. The production of derived energy is recorded in a second flow step comprising energy turnover in energy conversion and is also specified in complementary input-output tables for energy conversion industries. The following items are shown in the energy balance sheets.
1.1 Inland supply of primary energy; 1.3 Import; 1.4 Export; 1.5 Changes in

  9. Design of the MISMIP+, ISOMIP+, and MISOMIP ice-sheet, ocean, and coupled ice sheet-ocean intercomparison projects

    Science.gov (United States)

    Asay-Davis, Xylar; Cornford, Stephen; Martin, Daniel; Gudmundsson, Hilmar; Holland, David; Holland, Denise

    2015-04-01

    The MISMIP and MISMIP3D marine ice sheet model intercomparison exercises have become popular benchmarks, and several modeling groups have used them to show how their models compare to both analytical results and other models. Similarly, the ISOMIP (Ice Shelf-Ocean Model Intercomparison Project) experiments have acted as a proving ground for ocean models with sub-ice-shelf cavities. As coupled ice sheet-ocean models become available, an updated set of benchmark experiments is needed. To this end, we propose sequel experiments, MISMIP+ and ISOMIP+, with an end goal of coupling the two in a third intercomparison exercise, MISOMIP (the Marine Ice Sheet-Ocean Model Intercomparison Project). Like MISMIP3D, the MISMIP+ experiments take place in an idealized, three-dimensional setting and compare full 3D (Stokes) and reduced, hydrostatic models. Unlike the earlier exercises, the primary focus will be the response of models to sub-shelf melting. The chosen configuration features an ice shelf that experiences substantial lateral shear and buttresses the upstream ice, and so is well suited to melting experiments. Differences between the steady states of each model are minor compared to the response to melt-rate perturbations, reflecting typical real-world applications where parameters are chosen so that the initial states of all models tend to match observations. The three ISOMIP+ experiments have been designed to make use of the same bedrock topography as MISMIP+, using ice-shelf geometries from MISMIP+ results produced by the BISICLES ice-sheet model. The first two experiments use static ice-shelf geometries to simulate the evolution of ocean dynamics and resulting melt rates to a quasi-steady state when far-field forcing changes either from cold to warm or from warm to cold.
The third experiment prescribes 200 years of dynamic ice-shelf geometry (with both retreating and advancing ice) based on a BISICLES simulation along with similar flips between warm and

  10. Benchmarking, benchmarks, or best practices? Applying quality improvement principles to decrease surgical turnaround time.

    Science.gov (United States)

    Mitchell, L

    1996-01-01

    The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.
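
The figures quoted in this record can be checked with a few lines of arithmetic. The sketch below is purely illustrative and not code from the cited study; the variable names are invented for clarity:

```python
# Illustrative check of the turnaround-time figures quoted above
# (hypothetical helper names; not part of the cited study).
def improvement_pct(before: float, after: float) -> float:
    """Relative improvement of a reduced cycle time, in percent."""
    return (before - after) / before * 100

benchmark = 13.5             # 1992 national benchmark, minutes
initial, final = 19.9, 16.3  # St. Joseph's before/after, minutes

print(round(improvement_pct(initial, final)))  # -> 18 (the reported 18%)
print(round(final - benchmark, 1))             # remaining gap to benchmark, minutes
```

The 18% improvement reported in the abstract is thus consistent with the raw before/after times.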

  11. Principles for designing proteins with cavities formed by curved β sheets

    Energy Technology Data Exchange (ETDEWEB)

    Marcos, Enrique; Basanta, Benjamin; Chidyausiku, Tamuka M.; Tang, Yuefeng; Oberdorfer, Gustav; Liu, Gaohua; Swapna, G. V. T.; Guan, Rongjin; Silva, Daniel-Adriano; Dou, Jiayi; Pereira, Jose Henrique; Xiao, Rong; Sankaran, Banumathi; Zwart, Peter H.; Montelione, Gaetano T.; Baker, David

    2017-01-12

    Active sites and ligand-binding cavities in native proteins are often formed by curved β sheets, and the ability to control β-sheet curvature would allow design of binding proteins with cavities customized to specific ligands. Toward this end, we investigated the mechanisms controlling β-sheet curvature by studying the geometry of β sheets in naturally occurring protein structures and folding simulations. The principles emerging from this analysis were used to design, de novo, a series of proteins with curved β sheets topped with α helices. Nuclear magnetic resonance and crystal structures of the designs closely match the computational models, showing that β-sheet curvature can be controlled with atomic-level accuracy. Our approach enables the design of proteins with cavities and provides a route to custom design ligand-binding and catalytic sites.

  12. Learning from Balance Sheet Visualization

    Science.gov (United States)

    Tanlamai, Uthai; Soongswang, Oranuj

    2011-01-01

    This exploratory study examines alternative visuals and their effect on the level of learning of balance sheet users. Executive and regular classes of graduate students majoring in information technology in business were asked to evaluate the extent of acceptance and enhanced capability of these alternative visuals toward their learning…

  13. Off-Balance Sheet Financing.

    Science.gov (United States)

    Adams, Matthew C.

    1998-01-01

    Examines off-balance sheet financing, the facilities use of outsourcing for selected needs, as a means of saving operational costs and using facility assets efficiently. Examples of using outside sources for energy supply and food services, as well as partnering with business for facility expansion are provided. Concluding comments address tax…

  14. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1.1 with MCNP6

    International Nuclear Information System (INIS)

    Marck, Steven C. van der

    2012-01-01

    Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous energy Monte Carlo neutronics code MCNP, i.e. MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), the Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and Teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA, Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory. Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series. Such

  15. Statistical benchmarking in utility regulation: Role, standards and methods

    International Nuclear Information System (INIS)

    Newton Lowry, Mark; Getachew, Lullit

    2009-01-01

    Statistical benchmarking is being used with increasing frequency around the world in utility rate regulation. We discuss how and where benchmarking is in use for this purpose and the pros and cons of regulatory benchmarking. We then discuss alternative performance standards and benchmarking methods in regulatory applications. We use these to propose guidelines for the appropriate use of benchmarking in the rate setting process. The standards, which we term the competitive market and frontier paradigms, have a bearing on method selection. These along with regulatory experience suggest that benchmarking can either be used for prudence review in regulation or to establish rates or rate setting mechanisms directly

  16. Criticality safety benchmark experiment on 10% enriched uranyl nitrate solution using a 28-cm-thickness slab core

    International Nuclear Information System (INIS)

    Yamamoto, Toshihiro; Miyoshi, Yoshinori; Kikuchi, Tsukasa; Watanabe, Shouichi

    2002-01-01

    The second series of critical experiments with 10% enriched uranyl nitrate solution using a 28-cm-thick slab core has been performed with the Static Experiment Critical Facility of the Japan Atomic Energy Research Institute. Systematic critical data were obtained by changing the uranium concentration of the fuel solution from 464 to 300 gU/l under various reflector conditions. In this paper, the thirteen critical configurations for water-reflected cores and unreflected cores are identified and evaluated. The effects of uncertainties in the experimental data on keff are quantified by sensitivity studies. Benchmark model specifications that are necessary to construct a calculational model are given. The uncertainties of keff included in the benchmark model specifications are approximately 0.1% Δkeff. The thirteen critical configurations are judged to be acceptable benchmark data. Using the benchmark model specifications, sample calculation results are provided with several sets of standard codes and cross section data. (author)

  17. The GLAS Standard Data Products Specification-Data Dictionary, Version 1.0. Volume 15

    Science.gov (United States)

    Lee, Jeffrey E.

    2013-01-01

    The Geoscience Laser Altimeter System (GLAS) is the primary instrument for the ICESat (Ice, Cloud and Land Elevation Satellite) laser altimetry mission. ICESat was the benchmark Earth Observing System (EOS) mission for measuring ice sheet mass balance, cloud and aerosol heights, as well as land topography and vegetation characteristics. From 2003 to 2009, the ICESat mission provided multi-year elevation data needed to determine ice sheet mass balance as well as cloud property information, especially for stratospheric clouds common over polar areas. It also provided topography and vegetation data around the globe, in addition to the polar-specific coverage over the Greenland and Antarctic ice sheets. This document contains the data dictionary for the GLAS standard data products. It details the parameters present on GLAS standard data products. Each parameter is defined with a short name, a long name, units on product, type of variable, a long description and the products that contain it. The term standard data products refers to those EOS instrument data that are routinely generated for public distribution. These products are distributed by the National Snow and Ice Data Center (NSIDC).

  18. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    solvers in IPOPT and FMINCON, and the sequential quadratic programming method in SNOPT, are benchmarked on the library using performance profiles. Whenever possible the methods are applied to both the nested and the Simultaneous Analysis and Design (SAND) formulations of the problem. The performance...

  19. Developing Benchmarking Criteria for CO2 Emissions

    Energy Technology Data Exchange (ETDEWEB)

    Neelis, M.; Worrell, E.; Mueller, N.; Angelini, T. [Ecofys, Utrecht (Netherlands); Cremer, C.; Schleich, J.; Eichhammer, W. [The Fraunhofer Institute for Systems and Innovation research, Karlsruhe (Germany)

    2009-02-15

    A European Union (EU) wide greenhouse gas (GHG) allowance trading scheme (EU ETS) was implemented in the EU in 2005. In the first two trading periods of the scheme (running up to 2012), free allocation based on historical emissions was the main methodology for allocation of allowances to existing installations. For the third trading period (2013 - 2020), the European Commission proposed in January 2008 a more important role for auctioning of allowances rather than free allocation. (Transitional) free allocation of allowances to industrial sectors will be determined via harmonized allocation rules, where feasible based on benchmarking. In general terms, a benchmark based method allocates allowances based on a certain amount of emissions per unit of productive output (i.e. the benchmark). This study aims to derive criteria for an allocation methodology for the EU Emission Trading Scheme based on benchmarking for the period 2013 - 2020. To test the feasibility of the criteria, we apply them to four example product groups: iron and steel, pulp and paper, lime and glass. The basis for this study is the Commission proposal for a revised ETS directive put forward on 23 January 2008; it does not take into account any changes to this proposal in the co-decision procedure that resulted in the adoption of the Energy and Climate change package in December 2008.

  20. Why and How to Benchmark XML Databases

    NARCIS (Netherlands)

    A.R. Schmidt; F. Waas; M.L. Kersten (Martin); D. Florescu; M.J. Carey; I. Manolescu; R. Busse

    2001-01-01

    textabstractBenchmarks belong to the very standard repertory of tools deployed in database development. Assessing the capabilities of a system, analyzing actual and potential bottlenecks, and, naturally, comparing the pros and cons of different systems architectures have become indispensable tasks

  1. Determination of Benchmarks Stability within Ahmadu Bello ...

    African Journals Online (AJOL)

    Heights of six geodetic benchmarks over a total distance of 8.6km at the Ahmadu Bello University (ABU), Zaria, Nigeria were recomputed and analysed using least squares adjustment technique. The network computations were tied to two fix primary reference pillars situated outside the campus. The two-tail Chi-square ...

  2. Benchmarking and performance management in health care

    OpenAIRE

    Buttigieg, Sandra; ; EHMA Annual Conference : Public Health Care : Who Pays, Who Provides?

    2012-01-01

    Current economic conditions challenge health care providers globally. Healthcare organizations need to deliver optimal financial, operational, and clinical performance to sustain quality of service delivery. Benchmarking is one of the most potent and under-utilized management tools available and an analytic tool to understand organizational performance. Additionally, it is required for financial survival and organizational excellence.

  3. Benchmarking 2010: Trends in Education Philanthropy

    Science.gov (United States)

    Bearman, Jessica

    2010-01-01

    "Benchmarking 2010" offers insights into the current priorities, practices and concerns of education grantmakers. The report is divided into five sections: (1) Mapping the Education Grantmaking Landscape; (2) 2010 Funding Priorities; (3) Strategies for Leveraging Greater Impact; (4) Identifying Significant Trends in Education Funding; and (5)…

  4. Benchmarking 2011: Trends in Education Philanthropy

    Science.gov (United States)

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  5. 2010 Recruiting Benchmarks Survey. Research Brief

    Science.gov (United States)

    National Association of Colleges and Employers (NJ1), 2010

    2010-01-01

    The National Association of Colleges and Employers conducted its annual survey of employer members from June 15, 2010 to August 15, 2010, to benchmark data relevant to college recruiting. From a base of 861 employers holding organizational membership, there were 268 responses for a response rate of 31 percent. Following are some of the major…

  6. Benchmarking European Gas Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter; Trinkner, Urs

    This is the final report for the pan-European efficiency benchmarking of gas transmission system operations commissioned by the Netherlands Authority for Consumers and Markets (ACM), Den Haag, on behalf of the Council of European Energy Regulators (CEER) under the supervision of the authors....

  7. Benchmarking of methods for genomic taxonomy

    DEFF Research Database (Denmark)

    Larsen, Mette Voldby; Cosentino, Salvatore; Lukjancenko, Oksana

    2014-01-01

    . Nevertheless, the method has been found to have a number of shortcomings. In the current study, we trained and benchmarked five methods for whole-genome sequence-based prokaryotic species identification on a common data set of complete genomes: (i) SpeciesFinder, which is based on the complete 16S rRNA gene...

  8. Parton Distribution Benchmarking with LHC Data

    NARCIS (Netherlands)

    Ball, Richard D.; Carrazza, Stefano; Debbio, Luigi Del; Forte, Stefano; Gao, Jun; Hartland, Nathan; Huston, Joey; Nadolsky, Pavel; Rojo, Juan; Stump, Daniel; Thorne, Robert S.; Yuan, C. -P.

    2012-01-01

    We present a detailed comparison of the most recent sets of NNLO PDFs from the ABM, CT, HERAPDF, MSTW and NNPDF collaborations. We compare parton distributions at low and high scales and parton luminosities relevant for LHC phenomenology. We study the PDF dependence of LHC benchmark inclusive cross

  9. What Is the Impact of Subject Benchmarking?

    Science.gov (United States)

    Pidcock, Steve

    2006-01-01

    The introduction of subject benchmarking led to fears of increased external intervention in the activities of universities and a more restrictive view of institutional autonomy, accompanied by an undermining of the academic profession, particularly through the perceived threat of the introduction of a national curriculum for higher education. For…

  10. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias; Smith, Neil; Ghanem, Bernard

    2016-01-01

    In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as to generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs. (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx.). © Springer International Publishing AG 2016.

  11. Prague texture segmentation data generator and benchmark

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal

    2006-01-01

    Vol. 2006, No. 64 (2006), pp. 67-68, ISSN 0926-4981 R&D Projects: GA MŠk(CZ) 1M0572; GA AV ČR(CZ) 1ET400750407; GA AV ČR IAA2075302 Institutional research plan: CEZ:AV0Z10750506 Keywords: image segmentation * texture * benchmark * web Subject RIV: BD - Theory of Information

  12. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias

    2016-09-16

    In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as to generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs. (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx.). © Springer International Publishing AG 2016.

  13. Choice Complexity, Benchmarks and Costly Information

    NARCIS (Netherlands)

    Harms, Job; Rosenkranz, S.; Sanders, M.W.J.L.

    In this study we investigate how two types of information interventions, providing a benchmark and providing costly information on option ranking, can improve decision-making in complex choices. In our experiment subjects made a series of incentivized choices between four hypothetical financial

  14. Resolution for the Loviisa benchmark problem

    International Nuclear Information System (INIS)

    Garcia, C.R.; Quintero, R.; Milian, D.

    1992-01-01

    In the present paper, the Loviisa benchmark problem for cycles 11 and 8, and reactor blocks 1 and 2 of the Loviisa NPP, is calculated. This problem uses low leakage reload patterns and was posed at the second thematic group of the TIC meeting held in Rheinsberg, GDR, in March 1989. The SPPS-1 coarse mesh code has been used for the calculations.

  15. Three-dimensional RAMA fluence methodology benchmarking

    International Nuclear Information System (INIS)

    Baker, S. P.; Carter, R. G.; Watkins, K. E.; Jones, D. B.

    2004-01-01

    This paper describes the benchmarking of the RAMA Fluence Methodology software, which has been performed in accordance with U.S. Nuclear Regulatory Commission Regulatory Guide 1.190. The RAMA Fluence Methodology has been developed by TransWare Enterprises Inc. through funding provided by the Electric Power Research Institute, Inc. (EPRI) and the Boiling Water Reactor Vessel and Internals Project (BWRVIP). The purpose of the software is to provide an accurate method for calculating neutron fluence in BWR pressure vessels and internal components. The Methodology incorporates a three-dimensional deterministic transport solution with flexible arbitrary geometry representation of reactor system components, previously available only with Monte Carlo solution techniques. Benchmarking was performed on measurements obtained from three standard benchmark problems, which include the Pool Criticality Assembly (PCA), VENUS-3, and H. B. Robinson Unit 2 benchmarks, and on flux wire measurements obtained from two BWR nuclear plants. The calculated to measured (C/M) ratios range from 0.93 to 1.04, demonstrating the accuracy of the RAMA Fluence Methodology in predicting neutron flux, fluence, and dosimetry activation. (authors)
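
The calculated-to-measured (C/M) comparison mentioned in this record is a simple ratio test. The sketch below shows the idea only; the fluence numbers are made-up placeholders, not RAMA results:

```python
# Sketch of a calculated-to-measured (C/M) comparison as used in fluence
# benchmarking. All numbers are invented placeholders for illustration.
calculated = {"PCA": 1.02e10, "VENUS-3": 9.5e9, "HBR-2": 8.8e9}  # n/cm^2/s
measured   = {"PCA": 1.00e10, "VENUS-3": 9.9e9, "HBR-2": 9.0e9}

cm_ratios = {name: calculated[name] / measured[name] for name in calculated}
for name, cm in cm_ratios.items():
    print(f"{name}: C/M = {cm:.2f}")

# A methodology is judged accurate when such ratios cluster near 1.0;
# the paper above reports a spread of 0.93 to 1.04 across its benchmarks.
assert all(0.9 < cm < 1.1 for cm in cm_ratios.values())
```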

  16. Benchmarking Academic Libraries: An Australian Case Study.

    Science.gov (United States)

    Robertson, Margaret; Trahn, Isabella

    1997-01-01

    Discusses experiences and outcomes of benchmarking at the Queensland University of Technology (Australia) library that compared acquisitions, cataloging, document delivery, and research support services with those of the University of New South Wales. Highlights include results as a catalyst for change, and the use of common output and performance…

  17. Calculus of a reactor VVER-1000 benchmark

    International Nuclear Information System (INIS)

    Dourougie, C.

    1998-01-01

    In the framework of the FMDP (Fissile Materials Disposition Program) between the US and Russia, a benchmark was tested. The pin cells contain low enriched uranium (LEU) and mixed oxide (MOX) fuels. The calculations are done for a wide range of temperatures and soluble boron concentrations, in accident conditions. (A.L.B.)

  18. Tearing resistance of some co-polyester sheets

    International Nuclear Information System (INIS)

    Kim, Ho Sung; Karger-Kocsis, Jozsef

    2004-01-01

    A three-zone model consisting of initial, evolutionary and stabilised plastic zones for tearing resistance was proposed for polymer sheets. An analysis with the model, based on the essential work of fracture (EWF) approach, was demonstrated to be capable of predicting the specific total work of fracture along the tear path across all the plastic zones, although the accuracy of the specific essential work of fracture is subject to improvement. Photo-elastic images were used for identification of plastic deformation sizes and profiles. The fracture mode change during loading was described in relation to the three zones. The tearing fracture behaviour of extruded mono- and bi-layer sheets of different types of amorphous co-polyesters and different thicknesses was investigated. Thick material exhibited a higher specific total work of tear fracture than thin mono-layer sheet in the case of amorphous polyethylene terephthalate (PET). This finding was explained in terms of the plastic zone size formed along the tear path, i.e., thick material underwent larger plastic deformation than thin material. When PET and polyethylene terephthalate glycol (PETG) were laminated with each other, the specific total work of fracture of the bi-layer sheets was not noticeably improved over that of the constituent materials.

  19. Sucrose Treated Carbon Nanotube and Graphene Yarns and Sheets

    Science.gov (United States)

    Sauti, Godfrey (Inventor); Kim, Jae-Woo (Inventor); Siochi, Emilie J. (Inventor); Wise, Kristopher E. (Inventor)

    2017-01-01

    Consolidated carbon nanotube or graphene yarns and woven sheets are consolidated through the formation of a carbon binder formed from the dehydration of sucrose. The resulting materials are, on a macro-scale, lightweight and of high specific modulus and/or strength. Sucrose is relatively inexpensive and readily available, and the process is therefore cost-effective.

  20. Whooping Cough (Pertussis) - Fact Sheet for Parents

    Science.gov (United States)

    ... months 4 through 6 years Fact Sheet for Parents Color [2 pages] Español: Tosferina (pertussis) The best ... according to the recommended schedule. Fact Sheets for Parents Diseases and the Vaccines that Prevent Them Chickenpox ...

  1. Benchmarking transaction and analytical processing systems the creation of a mixed workload benchmark and its application

    CERN Document Server

    Bog, Anja

    2014-01-01

    This book introduces a new benchmark for hybrid database systems, gauging the effect of adding OLAP to an OLTP workload and analyzing the impact of commonly used optimizations in historically separate OLTP and OLAP domains in mixed-workload scenarios.

  2. Electricity consumption in school buildings - benchmark and web tools; Elforbrug i skoler - benchmark og webvaerktoej

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-07-01

    The aim of this project has been to produce benchmarks for electricity consumption in Danish schools in order to encourage electricity conservation. An internet programme has been developed with the aim of facilitating schools' access to benchmarks and of evaluating energy consumption. The overall purpose is to create increased attention to the electricity consumption of each separate school by publishing benchmarks which take the schools' age and number of pupils as well as after-school activities into account. Benchmarks can be used to prepare green accounts and serve as markers in, e.g., energy conservation campaigns, energy management and for educational purposes. The internet tool can be found on www.energiguiden.dk. (BA)
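    A benchmark of this kind reduces to normalizing consumption by pupil count with a correction for after-school use. A minimal sketch, in which the adjustment factor and the school records are invented placeholders rather than values from the Danish project:

    ```python
    # Illustrative sketch of an electricity benchmark normalized per pupil.
    # The 15% after-school adjustment and the school records below are
    # hypothetical examples, not figures from the Danish study.
    def benchmark_kwh_per_pupil(annual_kwh, pupils, after_school_activities=False):
        """Return electricity use per pupil, with a crude correction for
        after-school activities that extend building operating hours."""
        usage = annual_kwh / pupils
        if after_school_activities:
            usage /= 1.15  # assumed 15% extra load from evening use
        return usage

    schools = [
        {"name": "A", "kwh": 120_000, "pupils": 400, "after": True},
        {"name": "B", "kwh": 90_000, "pupils": 250, "after": False},
    ]
    for s in schools:
        s["benchmark"] = benchmark_kwh_per_pupil(s["kwh"], s["pupils"], s["after"])
    ```

    Publishing such a per-pupil figure lets schools of different sizes be compared on one scale, which is the point of the benchmark.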

  3. SPOC Benchmark Case: SNRE Model

    Energy Technology Data Exchange (ETDEWEB)

    Vishal Patel; Michael Eades; Claude Russel Joyner II

    2016-02-01

    The Small Nuclear Rocket Engine (SNRE) was modeled in the Center for Space Nuclear Research’s (CSNR) Space Propulsion Optimization Code (SPOC). SPOC aims to create nuclear thermal propulsion (NTP) geometries quickly to perform parametric studies on design spaces of historic and new NTP designs. The SNRE geometry was modeled in SPOC and a critical core with a reasonable amount of criticality margin was found. The fuel, tie-tube, reflector, and control drum masses were predicted rather well. These are all very important for neutronics calculations, so the active reactor geometries created with SPOC can continue to be trusted. Thermal calculations of the average and hot fuel channels agreed very well. The specific impulse calculations used historically and in SPOC disagree, so mass flow rates and impulses differed. Modeling peripheral and power balance components that do not affect nuclear characteristics of the core is not a feature of SPOC and, as such, these components should continue to be designed using other tools. A full paper detailing the available SNRE data and comparisons with SPOC outputs will be submitted as a follow-up to this abstract.

  4. 21 CFR 880.5180 - Burn sheet.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Burn sheet. 880.5180 Section 880.5180 Food and... Burn sheet. (a) Identification. A burn sheet is a device made of a porous material that is wrapped around a burn victim to retain body heat, to absorb wound exudate, and to serve as a barrier against...

  5. Manifold free multiple sheet superplastic forming

    Science.gov (United States)

    Elmer, John W.; Bridges, Robert L.

    2004-01-13

    Fluid-forming compositions in a container attached to enclosed adjacent sheets are heated to relatively high temperatures to generate fluids (gases) that effect inflation of the sheets. Fluid rates to the enclosed space between the sheets can be regulated by the canal from the container. Inflated articles can be produced by a continuous, rather than batch-type, process.

  6. On the possible eigenoscillations of neutral sheets

    International Nuclear Information System (INIS)

    Almeida, W.A.; Costa, J.M. da; Aruquipa, E.G.; Sudano, J.P.

    1974-12-01

    A neutral sheet model with a hyperbolic-tangent equilibrium magnetic field and a hyperbolic-secant-squared density profile is considered. It is shown that the equation for small oscillations takes the form of an eigenvalue oscillation problem. Computed eigenfrequencies of the geomagnetic neutral sheet were found to be in the range of the resonant frequencies of the geomagnetic plasma sheet computed by other authors.

  7. Using benchmarking for the primary allocation of EU allowances. An application to the German power sector

    Energy Technology Data Exchange (ETDEWEB)

    Schleich, J.; Cremer, C.

    2007-07-01

    Basing allocation of allowances for existing installations under the EU Emissions Trading Scheme on specific emission values (benchmarks) rather than on historic emissions may have several advantages. Benchmarking may recognize early action, provide higher incentives for replacing old installations and result in fewer distortions in case of updating, facilitate EU-wide harmonization of allocation rules or allow for simplified and more efficient closure rules. Applying an optimization model for the German power sector, we analyze the distributional effects of various allocation regimes across and within different generation technologies. Results illustrate that regimes with a single uniform benchmark for all fuels or with a single benchmark for coal- and lignite-fired plants imply substantial distributional effects. In particular, lignite- and old coal-fired plants would be made worse off. Under a regime with fuel-specific benchmarks for gas, coal, and lignite, 50% of the gas-fired plants and 4% of the lignite- and coal-fired plants would face an allowance deficit of at least 10%, while primarily modern lignite-fired plants would benefit. Capping the surplus and shortage of allowances would further moderate the distributional effects, but may tarnish incentives for efficiency improvements and recognition of early action. (orig.)
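    The benchmark-based allocation analyzed in such studies can be sketched in a few lines: free allocation equals a fuel-specific benchmark times output, and the surplus or deficit is the difference to actual emissions. All benchmark values and plant figures below are illustrative assumptions, not numbers from the German-sector model:

    ```python
    # Sketch: free allocation = fuel-specific benchmark (t CO2/MWh) x output,
    # compared against actual emissions to find a surplus or deficit.
    # The benchmark values and plant data are invented placeholders.
    BENCHMARKS = {"gas": 0.365, "coal": 0.750, "lignite": 0.900}  # t CO2/MWh, assumed

    def allowance_balance(fuel, output_mwh, actual_emissions_t):
        """Positive result = allowance surplus, negative = deficit."""
        allocated = BENCHMARKS[fuel] * output_mwh
        return allocated - actual_emissions_t

    # A gas plant emitting below its benchmark earns a surplus:
    surplus = allowance_balance("gas", 1_000_000, 340_000)
    # A coal plant emitting above its benchmark faces a deficit:
    deficit = allowance_balance("coal", 1_000_000, 900_000)
    ```

    The distributional effects discussed in the abstract arise precisely because this balance depends on whether one uniform benchmark or fuel-specific benchmarks are applied.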

  8. Indoor external dose rates due to decorative sheet stone

    Energy Technology Data Exchange (ETDEWEB)

    Lu, C.H.; Sheu, R.D.; Jiang, S.H. [Dept. of Engineering and System Science, National Tsing Hua Univ., Hsinchu (Taiwan)

    2002-03-01

    The specific activities in decorative sheet stone made of granite or marble were measured, whereby the absolute peak efficiency of the HPGe detectors employed in the measurements for the sheet-stone sample was determined using the semi-empirical method. The spatial distribution for the indoor external dose rates due to the radionuclides present in the decorative sheet stone used to clad the floor and the four walls of a standard room was calculated using a three-dimensional point kernel computer code. It was found that the spatial distribution for the indoor dose rates was complex and non-uniform, which represents a difference in relation to the results of earlier studies. (orig.)

  9. Indoor external dose rates due to decorative sheet stone

    International Nuclear Information System (INIS)

    Lu, C.H.; Sheu, R.D.; Jiang, S.H.

    2002-01-01

    The specific activities in decorative sheet stone made of granite or marble were measured, whereby the absolute peak efficiency of the HPGe detectors employed in the measurements for the sheet-stone sample was determined using the semi-empirical method. The spatial distribution for the indoor external dose rates due to the radionuclides present in the decorative sheet stone used to clad the floor and the four walls of a standard room was calculated using a three-dimensional point kernel computer code. It was found that the spatial distribution for the indoor dose rates was complex and non-uniform, which represents a difference in relation to the results of earlier studies. (orig.)

  10. Formability of Annealed Ni-Ti Shape Memory Alloy Sheet

    Science.gov (United States)

    Fann, K. J.; Su, J. Y.; Chang, C. H.

    2018-03-01

    Ni-Ti shape memory alloy has two specific properties, superelasticity and the shape memory effect, and thus is widely applied in diverse industries. To extend its application, this study attempts to investigate the strength and cold formability of its sheet blank, annealed at various temperatures, by hardness test and by an Erichsen-like cupping test. As a result, the higher the annealing temperature, the lower the hardness, the lower the maximum punch load at which the sheet blank fractured, and the lower the Erichsen-like index, i.e., the lower the formability. In general, the Ni-Ti sheet after annealing has an Erichsen-like index between 8 mm and 9 mm. This study has also confirmed via DSC that the Ni-Ti shape memory alloy possesses the austenitic phase and shows superelasticity at room temperature.

  11. Universal Style Sheet Language Environment Modification for the Business Use

    Directory of Open Access Journals (Sweden)

    Jiří Brázdil

    2014-01-01

    This paper deals with the description of the USSL (Universal Style Sheet Language) environment. The USSL style sheet language is platform-independent, and its primary focus is the declarative notation of the appearance of GUI libraries used by imperative programming languages. Software support for the wxWidgets library was implemented, because this library has no support for a separate declarative notation of appearance via a style sheet language. Separating the appearance enables us to reuse and standardize the appearance notation and to develop the appearance independently. In this way it is possible to achieve a consistent appearance across a specific set, or even all, of a company's software products. However, the first proposal of the USSL has several disadvantages which restrict its practical use in business or other environments: the lack of an @import rule for importing other style sheets; support for only a basic set of selectors compared with other style sheet languages for the desktop environment, such as Qt QSS and GTK+ GtkCssProvider; the lack of styling of cursors; and the impossibility of specifying a URL. The placement of widgets and their borders is not solved either. This paper contains suggestions for solving these issues.

  12. Numerical Simulation of Incremental Sheet Forming by Simplified Approach

    Science.gov (United States)

    Delamézière, A.; Yu, Y.; Robert, C.; Ayed, L. Ben; Nouari, M.; Batoz, J. L.

    2011-01-01

    The Incremental Sheet Forming (ISF) process can transform a flat metal sheet into a complex 3D part using a hemispherical tool. The final geometry of the product is obtained by the relative movement between this tool and the blank. The main advantage of the process is that the cost of the tool is very low compared to deep drawing with rigid tools. The main disadvantage is the very low velocity of the tool and thus the large amount of time needed to form the part. Classical contact algorithms give good agreement with experimental results but are time consuming. A Simplified Approach for the contact management between the tool and the blank in ISF is presented here. The general principle of this approach is to impose the displacement of the nodes in contact with the tool at a given position. On a benchmark part, the CPU time of the present Simplified Approach is significantly reduced compared with a classical simulation performed with Abaqus implicit.

  13. Suggested benchmarks for shape optimization for minimum stress concentration

    DEFF Research Database (Denmark)

    Pedersen, Pauli

    2008-01-01

    Shape optimization for minimum stress concentration is vital, important, and difficult. New formulations and numerical procedures imply the need for good benchmarks. The available analytical shape solutions rely on assumptions that are seldom satisfied, so here, we suggest alternative benchmarks...

  14. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    The article analyses the evolution and possibilities of application of benchmarking in the telecommunication sphere. It studies the essence of benchmarking by generalising the approaches of different scientists to the definition of this notion. In order to improve the activity of telecommunication operators, the article identifies the benchmarking technology and the main factors that determine the success of an operator in the modern market economy, as well as the mechanism of benchmarking and the component stages of carrying out benchmarking by a telecommunication operator. It analyses the telecommunication market and identifies the dynamics of its development and tendencies of change in the composition of telecommunication operators and providers. Having generalised the existing experience of benchmarking application, the article identifies the main types of benchmarking of telecommunication operators by the following features: by the level of conduct (branch, inter-branch and international benchmarking); by relation to participation in the conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  15. SU-E-T-148: Benchmarks and Pre-Treatment Reviews: A Study of Quality Assurance Effectiveness

    International Nuclear Information System (INIS)

    Lowenstein, J; Nguyen, H; Roll, J; Walsh, A; Tailor, A; Followill, D

    2015-01-01

    Purpose: To determine the impact benchmarks and pre-treatment reviews have on improving the quality of submitted clinical trial data. Methods: Benchmarks are used to evaluate a site’s ability to develop a treatment that meets a specific protocol’s treatment guidelines prior to placing their first patient on the protocol. A pre-treatment review is an actual patient placed on the protocol in which the dosimetry and contour volumes are evaluated to be per protocol guidelines prior to allowing the beginning of the treatment. A key component of these QA mechanisms is that sites are provided timely feedback to educate them on how to plan per the protocol and to prevent protocol deviations on patients accrued to a protocol. For both benchmarks and pre-treatment reviews, a dose volume analysis (DVA) was performed using MIM™ software. For pre-treatment reviews, a volume contour evaluation was also performed. Results: IROC Houston performed a QA effectiveness analysis of a protocol which required both benchmarks and pre-treatment reviews. In 70 percent of the patient cases submitted, the benchmark played an effective role in assuring that the pre-treatment review of the cases met protocol requirements. The 35 percent of sites failing the benchmark subsequently modified their planning technique to pass the benchmark before being allowed to submit a patient for pre-treatment review. However, in 30 percent of the submitted cases the pre-treatment review failed, where the majority (71 percent) failed the DVA. 20 percent of sites submitting patients failed to correct the dose volume discrepancies indicated by the benchmark case. Conclusion: Benchmark cases and pre-treatment reviews can be an effective QA tool to educate sites on protocol guidelines and to minimize deviations. Without the benchmark cases it is possible that 65 percent of the cases undergoing a pre-treatment review would have failed to meet the protocol's requirements. Support: U24-CA-180803
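    At its core, a dose volume analysis compares cumulative dose-volume histograms (DVHs) against protocol limits. A minimal sketch of a cumulative DVH over a structure's voxels, using synthetic dose values rather than clinical data:

    ```python
    import numpy as np

    # Minimal cumulative DVH: the fraction of a structure's volume receiving
    # at least each dose level. The voxel doses below are synthetic examples,
    # not clinical data or output of any specific planning system.
    def cumulative_dvh(dose_voxels, dose_bins):
        dose_voxels = np.asarray(dose_voxels, dtype=float)
        return np.array([(dose_voxels >= d).mean() for d in dose_bins])

    doses = np.array([10.0, 20.0, 30.0, 40.0, 50.0])  # Gy, one value per voxel
    bins = np.array([0.0, 25.0, 45.0])
    dvh = cumulative_dvh(doses, bins)  # fraction of volume >= each dose bin
    ```

    A protocol constraint such as "no more than 20% of the volume above 45 Gy" then becomes a single comparison against the corresponding DVH entry.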

  16. An Arbitrary Benchmark CAPM: One Additional Frontier Portfolio is Sufficient

    OpenAIRE

    Ekern, Steinar

    2008-01-01

    First draft: July 16, 2008 This version: October 7, 2008 The benchmark CAPM linearly relates the expected returns on an arbitrary asset, an arbitrary benchmark portfolio, and an arbitrary MV frontier portfolio. The benchmark is not required to be on the frontier and may be non-perfectly correlated with the frontier portfolio. The benchmark CAPM extends and generalizes previous CAPM formulations, including the zero beta, two correlated frontier portfolios, riskless augmented frontier, an...
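    The linear relation in any CAPM-style formulation rests on a beta computed against the chosen benchmark portfolio. A minimal sketch of estimating such a beta from return series; the return data are placeholders, and this shows only the standard covariance-based beta, not the paper's generalized frontier algebra:

    ```python
    import numpy as np

    # Beta of an asset against an arbitrary benchmark portfolio, estimated
    # from return series as cov(asset, benchmark) / var(benchmark).
    # The return series below are illustrative placeholders.
    def benchmark_beta(asset_returns, benchmark_returns):
        asset = np.asarray(asset_returns, dtype=float)
        bench = np.asarray(benchmark_returns, dtype=float)
        cov = np.cov(asset, bench, ddof=1)  # 2x2 covariance matrix
        return cov[0, 1] / cov[1, 1]

    bench = np.array([0.01, -0.02, 0.03, 0.00, 0.02])
    asset = 2.0 * bench  # an asset moving twice the benchmark has beta 2
    beta = benchmark_beta(asset, bench)
    ```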

  17. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    Bulej, Lubomír

    2005-01-01

    The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specifically, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and the methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking to a real software project and conclude with a glimpse at the challenges for the fu...

  18. Revaluering benchmarking - A topical theme for the construction industry

    OpenAIRE

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    Over the past decade, benchmarking has increasingly gained a foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly that what may seem valuable is actually keeping researchers and practitioners from studying and questioning the concept objectively. This paper addresses the underlying nature of benchmarking, and accounts for the importance of focusing attention on the sociological impacts benchmarking has in...

  19. The extent of benchmarking in the South African financial sector

    OpenAIRE

    W Vermeulen

    2014-01-01

    Benchmarking is the process of identifying, understanding and adapting outstanding practices from within the organisation or from other businesses, to help improve performance. The importance of benchmarking as an enabler of business excellence has necessitated an in-depth investigation into the current state of benchmarking in South Africa. This research project highlights the fact that respondents realise the importance of benchmarking, but that various problems hinder the effective impleme...

  20. Geometry of thin liquid sheet flows

    Science.gov (United States)

    Chubb, Donald L.; Calfo, Frederick D.; Mcconley, Marc W.; Mcmaster, Matthew S.; Afjeh, Abdollah A.

    1994-01-01

    Incompressible, thin sheet flows have been of research interest for many years. Those studies were mainly concerned with the stability of the flow in a surrounding gas. Squire was the first to carry out a linear, inviscid stability analysis of sheet flow in air and compare the results with experiment. Dombrowski and Fraser did an experimental study of the disintegration of sheet flows using several viscous liquids. They also detected the formation of holes in their sheet flows. Hagerty and Shea carried out an inviscid stability analysis and calculated growth rates, which they compared with experimental values. Taylor studied extensively the stability of thin liquid sheets both theoretically and experimentally. He showed that thin sheets in a vacuum are stable. Brown experimentally investigated thin liquid sheet flows as a method of application of thin films. Clark and Dombrowski carried out a second-order stability analysis for inviscid sheet flows. Lin introduced viscosity into the linear stability analysis of thin sheet flows in a vacuum. Mansour and Chigier conducted an experimental study of the breakup of a sheet flow surrounded by high-speed air. Lin et al. did a linear stability analysis that included viscosity and a surrounding gas. Rangel and Sirignano carried out both a linear and a nonlinear inviscid stability analysis that applies for any density ratio between the sheet liquid and the surrounding gas. Now there is renewed interest in sheet flows because of their possible application as low-mass radiating surfaces. The objective of this study is to investigate the fluid dynamics of sheet flows that are of interest for a space radiator system. Analytical expressions that govern the sheet geometry are compared with experimental results. Since a space radiator will operate in a vacuum, the analysis does not include any drag force on the sheet flow.

  1. Benchmarking of industrial control systems via case-based reasoning

    International Nuclear Information System (INIS)

    Hadjiiski, M.; Boshnakov, K.; Georgiev, Z.

    2013-01-01

    Full text: The recent development of information and communication technologies enables the establishment of virtual consultation centers for the control of specific processes; since the location of the installations has no influence on the results, such centers can serve plants worldwide. The centers can provide consultations regarding the quality of process control and overall enterprise management, as correction factors such as weather conditions, product or service and associated technology, production level, quality of feedstock used and others can also be taken into account. The benchmarking technique is chosen as a tool for analyzing and comparing the quality of the assessed control systems in individual plants. It is a process of gathering, analyzing and comparing data on the characteristics of comparable units in order to assess and compare these characteristics and improve the performance of the particular process, enterprise or organization. By comparing the different processes and adopting the best practices, energy efficiency could be improved and hence the competitiveness of the participating organizations would increase. In the presented work, an algorithm for benchmarking and parametric optimization of a given control system is developed by applying the approaches of Case-Based Reasoning (CBR) and Data Envelopment Analysis (DEA). Expert knowledge and approaches for optimal tuning of control systems are combined. Two of the most common systems for automatic control of different variables in the case of biological wastewater treatment are presented and discussed. Based on analysis of the processes, different cases are defined. Using DEA analysis, the relative efficiencies of 10 systems for automatic control of dissolved oxygen are estimated. The CBR and DEA designed and implemented in the current work are applicable for the purposes of virtual consultation centers. Key words: benchmarking technique, energy efficiency, Case-Based Reasoning (CBR)
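    The DEA step described here assigns each unit a relative efficiency by solving one linear program per unit. A minimal sketch of the classical CCR multiplier model with SciPy, using invented single-input/single-output data rather than the paper's ten dissolved-oxygen control systems:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Sketch of DEA (CCR multiplier model) efficiency scores via linear
    # programming. For each unit o: maximize u.y_o subject to v.x_o = 1 and
    # u.y_j - v.x_j <= 0 for all units j, with u, v >= 0.
    # The input/output data below are invented placeholders.
    def dea_efficiency(X, Y):
        """X: (n, m) inputs, Y: (n, s) outputs; returns one score per unit."""
        n, m = X.shape
        s = Y.shape[1]
        scores = []
        for o in range(n):
            c = np.concatenate([-Y[o], np.zeros(m)])          # minimize -u.y_o
            A_eq = np.concatenate([np.zeros(s), X[o]])[None]  # v.x_o = 1
            A_ub = np.hstack([Y, -X])                         # u.y_j - v.x_j <= 0
            res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                          A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
            scores.append(-res.fun)
        return np.array(scores)

    # Two single-input, single-output units; the second consumes twice the
    # input for the same output and is therefore only half as efficient.
    X = np.array([[1.0], [2.0]])
    Y = np.array([[1.0], [1.0]])
    eff = dea_efficiency(X, Y)
    ```

    The efficient unit scores 1.0 and inefficient units score below 1.0, which is the basis for ranking the compared control systems.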

  2. Benchmarking Academic Anatomic Pathologists: The Association of Pathology Chairs Survey.

    Science.gov (United States)

    Ducatman, Barbara S; Parslow, Tristram

    2016-01-01

    The most common benchmarks for faculty productivity are derived from Medical Group Management Association (MGMA) or Vizient-AAMC Faculty Practice Solutions Center® (FPSC) databases. The Association of Pathology Chairs has also collected similar survey data for several years. We examined the Association of Pathology Chairs annual faculty productivity data and compared it with MGMA and FPSC data to understand the value, inherent flaws, and limitations of benchmarking data. We hypothesized that the variability in calculated faculty productivity is due to the type of practice model and clinical effort allocation. Data from the Association of Pathology Chairs survey on 629 surgical pathologists and/or anatomic pathologists from 51 programs were analyzed. From review of service assignments, we were able to assign each pathologist to a specific practice model: general anatomic pathologists/surgical pathologists, 1 or more subspecialties, or a hybrid of the 2 models. There were statistically significant differences among academic ranks and practice types. When we analyzed our data using each organization's methods, the median results for the anatomic pathologists/surgical pathologists general practice model compared to MGMA and FPSC results for anatomic and/or surgical pathology were quite close. Both MGMA and FPSC data exclude a significant proportion of academic pathologists with clinical duties. We used the more inclusive FPSC definition of clinical "full-time faculty" (0.60 clinical full-time equivalent and above). The correlation between clinical full-time equivalent effort allocation, annual days on service, and annual work relative value unit productivity was poor. This study demonstrates that effort allocations are variable across academic departments of pathology and do not correlate well with either work relative value unit effort or reported days on service. Although the Association of Pathology Chairs-reported median work relative value unit productivity

  3. Ice sheet hydrology from observations

    Energy Technology Data Exchange (ETDEWEB)

    Jansson, Peter [Dept. of Physical Geography and Quaternary Geology, Stockholm Univ-, Stockholm (Sweden)

    2010-11-15

    The hydrological systems of ice sheets are complex. Our view of the system is split, largely due to the complexity of observing the systems. Our basic knowledge of processes has been obtained from smaller glaciers and, although applicable in general to the larger scales of the ice sheets, ice sheets contain features not observable on smaller glaciers due to their size. The generation of water on the ice sheet surface is well understood and can be satisfactorily modeled. The routing of water from the surface down through the ice is not complicated in terms of process. What has been problematic is the way in which the coupling between surface and bed is accomplished through a kilometer of cold ice, but with the studies on crack propagation and lake drainage on Greenland we are beginning to understand also this process, and we know water can be routed through thick cold ice. Water generation at the bed is also well understood, but the main problem preventing realistic estimates of water generation is lack of detailed information about geothermal heat fluxes and their geographical distribution beneath the ice. Although some average value for geothermal heat flux may suffice, for many purposes it is important that such values are not applied to sub-regions of significantly higher fluxes. Water generated by geothermal heat constitutes a constant supply and will likely maintain a steady system beneath the ice sheet. Such a system may include subglacial lakes as steady features, and reconfiguration of the system is tied to time scales on which the ice sheet geometry changes so as to change pressure gradients in the basal system itself. Large-scale reorganization of subglacial drainage systems has been observed beneath ice streams. The stability of an entirely subglacially fed drainage system may hence be perturbed by rapid ice flow. In the case of Antarctic ice streams where such behavior has been observed, the ice streams are underlain by deformable sediments. It is

  4. Periodic folding of viscous sheets

    Science.gov (United States)

    Ribe, Neil M.

    2003-09-01

    The periodic folding of a sheet of viscous fluid falling upon a rigid surface is a common fluid mechanical instability that occurs in contexts ranging from food processing to geophysics. Asymptotic thin-layer equations for the combined stretching-bending deformation of a two-dimensional sheet are solved numerically to determine the folding frequency as a function of the sheet’s initial thickness, the pouring speed, the height of fall, and the fluid properties. As the buoyancy increases, the system bifurcates from “forced” folding driven kinematically by fluid extrusion to “free” folding in which viscous resistance to bending is balanced by buoyancy. The systematics of the numerically predicted folding frequency are in good agreement with laboratory experiments.

  5. Ice sheet hydrology from observations

    International Nuclear Information System (INIS)

    Jansson, Peter

    2010-11-01

    The hydrological systems of ice sheets are complex. Our view of the system is split, largely due to the complexity of observing the systems. Our basic knowledge of processes has been obtained from smaller glaciers and, although applicable in general to the larger scales of the ice sheets, ice sheets contain features not observable on smaller glaciers due to their size. The generation of water on the ice sheet surface is well understood and can be satisfactorily modeled. The routing of water from the surface down through the ice is not complicated in terms of process. What has been problematic is the way in which the coupling between surface and bed is accomplished through a kilometer of cold ice, but with the studies on crack propagation and lake drainage on Greenland we are beginning to understand also this process, and we know water can be routed through thick cold ice. Water generation at the bed is also well understood, but the main problem preventing realistic estimates of water generation is lack of detailed information about geothermal heat fluxes and their geographical distribution beneath the ice. Although some average value for geothermal heat flux may suffice, for many purposes it is important that such values are not applied to sub-regions of significantly higher fluxes. Water generated by geothermal heat constitutes a constant supply and will likely maintain a steady system beneath the ice sheet. Such a system may include subglacial lakes as steady features, and reconfiguration of the system is tied to time scales on which the ice sheet geometry changes so as to change pressure gradients in the basal system itself. Large-scale reorganization of subglacial drainage systems has been observed beneath ice streams. The stability of an entirely subglacially fed drainage system may hence be perturbed by rapid ice flow. In the case of Antarctic ice streams where such behavior has been observed, the ice streams are underlain by deformable sediments. It is

  6. Load Test in Sheet Pile

    OpenAIRE

    Luis Orlando Ibanez

    2016-01-01

    This work discusses experiences in the use of mathematical modeling and testing of hydraulic engineering structures. For this purpose, the results of load tests on sheet piles are presented, evaluating the horizontal and vertical deformations that occur in them. Comparisons between theoretical methods for calculating deformations and mathematical models based on the Finite Element Method are established. Finally, the agreement between the numerical model and the results of the load test ful...

  7. Ohm's law for a current sheet

    Science.gov (United States)

    Lyons, L. R.; Speiser, T. W.

    1985-01-01

    The paper derives an Ohm's law for single-particle motion in a current sheet, where the magnetic field reverses in direction across the sheet. The result is considerably different from the resistive Ohm's law often used in MHD studies of the geomagnetic tail. Single-particle analysis is extended to obtain a self-consistency relation for a current sheet which agrees with previous results. The results are applicable to the concept of reconnection in that the electric field parallel to the current is obtained for a one-dimensional current sheet with constant normal magnetic field. Dissipated energy goes directly into accelerating particles within the current sheet.

  8. MTCB: A Multi-Tenant Customizable database Benchmark

    NARCIS (Netherlands)

    van der Zijden, WIm; Hiemstra, Djoerd; van Keulen, Maurice

    2017-01-01

    We argue that there is a need for Multi-Tenant Customizable OLTP systems. Such systems need a Multi-Tenant Customizable Database (MTC-DB) as a backing. To stimulate the development of such databases, we propose the benchmark MTCB. Benchmarks for OLTP exist and multi-tenant benchmarks exist, but no

  9. Benchmarking in the globalised world and its impact on South ...

    African Journals Online (AJOL)

    In order to understand the potential impact of international benchmarking on South African institutions, it is important to explore the future role of benchmarking on the international level. In this regard, examples of transnational benchmarking activities will be considered. As a result of the involvement of South African ...

  10. Benchmarking a signpost to excellence in quality and productivity

    CERN Document Server

    Karlof, Bengt

    1993-01-01

    According to the authors, benchmarking exerts a powerful leverage effect on an organization, and they consider some of the factors which justify this claim. Describes how to implement benchmarking and exactly what to benchmark. Explains benchlearning, which integrates education, leadership development and organizational dynamics with the actual work being done, and how to make that work more efficient in terms of quality and productivity.

  11. 40 CFR 141.172 - Disinfection profiling and benchmarking.

    Science.gov (United States)

    2010-07-01

    ... benchmarking. 141.172 Section 141.172 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Disinfection-Systems Serving 10,000 or More People § 141.172 Disinfection profiling and benchmarking. (a... sanitary surveys conducted by the State. (c) Disinfection benchmarking. (1) Any system required to develop...

  12. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    Science.gov (United States)

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  13. A New Performance Improvement Model: Adding Benchmarking to the Analysis of Performance Indicator Data.

    Science.gov (United States)

    Al-Kuwaiti, Ahmed; Homa, Karen; Maruthamuthu, Thennarasu

    2016-01-01

    A performance improvement model was developed that focuses on the analysis and interpretation of performance indicator (PI) data using statistical process control and benchmarking. PIs are suitable for comparison with benchmarks only if the data fall within the statistically accepted limit-that is, show only random variation. Specifically, if there is no significant special-cause variation over a period of time, then the data are ready to be benchmarked. The proposed Define, Measure, Control, Internal Threshold, and Benchmark model is adapted from the Define, Measure, Analyze, Improve, Control (DMAIC) model. The model consists of the following five steps: Step 1. Define the process; Step 2. Monitor and measure the variation over the period of time; Step 3. Check the variation of the process; if stable (no significant variation), go to Step 4; otherwise, control variation with the help of an action plan; Step 4. Develop an internal threshold and compare the process with it; Step 5.1. Compare the process with an internal benchmark; and Step 5.2. Compare the process with an external benchmark. The steps are illustrated through the use of health care-associated infection (HAI) data collected for 2013 and 2014 from the Infection Control Unit, King Fahd Hospital, University of Dammam, Saudi Arabia. Monitoring variation is an important strategy in understanding and learning about a process. In the example, HAI was monitored for variation in 2013, and the need to have a more predictable process prompted the need to control variation by an action plan. The action plan was successful, as noted by the shift in the 2014 data, compared to the historical average, and, in addition, the variation was reduced. The model is subject to limitations: For example, it cannot be used without benchmarks, which need to be calculated the same way with similar patient populations, and it focuses only on the "Analyze" part of the DMAIC model.
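    The stability check in Step 3 and the benchmark comparison in Steps 4-5 can be sketched in code. The following is an illustrative Python sketch, not the authors' implementation: it uses a plain 3-sigma individuals-chart rule for special-cause variation (a true I-MR chart would estimate sigma from the average moving range), and the monthly HAI rates are hypothetical numbers.

    ```python
    import statistics

    def control_limits(data):
        """3-sigma control limits (simplified individuals chart)."""
        mean = statistics.mean(data)
        sigma = statistics.stdev(data)
        return mean - 3 * sigma, mean + 3 * sigma

    def is_stable(data):
        """Step 3: no point outside the control limits means no significant
        special-cause variation, so the indicator is ready to benchmark."""
        lcl, ucl = control_limits(data)
        return all(lcl <= x <= ucl for x in data)

    def compare_to_benchmark(data, benchmark):
        """Steps 4-5: compare a stable process with an internal threshold
        or an internal/external benchmark value."""
        if not is_stable(data):
            return "control variation first (action plan)"
        mean = statistics.mean(data)
        return "meets benchmark" if mean <= benchmark else "above benchmark"

    # Hypothetical monthly HAI rates per 1,000 patient-days:
    hai_2014 = [2.1, 1.9, 2.3, 2.0, 2.2, 1.8, 2.1, 2.0, 1.9, 2.2, 2.1, 2.0]
    print(compare_to_benchmark(hai_2014, benchmark=2.5))
    ```

    A single out-of-control point (e.g. appending 5.0 to the series) routes the process back to the action-plan branch before any benchmarking is attempted, which mirrors the model's insistence that only stable data be benchmarked.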

  14. Experimental formability analysis of bondal sandwich sheet

    Science.gov (United States)

    Kami, Abdolvahed; Banabic, Dorel

    2018-05-01

    Metal/polymer/metal sandwich sheets have recently attracted the interest of industries such as the automotive industry. These sandwich sheets have superior properties over single-layer metallic sheets, including good sound and vibration damping and light weight. However, the formability of these sandwich sheets needs to be enhanced, which requires further research. In this paper, the formability of Bondal sheet (DC06/viscoelastic polymer/DC06 sandwich sheet) was studied through different types of experiments. The mechanical properties of Bondal were determined by uniaxial tensile tests. Hemispherical punch stretching and hydraulic bulge tests were carried out to determine the forming limit diagram (FLD) of Bondal. Furthermore, cylindrical and square cup drawing tests were performed in dry and oil-lubricated conditions. These tests were conducted at different blank holding forces (BHFs). An interesting observation in the deep drawing of Bondal sheet was that higher drawing depths were obtained in the dry condition than in the oil-lubricated condition.

  15. Buckling and stretching of thin viscous sheets

    Science.gov (United States)

    O'Kiely, Doireann; Breward, Chris; Griffiths, Ian; Howell, Peter; Lange, Ulrich

    2016-11-01

    Thin glass sheets are used in smartphone, battery and semiconductor technology, and may be manufactured by producing a relatively thick glass slab and subsequently redrawing it to a required thickness. The resulting sheets commonly possess undesired centerline ripples and thick edges. We present a mathematical model in which a viscous sheet undergoes redraw in the direction of gravity, and show that, in a sufficiently strong gravitational field, buckling is driven by compression in a region near the bottom of the sheet, and limited by viscous resistance to stretching of the sheet. We use asymptotic analysis in the thin-sheet, low-Reynolds-number limit to determine the centerline profile and growth rate of such a viscous sheet.

  16. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-01-01

    The SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.
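    STREAM measures sustainable memory bandwidth with four simple vector kernels (copy, scale, add, triad). As a rough illustration of what the triad kernel does, here is a minimal NumPy sketch; this is not the official C benchmark, and measured numbers will differ from a tuned STREAM build, but the bandwidth accounting (three 8-byte arrays moved per element, best time over several repeats) follows the benchmark's convention.

    ```python
    import time
    import numpy as np

    def stream_triad(n=10_000_000, repeats=5, scalar=3.0):
        """Sketch of the STREAM 'triad' kernel: a = b + scalar * c."""
        b = np.random.rand(n)
        c = np.random.rand(n)
        a = np.empty(n)
        best = float("inf")
        for _ in range(repeats):
            t0 = time.perf_counter()
            np.multiply(c, scalar, out=a)   # a = scalar * c
            np.add(a, b, out=a)             # a = b + scalar * c, no temporaries
            best = min(best, time.perf_counter() - t0)
        # 2 loads + 1 store per element, 8 bytes each, best-of-N timing
        gbps = 3 * 8 * n / best / 1e9
        return b, c, a, gbps

    b, c, a, gbps = stream_triad(n=1_000_000)
    print(f"triad best bandwidth: {gbps:.1f} GB/s")
    ```

    The `out=` arguments avoid temporary arrays so that the traffic actually resembles the three-array kernel being timed; a naive `a = b + scalar * c` in NumPy would allocate an intermediate and overstate the bytes moved.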

  17. Benchmark and Continuous Improvement of Performance

    Directory of Open Access Journals (Sweden)

    Alina Alecse Stanciu

    2017-12-01

    Full Text Available The present economic environment challenges us to perform, to think and to re-think our personal strategies in accordance with our entities' strategies, whether we are simply employees or entrepreneurs. It is an environment characterised by Volatility, Uncertainty, Complexity and Ambiguity - a VUCA world in which entities must fight for the position they have gained in the market, disrupt new markets and new economies, and develop their client portfolios, with performance as the final goal. Added to this is the pressure of the driving forces known as the 2030 Megatrends: Globalization 2.0, Environmental Crisis and the Scarcity of Resources, Individualism and Value Pluralism, and Demographic Change. This paper examines whether using benchmarks is an opportunity to increase the competitiveness of Romanian SMEs, and the results show that benchmarking is a powerful instrument, combining a reduced negative impact on the environment with a positive impact on the economy and society.

  18. The Benchmark Test Results of QNX RTOS

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jang Yeol; Lee, Young Jun; Cheon, Se Woo; Lee, Jang Soo; Kwon, Kee Choon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2010-10-15

    A Real-Time Operating System (RTOS) is an Operating System (OS) intended for real-time applications. A benchmark is a point of reference by which something can be measured. QNX is an RTOS developed by QSSL (QNX Software Systems Ltd.) in Canada. ELMSYS is the brand name of a commercially available Personal Computer (PC) for applications such as the Cabinet Operator Module (COM) of the Digital Plant Protection System (DPPS) and the COM of the Digital Engineered Safety Features Actuation System (DESFAS). The ELMSYS PC hardware is being qualified by KTL (Korea Testing Lab.) for use as a Cabinet Operator Module. The QNX RTOS is being dedicated by the Korea Atomic Energy Research Institute (KAERI). This paper describes the outline and benchmarking test results on Context Switching, Message Passing, Synchronization and Deadline Violation of the QNX RTOS on the ELMSYS PC platform.

  19. The Benchmark Test Results of QNX RTOS

    International Nuclear Information System (INIS)

    Kim, Jang Yeol; Lee, Young Jun; Cheon, Se Woo; Lee, Jang Soo; Kwon, Kee Choon

    2010-01-01

    A Real-Time Operating System (RTOS) is an Operating System (OS) intended for real-time applications. A benchmark is a point of reference by which something can be measured. QNX is an RTOS developed by QSSL (QNX Software Systems Ltd.) in Canada. ELMSYS is the brand name of a commercially available Personal Computer (PC) for applications such as the Cabinet Operator Module (COM) of the Digital Plant Protection System (DPPS) and the COM of the Digital Engineered Safety Features Actuation System (DESFAS). The ELMSYS PC hardware is being qualified by KTL (Korea Testing Lab.) for use as a Cabinet Operator Module. The QNX RTOS is being dedicated by the Korea Atomic Energy Research Institute (KAERI). This paper describes the outline and benchmarking test results on Context Switching, Message Passing, Synchronization and Deadline Violation of the QNX RTOS on the ELMSYS PC platform.

  20. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    The SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.