WorldWideScience

Sample records for core benchmark analyses

  1. Benchmark Evaluation of the NRAD Reactor LEU Core Startup Measurements

    Energy Technology Data Exchange (ETDEWEB)

    J. D. Bess; T. L. Maddock; M. A. Marshall

    2011-09-01

    The Neutron Radiography (NRAD) reactor is a 250-kW TRIGA (Training, Research, Isotope Production, General Atomics) conversion-type reactor at the Idaho National Laboratory; it is primarily used for neutron radiography analysis of irradiated and unirradiated fuels and materials. The NRAD reactor was converted from HEU to LEU fuel with 60 fuel elements and brought critical on March 31, 2010. This configuration of the NRAD reactor has been evaluated as an acceptable benchmark experiment and is available in the 2011 editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook) and the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Significant effort went into precisely characterizing all aspects of the reactor core dimensions and material properties; detailed analyses of reactor parameters minimized experimental uncertainties. The largest contributors to the total benchmark uncertainty were the 234U, 236U, Er, and Hf content in the fuel; the manganese content in the stainless steel cladding; and the unknown level of water saturation in the graphite reflector blocks. A simplified benchmark model of the NRAD reactor was prepared with a keff of 1.0012 ± 0.0029 (1σ). Monte Carlo calculations with MCNP5 and KENO-VI and various neutron cross section libraries were performed and compared with the benchmark eigenvalue for the 60-fuel-element core configuration; all calculated eigenvalues are between 0.3 and 0.8% greater than the benchmark value. Benchmark evaluations of the NRAD reactor are beneficial in understanding biases and uncertainties affecting criticality safety analyses of storage, handling, or transportation applications with LEU-Er-Zr-H fuel.
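
    A minimal sketch of how such a comparison is commonly expressed: the calculated eigenvalue is quoted as a bias relative to the benchmark value, in percent and in units of the benchmark's 1σ uncertainty. The benchmark keff and σ below are taken from the abstract; the individual calculated values are illustrative placeholders, not results from the evaluation.

      # Expressing calculated eigenvalues against a benchmark keff and its
      # 1-sigma uncertainty. Calculated values below are placeholders only.
      benchmark_keff = 1.0012
      benchmark_sigma = 0.0029

      calculated = {
          "MCNP5 / ENDF/B-VII.0": 1.0065,    # illustrative
          "KENO-VI / ENDF/B-VII.0": 1.0048,  # illustrative
      }

      for label, keff in calculated.items():
          bias_pct = 100.0 * (keff - benchmark_keff) / benchmark_keff
          n_sigma = (keff - benchmark_keff) / benchmark_sigma
          print(f"{label}: bias {bias_pct:+.2f}% ({n_sigma:+.1f} sigma)")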

  2. TREAT Transient Analysis Benchmarking for the HEU Core

    Energy Technology Data Exchange (ETDEWEB)

    Kontogeorgakos, D. C. [Argonne National Lab. (ANL), Argonne, IL (United States); Connaway, H. M. [Argonne National Lab. (ANL), Argonne, IL (United States); Wright, A. E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-05-01

    This work was performed to support the feasibility study on the potential conversion of the Transient Reactor Test Facility (TREAT) at Idaho National Laboratory from the use of high enriched uranium (HEU) fuel to the use of low enriched uranium (LEU) fuel. The analyses were performed by the GTRI Reactor Conversion staff at the Argonne National Laboratory (ANL). The objective of this study was to benchmark the transient calculations against temperature-limited transients performed in the final operating HEU TREAT core configuration. The MCNP code was used to evaluate steady-state neutronics behavior, and the point kinetics code TREKIN was used to determine core power and energy during transients. The first part of the benchmarking process was to calculate with MCNP all the neutronic parameters required by TREKIN to simulate the transients: the transient rod-bank worth, the prompt neutron generation lifetime, the temperature reactivity feedback as a function of total core energy, and the core-average temperature and peak temperature as functions of total core energy. The results of these calculations were compared against measurements or against reported values as documented in the available TREAT reports. The heating of the fuel was simulated as an adiabatic process. The reported values were extracted from ANL reports, intra-laboratory memos, and experiment logsheets, and in some cases it was not clear if the values were based on measurements, on calculations, or a combination of both. Therefore, it was decided to use the term “reported” values when referring to such data. The methods and results from the HEU core transient analyses will be used for the potential LEU core configurations to predict the converted (LEU) core’s performance.

  3. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    Science.gov (United States)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.
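
    The contention effect described here can be demonstrated with a small, self-contained experiment: run the same bandwidth-bound kernel on an increasing number of processes and watch the per-process kernel time grow as cores compete for the shared memory subsystem. This is a hedged sketch of the general technique, not one of the paper's benchmarks; the array size is an assumption chosen to exceed typical cache sizes.

      # Bandwidth-contention probe: the kernel streams over a large array,
      # doing little arithmetic per byte, so concurrent copies on sibling
      # cores contend for memory bandwidth rather than for ALUs.
      import time
      import numpy as np
      from multiprocessing import Pool

      def stream_kernel(_):
          a = np.ones(20_000_000, dtype=np.float64)  # ~160 MB, far larger than cache
          t0 = time.perf_counter()
          _ = a.sum()                                # memory-bound reduction
          return time.perf_counter() - t0

      if __name__ == "__main__":
          for nproc in (1, 2, 4):
              with Pool(nproc) as pool:
                  times = pool.map(stream_kernel, range(nproc))
              print(f"{nproc} processes: mean kernel time {sum(times)/len(times):.3f} s")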

  4. VENUS-F: A fast lead critical core for benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kochetkov, A.; Wagemans, J.; Vittiglio, G. [SCK.CEN, Boeretang 200, 2400 Mol (Belgium)

    2011-07-01

    The zero-power thermal neutron water-moderated facility VENUS at SCK-CEN has been extensively used for benchmarking in the past. In accordance with GEN-IV design tasks (fast reactor systems and accelerator driven systems), the VENUS facility was modified in 2007-2010 into the fast neutron facility VENUS-F with solid core components. This paper introduces the GUINEVERE and FREYA projects, which are being conducted at the VENUS-F facility, and presents the measurement results obtained with the first critical core. Over the course of these projects, other fast lead benchmarks will also be investigated. The measurement results of the different configurations can all be used as fast neutron benchmarks. (authors)

  5. Surveying and benchmarking techniques to analyse DNA gel fingerprint images.

    Science.gov (United States)

    Heras, Jónathan; Domínguez, César; Mata, Eloy; Pascual, Vico

    2016-11-01

    DNA fingerprinting is a genetic typing technique that allows the analysis of the genomic relatedness between samples and the comparison of DNA patterns. The analysis of DNA gel fingerprint images usually consists of five consecutive steps: image pre-processing, lane segmentation, band detection, normalization, and fingerprint comparison. In this article, we first survey the main methods that have been applied in the literature in each of these stages. Second, we focus on lane-segmentation and band-detection algorithms, as they are the steps that usually require user intervention, and identify the seven core algorithms used for both tasks. Subsequently, we present a benchmark that includes a data set of images, the gold standards associated with those images, and the tools to measure the performance of lane-segmentation and band-detection algorithms. Finally, we implement the core algorithms used both for lane segmentation and band detection, and evaluate their performance using our benchmark. As a conclusion of that study, we obtain that the average profile algorithm is the best starting point for lane segmentation and band detection.
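
    The average profile algorithm mentioned in the conclusion reduces lane segmentation to one-dimensional peak finding: average the image intensity along the band axis and treat local maxima of the resulting profile as lane centres. The sketch below illustrates that idea under simplifying assumptions (a synthetic image and a naive mean-plus-one-standard-deviation threshold); it is not the authors' implementation.

      # Average-profile lane segmentation sketch: collapse the gel image
      # along the band axis, then pick thresholded local maxima as lanes.
      import numpy as np

      def lane_centres(image: np.ndarray, min_separation: int = 5) -> list[int]:
          profile = image.mean(axis=0)                # average intensity per column
          threshold = profile.mean() + profile.std()  # simple illustrative cutoff
          centres = []
          for x in range(1, len(profile) - 1):
              if (profile[x] > threshold
                      and profile[x] >= profile[x - 1]
                      and profile[x] >= profile[x + 1]
                      and (not centres or x - centres[-1] >= min_separation)):
                  centres.append(x)
          return centres

      # Synthetic gel: three bright vertical lanes on a dark background.
      img = np.zeros((100, 60))
      img[:, [10, 30, 50]] = 1.0
      print(lane_centres(img))  # -> [10, 30, 50]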

  6. Benchmarking of hospital information systems: a comparative analysis of benchmarking clusters in German-speaking countries

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system's and information management's costs, performance, and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme covers both the general conditions and the examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their cooperation forms. The benchmarking clusters also deal with different benchmarking subjects. Assessing the costs and quality of application systems, physical data-processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance, and the quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  7. Benchmarking NWP Kernels on Multi- and Many-core Processors

    Science.gov (United States)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc., (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.
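
    Computational intensity, one of the characterization metrics in goal (1), is the ratio of arithmetic performed to bytes moved, and a simple roofline argument turns it into an attainable-performance bound. The sketch below works through that arithmetic for a hypothetical 5-point stencil; the operation counts and per-core peak numbers are assumptions for illustration, not WRF measurements.

      # Roofline-style bound from arithmetic intensity (flops per byte).
      def arithmetic_intensity(flops_per_point: float, bytes_per_point: float) -> float:
          return flops_per_point / bytes_per_point

      # Hypothetical 5-point 2D stencil in double precision:
      # 5 reads + 1 write (8 bytes each), 4 adds + 1 multiply per point.
      ai = arithmetic_intensity(flops_per_point=5, bytes_per_point=6 * 8)

      peak_flops = 10e9   # assumed 10 GF/s per core
      peak_bw = 5e9       # assumed 5 GB/s per core
      attainable = min(peak_flops, ai * peak_bw)  # bandwidth- or compute-bound
      print(f"intensity = {ai:.3f} flop/byte, attainable ~ {attainable/1e9:.2f} GF/s")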

  8. Large Core Code Evaluation Working Group Benchmark Problem Four: neutronics and burnup analysis of a large heterogeneous fast reactor. Part 1. Analysis of benchmark results. [LMFBR]

    Energy Technology Data Exchange (ETDEWEB)

    Cowan, C.L.; Protsik, R.; Lewellen, J.W. (eds.)

    1984-01-01

    The Large Core Code Evaluation Working Group Benchmark Problem Four was specified to provide a stringent test of the current methods which are used in the nuclear design and analysis process. The benchmark specifications provided a base for performing detailed burnup calculations over the first two irradiation cycles for a large heterogeneous fast reactor. Particular emphasis was placed on the techniques for modeling the three-dimensional benchmark geometry, and sensitivity studies were carried out to determine the performance parameter sensitivities to changes in the neutronics and burnup specifications. The results of the Benchmark Four calculations indicated that a linked RZ-XY (Hex) two-dimensional representation of the benchmark model geometry can be used to predict mass balance data, power distributions, regionwise fuel exposure data, and burnup reactivities with good accuracy when compared with the results of direct three-dimensional computations. Most of the small differences in the results of the benchmark analyses by the different participants were attributed to ambiguities in carrying out the regionwise flux renormalization calculations throughout the burnup step.

  9. Benchmarking spin-state chemistry in starless core models

    CERN Document Server

    Sipilä, O; Harju, J

    2015-01-01

    Aims. We aim to present simulated chemical abundance profiles for a variety of important species, with special attention given to spin-state chemistry, in order to provide reference results against which present and future models can be compared. Methods. We employ gas-phase and gas-grain models to investigate chemical abundances in physical conditions corresponding to starless cores. To this end, we have developed new chemical reaction sets for both gas-phase and grain-surface chemistry, including the deuterated forms of species with up to six atoms and the spin-state chemistry of light ions and of the species involved in the ammonia and water formation networks. The physical model is kept simple in order to facilitate straightforward benchmarking of other models against the results of this paper. Results. We find that the ortho/para ratios of ammonia and water are similar in both gas-phase and gas-grain models, at late times in particular, implying that the ratios are determined by gas-phase processes. We d...

  10. IAEA coordinated research project (CRP) on 'Analytical and experimental benchmark analyses of accelerator driven systems'

    Energy Technology Data Exchange (ETDEWEB)

    Abanades, Alberto [Universidad Politecnica de Madrid (Spain); Aliberti, Gerardo; Gohar, Yousry; Talamo, Alberto [ANL, Argonne (United States); Bornos, Victor; Kiyavitskaya, Anna [Joint Institute of Power Eng. and Nucl. Research 'Sosny', Minsk (Belarus); Carta, Mario [ENEA, Casaccia (Italy); Janczyszyn, Jerzy [AGH-University of Science and Technology, Krakow (Poland); Maiorino, Jose [IPEN, Sao Paulo (Brazil); Pyeon, Cheolho [Kyoto University (Japan); Stanculescu, Alexander [IAEA, Vienna (Austria); Titarenko, Yury [ITEP, Moscow (Russian Federation); Westmeier, Wolfram [Wolfram Westmeier GmbH, Ebsdorfergrund (Germany)

    2008-07-01

    In December 2005, the International Atomic Energy Agency (IAEA) started a Coordinated Research Project (CRP) on 'Analytical and Experimental Benchmark Analyses of Accelerator Driven Systems'. The overall objective of the CRP, performed within the framework of the Technical Working Group on Fast Reactors (TWGFR) of the IAEA's Nuclear Energy Department, is to increase the capability of interested Member States in developing and applying advanced reactor technologies in the area of long-lived radioactive waste utilization and transmutation. The specific objective of the CRP is to improve the present understanding of the coupling of an external neutron source (e.g. a spallation source) with a multiplicative sub-critical core. The participants are performing computational and experimental benchmark analyses using integrated calculation schemes and simulation methods. The CRP aims at integrating some of the planned experimental demonstration projects of the coupling between a sub-critical core and an external neutron source (e.g. YALINA Booster in Belarus, and Kyoto University's Critical Assembly (KUCA)). The objective of these experimental programs is to validate computational methods, obtain high energy nuclear data, characterize the performance of sub-critical assemblies driven by external sources, and develop and improve techniques for sub-criticality monitoring. The paper summarizes preliminary results obtained to date for some of the CRP benchmarks. (authors)
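
    As background to the source-core coupling question (standard reactor physics, not a CRP result): a sub-critical core amplifies an external source through the geometric series of fission generations, so the multiplication diverges as the core approaches criticality, which is what makes sub-criticality monitoring both possible and necessary. A small sketch of that relation:

      # Source multiplication in a sub-critical core driven by an external
      # neutron source: 1 + k_s + k_s**2 + ... = 1/(1 - k_s) for k_s < 1.
      def multiplication(k_s: float) -> float:
          assert 0.0 <= k_s < 1.0, "relation only holds for a sub-critical core"
          return 1.0 / (1.0 - k_s)

      for k in (0.90, 0.95, 0.97, 0.99):
          print(f"k_s = {k:.2f}: neutrons per source neutron = {multiplication(k):6.1f}")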

  11. VENUS-2 MOX Core Benchmark: Results of ORNL Calculations Using HELIOS-1.4

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, RJ

    2001-02-02

    The Task Force on Reactor-Based Plutonium Disposition, now an Expert Group, was set up through the Organization for Economic Cooperation and Development/Nuclear Energy Agency to facilitate technical assessments of burning weapons-grade plutonium mixed-oxide (MOX) fuel in U.S. pressurized-water reactors and Russian VVER nuclear reactors. More than ten countries participated in a major initiative of the Task Force: a blind benchmark study to compare code calculations against experimental data for the VENUS-2 MOX core at SCK-CEN in Mol, Belgium. At the Oak Ridge National Laboratory, the HELIOS-1.4 code was used to perform a comprehensive study of pin-cell and core calculations for the VENUS-2 benchmark.

  12. VENUS-2 MOX Core Benchmark: Results of ORNL Calculations Using HELIOS-1.4 - Revised Report

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, RJ

    2001-06-01

    The Task Force on Reactor-Based Plutonium Disposition (TFRPD) was formed by the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) to study reactor physics, fuel performance, and fuel cycle issues related to the disposition of weapons-grade (WG) plutonium as mixed-oxide (MOX) reactor fuel. To advance the goals of the TFRPD, 10 countries and 12 institutions participated in a major TFRPD activity: a blind benchmark study to compare code calculations to experimental data for the VENUS-2 MOX core at SCK-CEN in Mol, Belgium. At Oak Ridge National Laboratory, the HELIOS-1.4 code system was used to perform the comprehensive study of pin-cell and MOX core calculations for the VENUS-2 MOX core benchmark study.

  13. Benchmark Evaluation of the HTR-PROTEUS Absorber Rod Worths (Core 4)

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess; Leland M. Montierth

    2014-06-01

    PROTEUS was a zero-power research reactor at the Paul Scherrer Institute (PSI) in Switzerland. The critical assembly was constructed from a large graphite annulus surrounding a central cylindrical cavity. Various experimental programs were investigated in PROTEUS; during the years 1992 through 1996, it was configured as a pebble-bed reactor and designated HTR-PROTEUS. Various critical configurations were assembled, each accompanied by an assortment of reactor physics experiments including differential and integral absorber rod measurements, kinetics, reaction rate distributions, water ingress effects, and small sample reactivity effects [1]. Four benchmark reports were previously prepared and included in the March 2013 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook) [2], evaluating eleven critical configurations. A summary of that effort was previously provided [3], and an analysis of absorber rod worth measurements for Cores 9 and 10 was performed prior to this analysis and included in PROTEUS-GCR-EXP-004 [4]. In the current benchmark effort, absorber rod worths measured for Core Configuration 4, which was the only core with a randomly-packed pebble loading, have been evaluated for inclusion as a revision to the HTR-PROTEUS benchmark report PROTEUS-GCR-EXP-002.

  14. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  15. Defining core elements and outstanding practice in Nutritional Science through collaborative benchmarking.

    Science.gov (United States)

    Samman, Samir; McCarthur, Jennifer O; Peat, Mary

    2006-01-01

    Benchmarking has been adopted by educational institutions as a potentially sensitive tool for improving learning and teaching. To date there has been limited application of benchmarking methodology in the Discipline of Nutritional Science. The aim of this survey was to define core elements and outstanding practice in Nutritional Science through collaborative benchmarking. Questionnaires that aimed to establish proposed core elements for Nutritional Science, and inquired about definitions of "good" and "outstanding" practice, were posted to named representatives at eight Australian universities. Seven respondents identified core elements that included knowledge of nutrient metabolism and requirement, food production and processing, modern biomedical techniques that could be applied to understanding nutrition, and social and environmental issues as related to Nutritional Science. Four of the eight institutions who agreed to participate in the present survey identified the integration of teaching with research as an indicator of outstanding practice. Nutritional Science is a rapidly evolving discipline. Further and more comprehensive surveys are required to consolidate and update the definition of the discipline, and to identify the optimal way of teaching it. Global ideas and specific regional requirements also need to be considered.

  16. Benchmark calculation for water reflected STACY cores containing low enriched uranyl nitrate solution

    Energy Technology Data Exchange (ETDEWEB)

    Miyoshi, Yoshinori; Yamamoto, Toshihiro; Nakamura, Takemi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-08-01

    In order to validate criticality calculation codes and the related nuclear data libraries, a series of fundamental benchmark experiments on low enriched uranyl nitrate solution has been performed with the Static Experiment Criticality Facility (STACY) at JAERI. The basic core, composed of a single tank with a water reflector, was used to accumulate systematic data with well-known experimental uncertainties. This paper presents the outline of the core configurations of STACY, the standard calculation model, and calculation results with a Monte Carlo code and the JENDL 3.2 nuclear data library. (author)

  17. Development of core design and analyses technology for integral reactor

    Energy Technology Data Exchange (ETDEWEB)

    Zee, Sung Quun; Lee, C. C.; Song, J. S. and others

    1999-03-01

    Integral reactors are developed for applications such as seawater desalination, heat energy for various industries, and power sources for large container ships. In order to enhance the inherent and passive safety features, a low power density concept is chosen for the integral reactor SMART. Moreover, ultra-long cycle and boron-free operation concepts are reviewed for better plant economy and simple design of the reactor system. In particular, the boron-free operation concept brings about large differences in core configuration and reactivity control from those of existing large-size commercial nuclear power plants, and also causes many differences in the safety aspects. The ultimate objectives of this study include the detailed core design of an integral reactor, development of the core design system and technology, and finally acquisition of the system design certificate. The goal of the first stage is the conceptual core design, that is, to establish the design bases and requirements suitable for the boron-free concept, to develop a core loading pattern, to analyze the nuclear, thermal, and hydraulic characteristics of the core, and to perform the core shielding design. Interface data for safety and performance analyses, including fuel design data, are produced for the relevant design analysis groups. The nuclear, thermal-hydraulic, and shielding design and analysis code systems necessary for the core conceptual design are established through modification of the existing design tools and newly developed methodology and code modules. Core safety and performance can be improved by technology development such as boron-free core optimization, advanced core monitoring, and an operational aid system. A feasibility study on the improvement of the core protection and monitoring system will also contribute toward core safety and performance. Both the conceptual core design study and the related technology will provide a concrete basis for the next design phase.

  18. NODAL3 Sensitivity Analysis for NEACRP 3D LWR Core Transient Benchmark (PWR)

    Directory of Open Access Journals (Sweden)

    Surian Pinem

    2016-01-01

    Full Text Available This paper reports the results of a sensitivity analysis of the multidimensional, multigroup neutron diffusion code NODAL3 for the NEACRP 3D LWR core transient benchmarks (PWR). The code input parameters covered in the sensitivity analysis are the radial and axial node sizes (the number of radial nodes per fuel assembly and the number of axial layers), the heat conduction node size in the fuel pellet and cladding, and the maximum time step. The output parameters considered in this analysis follow the above-mentioned core transient benchmarks, that is, power peak, time of power peak, power, averaged Doppler temperature, maximum fuel centerline temperature, and coolant outlet temperature at the end of simulation (5 s). The sensitivity analysis results showed that the radial node size and maximum time step have a significant effect on the transient parameters, especially the time of power peak, for the HZP and HFP conditions. The number of ring divisions for the fuel pellet and cladding has a negligible effect on the transient solutions. For production PWR transient analysis work, based on the present sensitivity analysis results, we recommend NODAL3 users to use 2×2 radial nodes per assembly, 1×18 axial layers per assembly, a maximum time step of 10 ms, and 9 and 1 ring divisions for the fuel pellet and cladding, respectively.

  19. Benchmarking the CRBLASTER Computational Framework on the 350-MHz 49-core Maestro Development Board

    Science.gov (United States)

    Mighell, K. J.

    2012-09-01

    I describe the performance of the CRBLASTER computational framework on a 350-MHz 49-core Maestro Development Board (MDB). The 49-core Interim Test Chip (ITC) was developed by the U.S. Government and is based on the intellectual property of the 64-core TILE64 processor of the Tilera Corporation. The Maestro processor is intended for use in the high radiation environments found in space; the ITC was fabricated using IBM 90-nm CMOS 9SF technology and Radiation-Hardening-by-Design (RHBD) rules. CRBLASTER is a parallel-processing cosmic-ray rejection application based on a simple computational framework that uses the high-performance computing industry standard Message Passing Interface (MPI) library. CRBLASTER was designed to be used by research scientists to easily port image-analysis programs based on embarrassingly-parallel algorithms to a parallel-processing environment such as a multi-node Beowulf cluster or multi-core processors using MPI. I describe my experience of porting CRBLASTER to the 64-core TILE64 processor, the Maestro simulator, and finally the 49-core Maestro processor itself. Performance comparisons using the ITC are presented between emulating all floating-point operations in software and doing all floating-point operations with hardware assist from an IEEE-754 compliant Aurora FPU (floating point unit) that is attached to each of the 49 cores. Benchmarking of the CRBLASTER computational framework using the memory-intensive L.A.COSMIC cosmic ray rejection algorithm and a computation-intensive Poisson noise generator reveals subtleties of the Maestro hardware design. Lastly, I describe the importance of using real scientific applications during the testing phase of next-generation computer hardware; complex real-world scientific applications can stress hardware in novel ways that may not necessarily be revealed while executing simple applications or unit tests.
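
    The embarrassingly-parallel pattern CRBLASTER builds on is scatter/process/gather over independent image tiles. The sketch below shows that skeleton with mpi4py; the synthetic image and the clip operation are stand-ins for illustration, not the L.A.COSMIC algorithm or CRBLASTER's actual decomposition.

      # Scatter image strips to ranks, process independently, gather results.
      # Run with, e.g.: mpirun -n 4 python sketch.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      if rank == 0:
          image = np.random.rand(size * 64, 64)        # synthetic image
          tiles = np.array_split(image, size, axis=0)  # one strip per rank
      else:
          tiles = None

      tile = comm.scatter(tiles, root=0)               # distribute work
      cleaned = np.clip(tile, 0.0, 0.9)                # stand-in per-tile processing
      result = comm.gather(cleaned, root=0)            # collect results

      if rank == 0:
          print(np.vstack(result).shape)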

  1. BENCHMARK EVALUATION OF THE START-UP CORE REACTOR PHYSICS MEASUREMENTS OF THE HIGH TEMPERATURE ENGINEERING TEST REACTOR

    Energy Technology Data Exchange (ETDEWEB)

    John Darrell Bess

    2010-05-01

    The benchmark evaluation of the start-up core reactor physics measurements performed with Japan’s High Temperature Engineering Test Reactor, in support of the Next Generation Nuclear Plant Project and Very High Temperature Reactor Program activities at the Idaho National Laboratory, has been completed. The evaluation was performed using MCNP5 with ENDF/B-VII.0 nuclear data libraries and according to guidelines provided for inclusion in the International Reactor Physics Experiment Evaluation Project Handbook. Results provided include an updated evaluation of the initial six critical core configurations (five annular and one fully-loaded). The calculated keff eigenvalues agree within 1σ of the benchmark values. Reactor physics measurements that were evaluated include reactivity effects measurements such as excess reactivity during the core loading process and shutdown margins for the fully-loaded core, four isothermal temperature reactivity coefficient measurements for the fully-loaded core, and axial reaction rate measurements in the instrumentation columns of three core configurations. The calculated values agree well with the benchmark experiment measurements. Fully subcritical and warm critical configurations of the fully-loaded core were also assessed. The calculated keff eigenvalues for these two configurations also agree within 1σ of the benchmark values. The reactor physics measurement data can be used in the validation and design development of future High Temperature Gas-cooled Reactor systems.

  2. Radiocarbon analyses along the EDML ice core in Antarctica

    NARCIS (Netherlands)

    van de Wal, R.S.W.; Meijer, H.A.J.; van Rooij, M.; van der Veen, C.

    2007-01-01

    Samples, 17 in total, from the EDML core drilled at Kohnen station Antarctica are analysed for 14CO and 14CO2 with a dry-extraction technique in combination with accelerator mass spectrometry. Results of the in situ produced 14CO fraction show a very low concentration of in situ produced 14CO. Despite these low levels in carbon monoxide, a significant in situ production is observed in the carbon dioxide fraction.

  3. Radiocarbon analyses along the EDML ice core in Antarctica

    NARCIS (Netherlands)

    Van de Wal, R. S. W.; Meijer, H. A. J.; De Rooij, M.; Van der Veen, C.

    2007-01-01

    Samples, 17 in total, from the EDML core drilled at Kohnen station Antarctica are analysed for (CO)-C-14 and (CO2)-C-14 with a dry-extraction technique in combination with accelerator mass spectrometry. Results of the in situ produced (CO)-C-14 fraction show a very low concentration of in situ produced (CO)-C-14. Despite these low levels in carbon monoxide, a significant in situ production is observed in the carbon dioxide fraction.

  4. A highly heterogeneous 3D PWR core benchmark: deterministic and Monte Carlo method comparison

    Science.gov (United States)

    Jaboulay, J.-C.; Damian, F.; Douce, S.; Lopez, F.; Guenaut, C.; Aggery, A.; Poinot-Salanon, C.

    2014-06-01

    Physical analyses of the LWR potential performances with regard to fuel utilization require that an important part of the work be dedicated to the validation of the deterministic models used for these analyses. Advances in both codes and computer technology give the opportunity to perform the validation of these models on complex 3D core configurations close to the physical situations encountered (both steady-state and transient configurations). In this paper, we used the Monte Carlo transport code TRIPOLI-4® to describe a whole 3D large-scale and highly-heterogeneous LWR core. The aim of this study is to validate the deterministic CRONOS2 code against the Monte Carlo code TRIPOLI-4® in a relevant PWR core configuration. As a consequence, a 3D pin-by-pin model with a consistent number of volumes (4.3 million) and media (around 23,000) is established to precisely characterize the core at equilibrium cycle, namely using refined burn-up and moderator density maps. The configuration selected for this analysis is a very heterogeneous PWR high conversion core with fissile (MOX fuel) and fertile zones (depleted uranium). Furthermore, a tight pitch lattice is selected (to increase conversion of 238U into 239Pu), which leads to a harder neutron spectrum compared to a standard PWR assembly. In these conditions two main subjects will be discussed: the Monte Carlo variance calculation and the assessment of the diffusion operator with two energy groups for the core calculation.
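
    On the first of those two subjects, the usual starting point is batch statistics: treat the cycle-wise (or batch-wise) keff estimates as samples and quote their mean and standard error, keeping in mind that inter-cycle correlation makes the naive standard error optimistic. A hedged sketch with synthetic numbers, not the TRIPOLI-4® estimator itself:

      # Naive batch-statistics estimate of the Monte Carlo variance of keff.
      # Cycle-wise estimates below are synthetic; real cycles are correlated,
      # which biases this standard error low.
      import random
      import statistics

      rng = random.Random(7)
      k_cycles = [1.002 + rng.gauss(0.0, 0.0008) for _ in range(500)]

      mean_k = statistics.mean(k_cycles)
      stderr = statistics.stdev(k_cycles) / len(k_cycles) ** 0.5
      print(f"keff = {mean_k:.5f} +/- {stderr:.5f} (naive 1-sigma)")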

  5. 3-D core modelling of RIA transient: the TMI-1 benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Ferraresi, P. [CEA Cadarache, Institut de Protection et de Surete Nucleaire, Dept. de Recherches en Securite, 13 - Saint Paul Lez Durance (France); Studer, E. [CEA Saclay, Dept. Modelisation de Systemes et Structures, 91 - Gif sur Yvette (France); Avvakumov, A.; Malofeev, V. [Nuclear Safety Institute of Russian Research Center, Kurchatov Institute, Moscow (Russian Federation); Diamond, D.; Bromley, B. [Nuclear Energy and Infrastructure Systems Div., Brookhaven National Lab., BNL, Upton, NY (United States)

    2001-07-01

    The increase of fuel burnup in core management raises the problem of evaluating the energy deposited during Reactivity Insertion Accidents (RIA). In order to evaluate this energy precisely, 3-D approaches are used more and more frequently in core calculations. This 'best-estimate' approach requires the evaluation of code uncertainties. To contribute to this evaluation, a code benchmark has been launched. A 3-D modelling of the TMI-1 central Ejected Rod Accident with zero and intermediate initial powers was carried out with three different methods of calculation for an inserted reactivity fixed at 1.2 $ and 1.26 $, respectively. The studies implemented with the neutronics codes PARCS (BNL) and CRONOS (IPSN/CEA) describe a homogeneous assembly, whereas the BARS (KI) code allows a pin-by-pin representation (CRONOS has both possibilities). All the calculations are consistent, the variation in figures resulting mainly from the method used to build cross sections and reflector constants. The maximum rise in enthalpy for the intermediate initial power (33% of nominal power) calculation is, for this academic calculation, about 30 cal/g. This work will be completed in a subsequent step by an evaluation of the uncertainty induced by the uncertainty on model parameters, and a sensitivity study of the key parameters for a peripheral Rod Ejection Accident. (authors)

  6. Continuous melting and ion chromatographic analyses of ice cores.

    Science.gov (United States)

    Huber, T M; Schwikowski, M; Gäggeler, H W

    2001-06-22

    A new method for determining concentrations of organic and inorganic ions in ice cores by continuous melting and contemporaneous ion chromatographic analysis was developed. A subcore is melted on a melting device and the meltwater produced is collected in two parallel sample loops and then analyzed simultaneously by two ion chromatographs, one for anions and one for cations. For most of the analyzed species, blank values lower than or equal to those of the conventional analysis were achieved with the continuous melting and analysis technique. Comparison of the continuous melting and ion chromatographic analysis with the conventional analysis of a real ice core segment showed good agreement in concentration profiles and total amounts of ionic species. Thus, the newly developed method is well suited for ice core analysis and has the advantages of lower ice consumption, less time-consuming sample preparation, and lower risk of contamination.

  7. VIPRE modeling of VVER-1000 reactor core for DNB analyses

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Y.; Nguyen, Q. [Westinghouse Electric Corporation, Pittsburgh, PA (United States); Cizek, J. [Nuclear Research Institute, Prague (Czech Republic)

    1995-09-01

    Based on the one-pass modeling approach, the hot channels and the VVER-1000 reactor core can be modeled in 30 channels for DNB analyses using the VIPRE-01/MOD02 (VIPRE) code (VIPRE is owned by Electric Power Research Institute, Palo Alto, California). The VIPRE one-pass model does not compromise any accuracy in the hot channel local fluid conditions. Extensive qualifications include sensitivity studies of radial noding and crossflow parameters and comparisons with the results from THINC and CALOPEA subchannel codes. The qualifications confirm that the VIPRE code with the Westinghouse modeling method provides good computational performance and accuracy for VVER-1000 DNB analyses.

  8. Creation of a Full-Core HTR Benchmark with the Fort St. Vrain Initial Core and Assessment of Uncertainties in the FSV Fuel Composition and Geometry

    Energy Technology Data Exchange (ETDEWEB)

    Martin, William R.; Lee, John C.; Baxter, Alan; Wemple, Chuck

    2012-03-31

    Information and measured data from the initial Fort St. Vrain (FSV) high temperature gas reactor core are used to develop a benchmark configuration to validate computational methods for analysis of a full-core, commercial HTR configuration. Large uncertainties in the geometry and composition data for the FSV fuel and core are identified, including: (1) the relative numbers of fuel particles for the four particle types, (2) the distribution of fuel kernel diameters for the four particle types, (3) the Th:U ratio in the initial FSV core, and (4) the buffer thickness for the fissile and fertile particles. Sensitivity studies were performed to assess each of these uncertainties. A number of methods were developed to assist in these studies, including: (1) the automation of MCNP5 input files for FSV using Python scripts, (2) a simple method to verify isotopic loadings in MCNP5 input files, (3) an automated procedure to conduct a coupled MCNP5-RELAP5 analysis for a full-core FSV configuration with thermal-hydraulic feedback, and (4) a methodology for sampling kernel diameters from arbitrary power law and Gaussian PDFs that preserved fuel loading and packing factor constraints. A reference FSV fuel configuration was developed based on having a single diameter kernel for each of the four particle types, preserving known uranium and thorium loadings and packing factor (58%). Three fuel models were developed, based on representing the fuel as a mixture of kernels with two diameters, four diameters, or a continuous range of diameters. The fuel particles were put into a fuel compact using either a lattice-based approach or a stochastic packing methodology from RPI, and simulated with MCNP5. The results of the sensitivity studies indicated that the uncertainties in the relative numbers and sizes of fissile and fertile kernels were not important, nor were the distributions of kernel diameters within their diameter ranges. The uncertainty in the Th:U ratio in the initial FSV core was ...
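
    One way to realize the constrained sampling in method (4) is to keep drawing diameters from the (truncated) distribution until the kernel volume implied by the fuel loading and packing factor is reached, so the totals are preserved by construction. The sketch below assumes a truncated Gaussian and invented dimensions; it illustrates the idea, not the authors' actual scripts.

      # Draw kernel diameters from a truncated Gaussian until the target fuel
      # volume (set by loading and packing factor) is filled.
      import math
      import random

      def sample_kernels(target_volume_mm3: float, mean_um: float, sigma_um: float,
                         d_min_um: float, d_max_um: float, seed: int = 1) -> list[float]:
          rng = random.Random(seed)
          diameters, volume = [], 0.0
          while volume < target_volume_mm3:
              d = rng.gauss(mean_um, sigma_um)
              if not d_min_um <= d <= d_max_um:           # truncate the distribution
                  continue
              diameters.append(d)
              volume += math.pi / 6.0 * (d * 1e-3) ** 3   # sphere volume in mm^3
          return diameters

      kernels = sample_kernels(target_volume_mm3=10.0, mean_um=350.0,
                               sigma_um=25.0, d_min_um=300.0, d_max_um=400.0)
      print(len(kernels), "kernels drawn")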

  9. Validation of HELIOS for ATR Core Follow Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Bays, Samuel E. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Swain, Emily T. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Crawford, Douglas S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Nigg, David W. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-03-01

    This work summarizes the validation analyses for the HELIOS code to support core design and safety assurance calculations of the Advanced Test Reactor (ATR). Past and current core safety assurance is performed by the PDQ-7 diffusion code, a state-of-the-art reactor physics simulation tool from the nuclear industry’s earlier days. Over the past twenty years, improvements in computational speed have enabled the use of modern neutron transport methodologies to replace the role of diffusion theory for simulation of complex systems, such as the ATR. More exact methodologies have enabled a paradigm shift away from highly tuned codes that force compliance with a bounding safety envelope, and towards codes regularly validated against routine measurements. To validate HELIOS, the 16 ATR operational cycles from late-2009 to present were modeled. The computed power distribution was compared against data collected by the ATR’s on-line power surveillance system. It was found that the ATR’s lobe powers could be determined with ±10% accuracy. Also, the ATR’s cold startup shim configuration for each of these 16 cycles was estimated and compared against the reported critical position from the reactor log-book. HELIOS successfully predicted criticality within the tolerance set by the ATR startup procedure for 13 out of the 16 cycles, compared with 12 for PDQ (without empirical adjustment). These findings, as well as other insights discussed in this report, suggest that HELIOS is highly suited to replace PDQ for core safety assurance of the ATR. Furthermore, a modern verification and validation framework has been established that allows reactor and fuel performance data to be computed with a known degree of accuracy and stated uncertainty.

  10. Mars/master coupled system calculation of the OECD MSLB benchmark exercise 3 with refined core thermal-hydraulic nodalization

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, J.J.; Joo, H.G.; Cho, B.O.; Zee, S.Q.; Lee, W.J. [Korea Atomic Energy Research Inst., Daejeon (Korea, Republic of)

    2001-07-01

    To assess the performance of KAERI's coupled multi-dimensional system thermal-hydraulics (T/H) and three-dimensional (3-D) kinetics code, MARS/MASTER, Exercise III of the OECD main steam line break benchmark problem is solved. The coupled code is capable of employing an individual flow channel for each fuel assembly as well as lumped ones. The basic analysis model of the reference plant consists of four major components: a 3-D core neutronics model, a 3-D thermal-hydraulic model for the reactor vessel employing lumped flow channels, a refined core T/H model, and a 1-D T/H model for the coolant system. Calculations were performed with and without the refined core T/H model. The results of the basic calculation performed without the refined core T/H model show that the core power distribution evolves to a highly localized shape due to the presence of a stuck rod, as well as asymmetric flow distribution in the reactor core. The results of the refined core T/H model indicate that the local peaking factor can be reduced by as much as 22% through accurate representation of the local T/H feedback effects. Nonetheless, the global transient behaviors are not significantly affected. (author)

  11. Radiochemical analyses of surface water from U.S. Geological Survey hydrologic bench-mark stations

    Science.gov (United States)

    Janzer, V.J.; Saindon, L.G.

    1972-01-01

    The U.S. Geological Survey's program for collecting and analyzing surface-water samples for radiochemical constituents at hydrologic bench-mark stations is described. Analytical methods used during the study are described briefly, and data obtained from 55 of the network stations in the United States during the period from 1967 to 1971 are given in tabular form. Concentration values are reported for dissolved uranium, radium, gross alpha and gross beta radioactivity. Values are also given for suspended gross alpha radioactivity in terms of natural uranium. Suspended gross beta radioactivity is expressed both as the equilibrium mixture of strontium-90/yttrium-90 and as cesium-137. Other physical parameters reported which describe the samples include the concentrations of dissolved and suspended solids, the water temperature, and stream discharge at the time of the sample collection.

  12. Radiocarbon analyses along the EDML ice core in Antarctica

    Energy Technology Data Exchange (ETDEWEB)

    Wal, R.S.W. van de; Veen, C. van der [Utrecht Univ. (Netherlands). Inst. for Marine and Atmospheric research; Meijer, H.A.J.; De Rooij, M. [Univ. of Groningen (Netherlands). Center for Isotope Research

    2007-02-15

    Samples, 17 in total, from the EDML core drilled at Kohnen station Antarctica are analysed for 14CO and 14CO2 with a dry-extraction technique in combination with accelerator mass spectrometry. Results of the in situ produced 14CO fraction show a very low concentration of in situ produced 14CO. Despite these low levels in carbon monoxide, a significant in situ production is observed in the carbon dioxide fraction. For the first time we found background values for the ice samples which are equal to line blanks. The data set is used to test a model for the production of 14C in the ice matrix, in combination with degassing as 14CO2 and possibly as 14CO into the air bubbles. Application of the model, for which no independent validation is yet possible, offers the opportunity to use radiocarbon analysis as a dating technique for the air bubbles in the ice. Assigning an arbitrary error of 25% to the calculation of the in situ production leads to age estimates, after correction for the in situ production, which are in agreement with age estimates based on a volcanic layer match of EDML to the Dome C timescale in combination with a correction for firn diffusion.
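
    To make the correction step concrete, the sketch below applies the radioactive decay law to a measured 14C content after subtracting a modelled in situ component, and propagates the 25% error mentioned above. All concentrations are invented for illustration; the actual EDML treatment involves the full production and degassing model.

      # Correct a measured 14C content for an assumed in situ component,
      # then convert the trapped (atmospheric) fraction to an age.
      import math

      T_HALF = 5730.0                     # 14C half-life, years
      MEAN_LIFE = T_HALF / math.log(2.0)  # ~8267 years

      measured = 120.0   # total 14C in sample (arbitrary units, invented)
      in_situ = 35.0     # modelled in situ production (same units, invented)
      initial = 150.0    # expected 14C at pore close-off (same units, invented)

      trapped = measured - in_situ                      # atmospheric component only
      age = -MEAN_LIFE * math.log(trapped / initial)    # radioactive decay law

      # Propagate the assumed 25% (1-sigma) error on the in situ term.
      d_age = MEAN_LIFE * (0.25 * in_situ) / trapped
      print(f"age ~ {age:.0f} +/- {d_age:.0f} yr")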

  13. Applications of Integral Benchmark Data

    Energy Technology Data Exchange (ETDEWEB)

    Giuseppe Palmiotti; Teruhiko Kugo; Fitz Trumble; Albert C. (Skip) Kahler; Dale Lancaster

    2014-10-09

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) provide evaluated integral benchmark data that may be used for validation of reactor physics / nuclear criticality safety analytical methods and data, nuclear data testing, advanced modeling and simulation, and safety analysis licensing activities. The handbooks produced by these programs are used in over 30 countries. Five example applications are presented in this paper: (1) Use of IRPhEP Data in Uncertainty Analyses and Cross Section Adjustment, (2) Uncertainty Evaluation Methods for Reactor Core Design at JAEA Using Reactor Physics Experimental Data, (3) Application of Benchmarking Data to a Broad Range of Criticality Safety Problems, (4) Cross Section Data Testing with ICSBEP Benchmarks, and (5) Use of the International Handbook of Evaluated Reactor Physics Benchmark Experiments to Support the Power Industry.

  14. Benchmark of SCALE (SAS2H) isotopic predictions of depletion analyses for San Onofre PWR MOX fuel

    Energy Technology Data Exchange (ETDEWEB)

    Hermann, O.W.

    2000-02-01

    The isotopic composition of mixed-oxide (MOX) fuel, fabricated with both uranium and plutonium, after discharge from reactors is of significant interest to the Fissile Materials Disposition Program. The validation of the SCALE (SAS2H) depletion code for use in the prediction of isotopic compositions of MOX fuel, similar to previous validation studies on uranium-only fueled reactors, has corresponding significance. The EEI-Westinghouse Plutonium Recycle Demonstration Program examined the use of MOX fuel in the San Onofre PWR, Unit 1, during cycles 2 and 3. Isotopic analyses of the MOX spent fuel were conducted on 13 actinides and 148Nd by either mass or alpha spectrometry. Six fuel pellet samples were taken from four different fuel pins of an irradiated MOX assembly. The measured actinide inventories from those samples have been used to benchmark SAS2H for MOX fuel applications. The average percentage differences in the code results compared with the measurements were −0.9% for 235U and 5.2% for 239Pu. The differences for most of the isotopes were significantly larger than in the cases for uranium-only fueled reactors. In general, comparisons of code results with alpha spectrometer data showed very large differences, although the differences in the calculations compared with mass spectrometer analyses were not much larger than those for uranium-only fueled reactors. This benchmark study should be useful in estimating uncertainties of inventory, criticality, and dose calculations of MOX spent fuel.

  15. Analysing Student Performance Using Sparse Data of Core Bachelor Courses

    Science.gov (United States)

    Saarela, Mirka; Karkkainen, Tommi

    2015-01-01

    Curricula for Computer Science (CS) degrees are characterized by the strong occupational orientation of the discipline. In the BSc degree structure, with clearly separate CS core studies, the learning skills for these and other required courses may vary a lot, which is shown in students' overall performance. To analyze this situation, we apply…

  16. Benchmarking CRBLASTER on the 350-MHz 49-core Maestro Development Board

    CERN Document Server

    Mighell, Kenneth J

    2012-01-01

    I describe the performance of the CRBLASTER computational framework on a 350-MHz 49-core Maestro Development Board (MDB). The 49-core Interim Test Chip (ITC) was developed by the U.S. Government and is based on the intellectual property of the 64-core TILE64 processor of the Tilera Corporation. The Maestro processor is intended for use in the high radiation environments found in space; the ITC was fabricated using IBM 90-nm CMOS 9SF technology and Radiation-Hardening-by-Design (RHBD) rules. CRBLASTER is a parallel-processing cosmic-ray rejection application based on a simple computational framework that uses the high-performance computing industry standard Message Passing Interface (MPI) library. CRBLASTER was designed to be used by research scientists to easily port image-analysis programs based on embarrassingly-parallel algorithms to a parallel-processing environment such as a multi-node Beowulf cluster or multi-core processors using MPI. I describe my experience of porting CRBLASTER to the 64-core TILE64 ...

  17. Benchmark calculation of no-core Monte Carlo shell model in light nuclei

    CERN Document Server

    Abe, T; Otsuka, T; Shimizu, N; Utsuno, Y; Vary, J P; DOI: 10.1063/1.3584062

    2011-01-01

    The Monte Carlo shell model is applied for the first time to the calculation of the no-core shell model in light nuclei. The results are compared with those of the full configuration interaction. The agreement between them is within a few percent at most.

  18. Development and application of neutron transport methods and uncertainty analyses for reactor core calculations. Technical report

    Energy Technology Data Exchange (ETDEWEB)

    Zwermann, W.; Aures, A.; Bernnat, W.; and others

    2013-06-15

    This report documents the status of the research and development goals reached within the reactor safety research project RS1503 "Development and Application of Neutron Transport Methods and Uncertainty Analyses for Reactor Core Calculations" as of the first quarter of 2013. The superordinate goal of the project is the development, validation, and application of neutron transport methods and uncertainty analyses for reactor core calculations. These calculation methods will mainly be applied to problems related to the core behaviour of light water reactors and innovative reactor concepts. The contributions of this project towards achieving this goal are the further development, validation, and application of deterministic and stochastic calculation programmes and of methods for uncertainty and sensitivity analyses, as well as the assessment of artificial neural networks, for providing a complete nuclear calculation chain. This comprises processing nuclear basis data, creating multi-group data for diffusion and transport codes, obtaining reference solutions for stationary states with Monte Carlo codes, performing coupled 3D full core analyses in diffusion approximation and with other deterministic and also Monte Carlo transport codes, and implementing uncertainty and sensitivity analyses with the aim of propagating uncertainties through the whole calculation chain from fuel assembly, spectral, and depletion calculations to coupled transient analyses. This calculation chain shall be applicable to light water reactors and also to innovative reactor concepts, and therefore has to be extensively validated with the help of benchmarks and critical experiments.
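
    The propagation step described at the end of this abstract is commonly realized by sampling: perturb the uncertain inputs, rerun the calculation chain for each sample, and take the spread of the outputs as the propagated uncertainty. A minimal sketch with a stand-in model (the function and the 1σ values are invented; a real chain would call the spectral, depletion, and transient codes):

      # Sampling-based uncertainty propagation through a stand-in "chain".
      import random
      import statistics

      def calculation_chain(capture_xs: float, fission_xs: float) -> float:
          """Stand-in for the real chain; returns a toy multiplication factor."""
          return fission_xs / (fission_xs + capture_xs)

      rng = random.Random(42)
      samples = []
      for _ in range(1000):
          # sample each cross section within an assumed 1-sigma uncertainty
          capture = rng.gauss(1.00, 0.02)
          fission = rng.gauss(1.50, 0.03)
          samples.append(calculation_chain(capture, fission))

      print(f"k = {statistics.mean(samples):.4f} +/- {statistics.stdev(samples):.4f}")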

  19. Continuous ice-core chemical analyses using inductively coupled plasma mass spectrometry.

    Science.gov (United States)

    McConnell, Joseph R; Lamorey, Gregg W; Lambert, Steven W; Taylor, Kendrick C

    2002-01-01

    Impurities trapped in ice sheets and glaciers have the potential to provide detailed, high temporal resolution proxy information on paleo-environments, atmospheric circulation, and environmental pollution through the use of chemical, isotopic, and elemental tracers. We present a novel approach to ice-core chemical analyses in which an ice-core melter is coupled directly with both an inductively coupled plasma mass spectrometer and a traditional continuous flow analysis system. We demonstrate this new approach using replicated measurements of ice-core samples from Summit, Greenland. With this method, it is possible to readily obtain continuous, exactly coregistered concentration records for a large number of elements and chemical species at ppb and ppt levels and at unprecedented depth resolution. Such very high depth resolution, multiparameter measurements will significantly expand the use of ice-core records for environmental proxies.

  20. An Ice Core Melter System for Continuous Major and Trace Chemical Analyses of a New Mt. Logan Summit Ice Core

    Science.gov (United States)

    Osterberg, E. C.; Handley, M. J.; Sneed, S. D.; Mayewski, P. A.; Kreutz, K. J.; Fisher, D. A.

    2004-12-01

    The ice core melter system at the University of Maine Climate Change Institute has recently been modified and updated to allow high-resolution analysis of a new Mt. Logan summit ice core (187 m to bedrock), for analyses of 34 trace elements (Sr, Cd, Sb, Cs, Ba, Pb, Bi, U, As, Al, S, Ca, Ti, V, Cr, Mn, Fe, Co, Cu, Zn, REE suite) by inductively coupled plasma mass spectrometry (ICP-MS), 8 major ions (Na+, Ca2+, Mg2+, K+, Cl-, SO42-, NO3-, MSA) by ion chromatography (IC), stable water isotopes (δ18O, δD, d), and volcanic tephra. The UMaine continuous melter (UMCoM) system is housed in a dedicated clean room with HEPA-filtered air. Standard clean room procedures are employed during melting. A Wagenbach-style continuous melter system has been modified to include a pure nickel melthead that can be easily dismantled for thorough cleaning. The system allows melting of both ice and firn without wicking of the meltwater into unmelted core. In contrast to ice core melter systems in which the meltwater is directly channeled to online instruments for continuous flow analyses, the UMCoM system collects discrete samples for each chemical analysis under ultraclean conditions. Meltwater from the pristine innermost section of the ice core is split between one fraction collector that accumulates ICP-MS samples in acid pre-cleaned polypropylene vials under a class-100 HEPA clean bench, and a second fraction collector that accumulates IC samples. A third fraction collector accumulates isotope and tephra samples from the potentially contaminated outer portion of the core. This method is advantageous because an archive of each sample remains for subsequent analyses (including trace element isotope ratios), and ICP-MS analytes are scanned for longer intervals and in replicate. Method detection limits, calculated from de-ionized water blanks passed through the entire UMCoM system, are below 10% of average Mt. Logan values. A strong correlation (R2>0.9) between Ca and S concentrations measured on different ...

  1. Cycle 0(CY1991) NLS trade studies and analyses report. Book 1: Structures and core vehicle

    Science.gov (United States)

    1992-01-01

    This report (SR-1: Structures, Trades, and Analysis) documents the Core Tankage Trades and analyses performed in support of the National Launch System (NLS) Cycle 0 preliminary design activities. The report covers trades that were conducted on the Vehicle Assembly, Fwd Skirt, LO2 Tank, Intertank, LH2 Tank, and Aft Skirt of the NLS Core Tankage. For each trade study, a two-page executive summary and the detailed trade study are provided. The trade studies contain study results, recommended changes to the Cycle 0 Baselines, and suggested follow-on tasks to be performed during Cycle 1.

  2. Web Server Benchmark Application WiiBench using Erlang/OTP R11 and Fedora-Core Linux 5.0

    CERN Document Server

    Mutiara, A B

    2007-01-01

    As the web grows and the amount of traffic on web servers increases, performance problems begin to appear, such as the number of users that can access the server simultaneously, the number of requests that can be handled by the server per second (requests per second), bandwidth consumption, and hardware utilization such as memory and CPU. To give better quality of service (QoS), web hosting providers, as well as the system administrators and network administrators who manage the server, need a benchmark application to measure the capabilities of their servers. The application is intended to work under Linux/Unix-like platforms and is built using Erlang/OTP R11 as a concurrency-oriented language under Fedora Core Linux 5.0. WiiBench is divided into two main parts, the controller section and the launcher section. The controller is the core of the application. It has several duties, such as reading the benchmark scenario file, configuring the program b...
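
    WiiBench itself is written in Erlang/OTP; purely as a language-neutral illustration of what such a load generator measures (successful requests and requests per second against a target server), the following minimal Python sketch plays the combined role of controller and launchers. The target URL, worker count, and request total are hypothetical.

        import time
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        TARGET = "http://localhost:8080/"  # hypothetical server under test
        WORKERS = 10                       # concurrent launchers
        REQUESTS = 200                     # total requests in the scenario

        def fetch(_):
            # One request; report success and latency.
            t0 = time.perf_counter()
            try:
                with urllib.request.urlopen(TARGET, timeout=5) as resp:
                    resp.read()
                ok = True
            except OSError:
                ok = False
            return ok, time.perf_counter() - t0

        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=WORKERS) as pool:
            results = list(pool.map(fetch, range(REQUESTS)))
        elapsed = time.perf_counter() - start

        successes = sum(ok for ok, _ in results)
        print(f"{successes}/{REQUESTS} ok, {REQUESTS / elapsed:.1f} requests/s")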

  3. Analyses of the stability and core taxonomic memberships of the human microbiome.

    Science.gov (United States)

    Li, Kelvin; Bihan, Monika; Methé, Barbara A

    2013-01-01

    Analyses of the taxonomic diversity associated with the human microbiome continue to be an area of great importance. The study of the nature and extent of the commonly shared taxa ("core"), versus those less prevalent, establishes a baseline for comparing healthy and diseased groups by quantifying the variation among people, across body habitats, and over time. The National Institutes of Health (NIH)-sponsored Human Microbiome Project (HMP) has provided an unprecedented opportunity to examine and better define what constitutes the taxonomic core within and across body habitats and individuals, through pyrosequencing-based profiling of 16S rRNA gene sequences from oral, skin, distal gut (stool), and vaginal body habitats of over 200 healthy individuals. A two-parameter model is introduced to quantitatively identify the core taxonomic members of each body habitat's microbiota across the healthy cohort. Using only cutoffs for taxonomic ubiquity and abundance, core taxonomic members were identified for each of the 18 body habitats and also for the 4 higher-level body regions. Although many microbes were shared at low abundance, they exhibited a relatively continuous spread in both abundance and ubiquity, as opposed to a more discretized separation. The numbers of core taxa in the body regions are comparatively small and stable, reflecting the relatively high, but conserved, interpersonal variability within the cohort. Core sizes increased across the body regions in the order: vagina, skin, stool, and oral cavity. A number of "minor" oral core taxa were also identified by their majority presence across the cohort, but with relatively low and stable abundances. A method for quantifying the difference between two cohorts was introduced and applied to samples collected on a second visit, revealing that over time the oral, skin, and stool body regions tended to be more transient in their taxonomic structure than the vaginal body region.
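
    The two-parameter core definition above reduces to a pair of cutoffs: an abundance threshold that decides whether a taxon counts as present in a subject, and a ubiquity threshold on the fraction of subjects in which it is present. A minimal sketch with toy data; the cutoff values and abundance matrix are hypothetical, not the HMP's.

        import numpy as np

        rng = np.random.default_rng(0)
        # rows = taxa, columns = subjects; values = relative abundances
        abundance = rng.dirichlet(np.ones(50), size=200).T  # 50 taxa x 200 subjects

        ABUNDANCE_CUT = 1e-3  # "present" means above 0.1% relative abundance
        UBIQUITY_CUT = 0.95   # core means present in at least 95% of subjects

        present = abundance >= ABUNDANCE_CUT
        ubiquity = present.mean(axis=1)            # per-taxon fraction of subjects
        core_taxa = np.flatnonzero(ubiquity >= UBIQUITY_CUT)
        print(f"{core_taxa.size} core taxa out of {abundance.shape[0]}")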

  4. Benchmark Comparison of Dual- and Quad-Core Processor Linux Clusters with Two Global Climate Modeling Workloads

    Science.gov (United States)

    McGalliard, James

    2008-01-01

    This viewgraph presentation details the science and systems environments that the NASA High-End Computing program serves. Included is a discussion of the workload involved in processing for global climate modeling. The Goddard Earth Observing System Model, Version 5 (GEOS-5) is a system of models integrated using the Earth System Modeling Framework (ESMF). The GEOS-5 system was used for the benchmark tests, and the results of the tests are shown and discussed. Tests were also run for the cubed-sphere system; results for these tests are also shown.

  5. Transient analyses for a molten salt fast reactor with optimized core geometry

    Energy Technology Data Exchange (ETDEWEB)

    Li, R., E-mail: rui.li@kit.edu [Institute for Nuclear and Energy Technologies (IKET), Karlsruhe Institute of Technology (KIT), Hermann-von-Helmholtz-Platz 1, D-76344 Eggenstein-Leopoldshafen (Germany); Wang, S.; Rineiski, A.; Zhang, D. [Institute for Nuclear and Energy Technologies (IKET), Karlsruhe Institute of Technology (KIT), Hermann-von-Helmholtz-Platz 1, D-76344 Eggenstein-Leopoldshafen (Germany); Merle-Lucotte, E. [Laboratoire de Physique Subatomique et de Cosmologie – IN2P3 – CNRS/Grenoble INP/UJF, 53, rue des Martyrs, 38026 Grenoble (France)

    2015-10-15

    Highlights: • The MSFR core is analyzed by fully coupled neutronics and thermal-hydraulics codes. • Four types of transients are investigated in detail with the optimized core geometry. • The results demonstrate that the MSFR has a high safety potential. - Abstract: Molten salt reactors (MSRs) have encountered a marked resurgence of interest over the past decades, highlighted by their inclusion as one of the six candidate reactors of the Generation IV advanced nuclear power systems. The present work is carried out in the framework of the European FP-7 project EVOL (Evaluation and Viability Of Liquid fuel fast reactor system). One of the project tasks is to report on safety analyses: calculations of reactor transients using various numerical codes for the molten salt fast reactor (MSFR) under different boundary conditions and assumptions, and for different selected scenarios. Based on the original reference core geometry, an optimized geometry was proposed by Rouch et al. (2014, Ann. Nucl. Energy 64, 449) on thermal-hydraulic design grounds, to avoid a recirculation zone near the blanket which accumulates heat and reaches very high temperatures exceeding the salt boiling point. Using fully coupled neutronics/thermal-hydraulics codes (SIMMER and COUPLE), we re-confirm, step by step, the modifications of the core geometrical shape that lead to a core geometry without the recirculation zone. Several transients, namely Unprotected Loss of Heat Sink (ULOHS), Unprotected Loss of Flow (ULOF), Unprotected Transient Over Power (UTOP), and Fuel Salt Over Cooling (FSOC), are investigated in detail with the optimized core geometry. It is demonstrated that, due to inherent negative feedbacks, an MSFR plant has a high safety potential.

  6. The Core Mouse Response to Infection by Neospora caninum Defined by Gene Set Enrichment Analyses

    Science.gov (United States)

    Ellis, John; Goodswen, Stephen; Kennedy, Paul J; Bush, Stephen

    2012-01-01

    In this study, the BALB/c and Qs mouse responses to infection by the parasite Neospora caninum were investigated in order to identify host response mechanisms. The investigation used gene set (enrichment) analyses of microarray data. GSEA, MANOVA, Romer, subGSE, and SAM-GS were used to study the contrasts of Neospora strain type, mouse type (BALB/c and Qs), and time post-infection (6 hours and 10 days post-infection). The analyses show that the major signal in the core mouse response to infection is from time post-infection and can be defined by the gene ontology terms Protein Kinase Activity, Cell Proliferation, and Transcription Initiation. Several terms linked to signaling, morphogenesis, response, and fat metabolism were also identified. At 10 days post-infection, genes associated with fatty acid metabolism were identified as upregulated in expression. The value of gene set (enrichment) analyses in the analysis of microarray data is discussed. PMID:23012496
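
    Gene set enrichment methods such as GSEA score a gene set against a ranked gene list with a running-sum statistic. The sketch below is a simplified, unweighted variant for illustration only; the ranked list and gene set are hypothetical, and the tools cited above use weighted statistics and permutation-based significance testing.

        def enrichment_score(ranked_genes, gene_set):
            # Walk the ranked list: step up on set members, down otherwise,
            # and record the maximum deviation of the running sum from zero.
            hits = [g in gene_set for g in ranked_genes]
            n_hit = sum(hits)
            n_miss = len(ranked_genes) - n_hit
            up, down = 1.0 / n_hit, 1.0 / n_miss
            running = extreme = 0.0
            for h in hits:
                running += up if h else -down
                if abs(running) > abs(extreme):
                    extreme = running
            return extreme

        ranked = ["Pkc1", "Cdk2", "Il6", "Actb", "Gapdh", "Tlr4"]  # hypothetical
        print(enrichment_score(ranked, {"Pkc1", "Il6", "Tlr4"}))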

  7. Comparative Genomic and Phylogenomic Analyses Reveal a Conserved Core Genome Shared by Estuarine and Oceanic Cyanopodoviruses

    Science.gov (United States)

    Huang, Sijun; Zhang, Si; Jiao, Nianzhi; Chen, Feng

    2015-01-01

    Podoviruses are among the major viral groups that infect the marine picocyanobacteria Prochlorococcus and Synechococcus. Here, we report the genome sequences of five Synechococcus podoviruses isolated from the estuarine environment, and perform comparative genomic and phylogenomic analyses based on a total of 20 cyanopodovirus genomes. The genomes of all the known marine cyanopodoviruses are highly syntenic. A pan-genome of 349 clustered orthologous groups was determined, among which 15 were core genes. These core genes make up nearly half of each genome in length, reflecting the high level of genome conservation in this cyanophage type. The whole-genome phylogenies based on concatenated core genes and on gene content were highly consistent and confirmed the separation of two discrete marine cyanopodovirus clusters, MPP-A and MPP-B. The genomes within cluster MPP-B grouped into subclusters mainly corresponding to Prochlorococcus or Synechococcus host types. Auxiliary metabolic genes tend to occur in specific phylogenetic groups of these cyanopodoviruses. All the MPP-B phages analyzed here encode the photosynthesis gene psbA, which is absent from all the MPP-A genomes thus far. Interestingly, all the MPP-B and two MPP-A Synechococcus podoviruses encode the thymidylate synthase gene thyX, while at the same genome locus all the MPP-B Prochlorococcus podoviruses encode the transaldolase gene talC. Both genes are hypothesized to have the potential to facilitate the biosynthesis of deoxynucleotides for phage replication. Inheritance of specific functional genes could be important to the evolution and ecological fitness of certain cyanophage genotypes. Our analyses demonstrate that cyanopodoviruses of estuarine and oceanic origins share a conserved core genome and suggest that accessory genes may be related to environmental adaptation. PMID:26569403
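
    Operationally, the pan-genome is the union, and the core genome the intersection, of the ortholog-cluster memberships of the genomes being compared. A toy Python sketch; the cluster identifiers are hypothetical (only psbA, thyX, and talC are named in the abstract).

        # genome -> set of orthologous-group IDs detected in that genome
        genomes = {
            "phage_A": {"cog01", "cog02", "cog03", "psbA"},
            "phage_B": {"cog01", "cog02", "cog04", "thyX"},
            "phage_C": {"cog01", "cog02", "cog05", "talC"},
        }

        pan_genome = set.union(*genomes.values())          # seen in any genome
        core_genome = set.intersection(*genomes.values())  # seen in every genome
        print(f"pan: {len(pan_genome)} groups, core: {sorted(core_genome)}")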

  8. Comparative Genomic and Phylogenomic Analyses Reveal a Conserved Core Genome Shared by Estuarine and Oceanic Cyanopodoviruses.

    Directory of Open Access Journals (Sweden)

    Sijun Huang

    Podoviruses are among the major viral groups that infect the marine picocyanobacteria Prochlorococcus and Synechococcus. Here, we report the genome sequences of five Synechococcus podoviruses isolated from the estuarine environment, and perform comparative genomic and phylogenomic analyses based on a total of 20 cyanopodovirus genomes. The genomes of all the known marine cyanopodoviruses are highly syntenic. A pan-genome of 349 clustered orthologous groups was determined, among which 15 were core genes. These core genes make up nearly half of each genome in length, reflecting the high level of genome conservation in this cyanophage type. The whole-genome phylogenies based on concatenated core genes and on gene content were highly consistent and confirmed the separation of two discrete marine cyanopodovirus clusters, MPP-A and MPP-B. The genomes within cluster MPP-B grouped into subclusters mainly corresponding to Prochlorococcus or Synechococcus host types. Auxiliary metabolic genes tend to occur in specific phylogenetic groups of these cyanopodoviruses. All the MPP-B phages analyzed here encode the photosynthesis gene psbA, which is absent from all the MPP-A genomes thus far. Interestingly, all the MPP-B and two MPP-A Synechococcus podoviruses encode the thymidylate synthase gene thyX, while at the same genome locus all the MPP-B Prochlorococcus podoviruses encode the transaldolase gene talC. Both genes are hypothesized to have the potential to facilitate the biosynthesis of deoxynucleotides for phage replication. Inheritance of specific functional genes could be important to the evolution and ecological fitness of certain cyanophage genotypes. Our analyses demonstrate that cyanopodoviruses of estuarine and oceanic origins share a conserved core genome and suggest that accessory genes may be related to environmental adaptation.

  9. Scientific Drilling of Impact Craters - Well Logging and Core Analyses Using Magnetic Methods (Invited)

    Science.gov (United States)

    Fucugauchi, J. U.; Perez-Cruz, L. L.; Velasco-Villarreal, M.

    2013-12-01

    Drilling projects of impact structures provide data on the structure and stratigraphy of target, impact, and post-impact lithologies, providing insight into impact dynamics and cratering. Studies have successfully included magnetic well logging and analyses of core and cuttings, directed at characterizing the subsurface stratigraphy and structure at depth. There are 170-180 impact craters documented in the terrestrial record, a small proportion compared to expectations derived from what is observed on the Moon, Mars, and other bodies of the solar system. The internal 3-D deep structure of craters, critical for understanding impacts and crater formation, can best be studied by geophysics and drilling. On Earth, few craters have yet been investigated by drilling. Craters have been drilled as part of industry surveys and/or academic projects, including notably Chicxulub, Sudbury, Ries, Vredefort, Manson, and many other craters. As part of the Continental ICDP program, drilling projects have been conducted on the Chicxulub, Bosumtwi, Chesapeake, Ries, and El'gygytgyn craters. Inclusion of continuous core recovery expanded the range of paleomagnetic and rock magnetic applications, with direct core laboratory measurements, which are part of the tools available in the ocean and continental drilling programs. Drilling studies are here briefly reviewed, with emphasis on the Chicxulub crater formed by an asteroid impact 66 Ma ago at the Cretaceous/Paleogene boundary. The Chicxulub crater has no surface expression, being covered by a kilometer of Cenozoic sediments, thus making drilling an essential tool. As part of our studies we have drilled eleven wells with continuous core recovery. Magnetic susceptibility logging, magnetostratigraphic, rock magnetic, and fabric studies have been carried out, and results used for lateral correlation, dating, formation evaluation, azimuthal core orientation, and physical property contrasts. Contributions of magnetic studies on impact

  10. Comparative Neutronics Analysis of DIMPLE S06 Criticality Benchmark with Contemporary Reactor Core Analysis Computer Code Systems

    Directory of Open Access Journals (Sweden)

    Wonkyeong Kim

    2015-01-01

    A high-leakage core is known to be a challenging problem not only for a two-step homogenization approach but also for a direct heterogeneous approach. In this paper the DIMPLE S06 core, which is a small high-leakage core, has been analyzed both by a direct heterogeneous modeling approach and by a two-step homogenization modeling approach, using contemporary code systems developed for reactor core analysis. The focus of this work is a comprehensive comparative analysis of the conventional approaches and codes on a small core design, the DIMPLE S06 critical experiment. The calculation procedures for the two approaches are presented explicitly in this paper. The comprehensive comparative analysis covers the key neutronics parameters: the multiplication factor and the assembly power distribution. Comparison of two-group homogenized cross sections from the lattice physics codes shows that the generated transport cross sections differ significantly depending on the transport approximation used to treat the anisotropic scattering effect. The necessity of assembly discontinuity factors (ADFs) to correct the discontinuity at assembly interfaces is clearly demonstrated by the flux distributions and the results of the two-step approach. Finally, the two approaches show consistent results for all codes, while comparison with the reference solution generated by MCNP shows significant error except for the other Monte Carlo code, SERPENT2.
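
    In the first step of the two-step approach, the lattice code's heterogeneous solution is collapsed into assembly-homogenized group constants, conventionally by flux-volume weighting. A minimal two-group sketch in Python; all region volumes, fluxes, and cross sections are hypothetical numbers chosen only to show the arithmetic.

        import numpy as np

        volume = np.array([1.0, 2.0, 1.5])      # region volumes
        flux = np.array([[1.2, 0.8],            # regions x groups
                         [1.0, 0.9],
                         [0.7, 1.1]])
        sigma = np.array([[0.010, 0.080],       # region cross sections per group
                          [0.012, 0.095],
                          [0.009, 0.070]])

        # sigma_hom[g] = sum_r V_r * phi[r,g] * sigma[r,g] / sum_r V_r * phi[r,g]
        weights = volume[:, None] * flux
        sigma_hom = (weights * sigma).sum(axis=0) / weights.sum(axis=0)
        print(sigma_hom)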

  11. Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation

    Energy Technology Data Exchange (ETDEWEB)

    Pecchia, M.; D'Auria, F. [San Piero A Grado Nuclear Research Group GRNSPG, Univ. of Pisa, via Diotisalvi, 2, 56122 - Pisa (Italy); Mazzantini, O. [Nucleoelectrica Argentina Sociedad Anonima NA-SA, Buenos Aires (Argentina)

    2012-07-01

    Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore, core models of the Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of the obliquely inserted control rods on the neutron flux, in order to validate the RELAP5-3D©/NESTLE three-dimensional neutron kinetics coupled thermal-hydraulic model applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the Atucha-2 FSAR. (authors)

  12. Non-Target Analyses of organic compounds in ice cores using HPLC-ESI-UHRMS

    Science.gov (United States)

    Zuth, Christoph; Müller-Tautges, Christina; Eichler, Anja; Schwikowski, Margit; Hoffmann, Thorsten

    2015-04-01

    To study global climatic and environmental changes, it is necessary to know the environmental and especially the atmospheric conditions of the past. By analysing climate archives such as ice cores, unique environmental information can be obtained. In contrast to the well-established analysis of inorganic species in ice cores, organic compounds have been analysed in ice cores to a much smaller extent. Because of current analytical limitations, it has become commonplace to focus on 'total organic carbon' measurements or on specific classes of organic molecules, as no analytical methods exist that can provide a broad characterization of the organic material present [1]. On the one hand, it is important to focus on already known atmospheric markers in ice cores and to quantify them where possible, in order to compare them to current conditions. On the other hand, a wealth of information is lost when only a small fraction of the organic material is examined. However, recent developments in mass spectrometry with respect to higher mass resolution and mass accuracy enable a new approach to the analysis of complex environmental samples. The qualitative characterization of the complex mixture of water-soluble organic carbon (WSOC) in the ice using high-resolution mass spectrometry allows novel insights concerning the composition and possible sources of aerosol-derived WSOC deposited at glacier sites. By performing a non-target analysis of an ice core from the Swiss Alps, using prior enrichment by solid-phase extraction (SPE) and high performance liquid chromatography coupled to electrospray ionization and ultra-high resolution mass spectrometry (HPLC-ESI-UHRMS), 475 elemental formulas distributed over 659 different peaks were detected. The elemental formulas were classified according to their elemental composition into CHO-, CHON-, CHOS-, CHONS-containing compounds and 'others'. Several methods for the analysis of complex data sets of high resolution
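
    Non-target assignment of elemental formulas from accurate mass can be illustrated as a brute-force search over CHNOS compositions within a ppm mass tolerance. This Python sketch is a generic illustration, not the authors' workflow: the element ranges and tolerance are hypothetical, and real pipelines add isotope-pattern and chemical-plausibility filters.

        from itertools import product

        # monoisotopic masses (u)
        MASS = {"C": 12.0, "H": 1.007825, "N": 14.003074,
                "O": 15.994915, "S": 31.972071}

        def assign_formulas(neutral_mass, tol_ppm=2.0,
                            max_atoms=(30, 60, 5, 20, 2)):  # C, H, N, O, S caps
            tol = neutral_mass * tol_ppm * 1e-6
            hits = []
            for c, h, n, o, s in product(*(range(m + 1) for m in max_atoms)):
                m = (c * MASS["C"] + h * MASS["H"] + n * MASS["N"]
                     + o * MASS["O"] + s * MASS["S"])
                if abs(m - neutral_mass) <= tol:
                    hits.append((f"C{c}H{h}N{n}O{o}S{s}", round(m, 6)))
            return hits

        # caffeine's monoisotopic mass, used purely as a test value
        print(assign_formulas(194.080376))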

  13. Special core analyses and relative permeability measurement on Almond formation reservoir rocks

    Energy Technology Data Exchange (ETDEWEB)

    Maloney, D.; Doggett, K.; Brinkmeyer, A.

    1993-02-01

    This report describes the results from special core analyses and relative permeability measurements conducted on samples of rock from the Almond Formation in the Greater Green River Basin of southwestern Wyoming. The core was from Arch Unit Well 121 of Patrick Draw field. Samples were taken from the 4,950 to 4,965 ft depth interval. Thin section evaluation, X-ray diffraction, routine permeability and porosity, capillary pressure, and wettability tests were performed to characterize the samples. Fluid flow capacity characteristics were measured during two-phase unsteady- and steady-state and three-phase steady-state relative permeability tests. Test results are presented in tables and graphs. Relative permeability results are compared with those of a 260-mD, fired Berea sandstone sample which was previously subjected to similar tests. Brine relative permeabilities were similar for the two samples, whereas oil and gas relative permeabilities for the Almond formation rock were higher at equivalent saturation conditions compared to Berea results. Most of the tests described in this report were conducted at 74 °F laboratory temperature. Additional tests are planned at 150 °F. Equipment and procedural modifications to perform the elevated temperature tests are described.

  14. Special core analyses and relative permeability measurement on Almond formation reservoir rocks

    Energy Technology Data Exchange (ETDEWEB)

    Maloney, D.; Doggett, K.; Brinkmeyer, A.

    1993-02-01

    This report describes the results from special core analyses and relative permeability measurements conducted on samples of rock from the Almond Formation in the Greater Green River Basin of southwestern Wyoming. The core was from Arch Unit Well 121 of Patrick Draw field. Samples were taken from the 4,950 to 4,965 ft depth interval. Thin section evaluation, X-ray diffraction, routine permeability and porosity, capillary pressure, and wettability tests were performed to characterize the samples. Fluid flow capacity characteristics were measured during two-phase unsteady- and steady-state and three-phase steady-state relative permeability tests. Test results are presented in tables and graphs. Relative permeability results are compared with those of a 260-mD, fired Berea sandstone sample which was previously subjected to similar tests. Brine relative permeabilities were similar for the two samples, whereas oil and gas relative permeabilities for the Almond formation rock were higher at equivalent saturation conditions compared to Berea results. Most of the tests described in this report were conducted at 74 °F laboratory temperature. Additional tests are planned at 150 °F. Equipment and procedural modifications to perform the elevated temperature tests are described.

  15. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    The term benchmarking is encountered in the implementation of total quality management (TQM, termed holistic quality management in Indonesian), because benchmarking is a tool for finding ideas and learning from other libraries. Benchmarking is a process of systematic and continuous measurement: comparing an organization's business processes against those of other organizations in order to obtain information that can help the organization improve its performance.

  16. Financial Benchmarking

    OpenAIRE

    2012-01-01

    This bachelor's thesis is focused on financial benchmarking of TULIPA PRAHA s.r.o. The aim of this work is to evaluate the financial situation of the company, identify its strengths and weaknesses, and find out how efficient the company's performance is in comparison with top companies in the same field, using the INFA benchmarking diagnostic system of financial indicators. The theoretical part includes the characteristics of financial analysis, which financial benchmarking is based on a...

  17. Benchmark Results and Theoretical Treatments for Valence-to-Core X-ray Emission Spectroscopy in Transition Metal Compounds

    Energy Technology Data Exchange (ETDEWEB)

    Mortensen, Devon R.; Seidler, Gerald T.; Kas, Joshua J.; Govind, Niranjan; Schwartz, Craig; Pemmaraju, Das; Prendergast, David

    2017-09-20

    We report measurements of the valence-to-core (VTC) region of the K-shell X-ray emission spectra from several Zn and Fe inorganic compounds, and their critical comparison with several existing theoretical treatments. We find generally good agreement between the respective theories and experiment, and in particular find an important admixture of dipole and quadrupole character for Zn materials that is much weaker in Fe-based systems. These results on materials whose simple crystal structures should not, a priori, pose deep challenges to theory will prove useful in guiding the further development of DFT and time-dependent DFT methods for VTC-XES predictions and their comparison to experiment.

  18. Subshell fitting of relativistic atomic core electron densities for use in QTAIM analyses of ECP-based wave functions.

    Science.gov (United States)

    Keith, Todd A; Frisch, Michael J

    2011-11-17

    Scalar-relativistic, all-electron density functional theory (DFT) calculations were done for free, neutral atoms of all elements of the periodic table using the universal Gaussian basis set. Each core, closed-subshell contribution to a total atomic electron density distribution was separately fitted to a spherical electron density function: a linear combination of s-type Gaussian functions. The resulting core subshell electron densities are useful for systematically and compactly approximating total core electron densities of atoms in molecules, for any atomic core defined in terms of closed subshells. When used to augment the electron density from a wave function based on a calculation using effective core potentials (ECPs) in the Hamiltonian, the atomic core electron densities are sufficient to restore the otherwise-absent electron density maxima at the nuclear positions and eliminate spurious critical points in the neighborhood of the atom, thus enabling quantum theory of atoms in molecules (QTAIM) analyses to be done in the neighborhoods of atoms for which ECPs were used. Comparison of results from QTAIM analyses with all-electron, relativistic and nonrelativistic molecular wave functions validates the use of the atomic core electron densities for augmenting electron densities from ECP-based wave functions. For an atom in a molecule for which a small-core or medium-core ECP is used, simply representing the core using a simplistic, tightly localized electron density function is actually sufficient to obtain a correct electron density topology and perform QTAIM analyses to obtain at least semiquantitatively meaningful results, but this is often not true when a large-core ECP is used. Comparison of QTAIM results from augmenting ECP-based molecular wave functions with the realistic atomic core electron densities presented here versus augmenting with the limiting case of tight core densities may be useful for diagnosing the reliability of large-core ECP models in
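
    The fitting form described above can be written compactly. A sketch in LaTeX notation, assuming a least-squares objective with a core-charge normalization constraint (the authors' exact objective may differ):

        \[
          \tilde{\rho}_{\mathrm{core}}(r) \;=\; \sum_{i=1}^{N} c_i\, e^{-\alpha_i r^{2}},
          \qquad
          \min_{\{c_i,\alpha_i\}} \int_{0}^{\infty}
            \bigl[\rho_{\mathrm{core}}(r) - \tilde{\rho}_{\mathrm{core}}(r)\bigr]^{2}\,
            4\pi r^{2}\, dr
          \quad \text{s.t.} \quad
          \int_{0}^{\infty} \tilde{\rho}_{\mathrm{core}}(r)\, 4\pi r^{2}\, dr \;=\; N_{\mathrm{core}},
        \]

    where $N_{\mathrm{core}}$ is the number of electrons in the closed subshells being fitted.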

  19. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...

  20. Reservoir condition special core analyses and relative permeability measurements on Almond formation and Fontainebleu sandstone rocks

    Energy Technology Data Exchange (ETDEWEB)

    Maloney, D.

    1993-11-01

    This report describes the results from special core analyses and relative permeability measurements conducted on Almond formation and Fontainebleu sandstone plugs. Almond formation plug tests were performed to evaluate multiphase, steady-state, reservoir-condition relative permeability measurement techniques and to examine the effect of temperature on relative permeability characteristics. Some conclusions from this project are as follows: An increase in temperature appeared to cause an increase in brine relative permeability results for an Almond formation plug compared to room temperature results. The plug was tested using steady-state oil/brine methods. The oil was a low-viscosity, isoparaffinic refined oil. Fontainebleu sandstone rock and fluid flow characteristics were measured and are reported. Most of the relative permeability versus saturation results could be represented by one of two trends -- either a k_rx versus S_x or a k_rx versus S_y trend, where x and y are fluid phases (gas, oil, or brine). An oil/surfactant-brine steady-state relative permeability test was performed to examine changes in oil/brine relative permeability characteristics arising from changes in fluid IFTs. It appeared that, while low interfacial tension increased the aqueous phase relative permeability, it had no effect on the oil relative permeability. The BOAST simulator was modified for coreflood simulation. The simulator was useful for examining effects of variations in relative permeability and capillary pressure functions. Coreflood production monitoring and separator interface level measurement techniques were developed using X-ray absorption, weight methods, and RF admittance technologies. The three types of separators should be useful for routine and specialized core analysis applications.
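
    Relative permeability versus saturation trends of the kind reported above are commonly summarized with Brooks-Corey-type power laws. The Python sketch below is a generic parameterization for illustration, not the authors' fitted curves; all endpoint saturations and exponents are hypothetical.

        def corey_krw(sw, swc=0.2, sor=0.25, krw_max=0.3, nw=3.0):
            # Water relative permeability on a normalized mobile-saturation axis.
            swn = (sw - swc) / (1.0 - swc - sor)
            swn = min(max(swn, 0.0), 1.0)
            return krw_max * swn ** nw

        def corey_kro(sw, swc=0.2, sor=0.25, kro_max=0.8, no=2.0):
            # Oil relative permeability on the same normalized axis.
            son = (1.0 - sw - sor) / (1.0 - swc - sor)
            son = min(max(son, 0.0), 1.0)
            return kro_max * son ** no

        for sw in (0.2, 0.4, 0.6, 0.75):
            print(f"Sw={sw:.2f}  krw={corey_krw(sw):.3f}  kro={corey_kro(sw):.3f}")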

  1. Core analyses for selected samples from the Culebra Dolomite at the Waste Isolation Pilot Plant site

    Energy Technology Data Exchange (ETDEWEB)

    Kelley, V.A.; Saulnier, G.J. Jr. (INTERA, Inc., Austin, TX (USA))

    1990-11-01

    Two groups of core samples from the Culebra Dolomite Member of the Rustler Formation at and near the Waste Isolation Pilot Plant were analyzed to provide estimates of hydrologic parameters for use in flow-and-transport modeling. Whole-core and core-plug samples were analyzed by helium porosimetry, resaturation and porosimetry, mercury-intrusion porosimetry, electrical-resistivity techniques, and gas-permeability methods. 33 refs., 25 figs., 10 tabs.

  2. Core analyses for selected samples from the Culebra Dolomite at the Waste Isolation Pilot Plant site

    Energy Technology Data Exchange (ETDEWEB)

    Kelley, V.A.; Saulnier, G.J. Jr. (INTERA, Inc., Austin, TX (USA))

    1990-11-01

    Two groups of core samples from the Culebra Dolomite Member of the Rustler Formation at and near the Waste Isolation Pilot Plant were analyzed to provide estimates of hydrologic parameters for use in flow-and-transport modeling. Whole-core and core-plug samples were analyzed by helium porosimetry, resaturation and porosimetry, mercury-intrusion porosimetry, electrical-resistivity techniques, and gas-permeability methods. 33 refs., 25 figs., 10 tabs.

  3. Ecosystem history of southern and central Biscayne Bay; summary report on sediment core analyses

    Science.gov (United States)

    Wingard, G.L.; Cronin, T. M.; Dwyer, G.S.; Ishman, S.E.; Willard, D.A.; Holmes, C.W.; Bernhardt, C.E.; Williams, C.P.; Marot, M.E.; Murray, J.B.; Stamm, R.G.; Murray, J.H.; Budet, C.

    2003-01-01

    During the last century, the environs of Biscayne Bay have been greatly affected by anthropogenic alteration through urbanization of the Miami/Dade County area. The sources, timing, delivery, and quality of freshwater flow into the Bay have been changed by construction of a complex canal system that controls movement of water throughout south Florida. Changes in shoreline and sub-aquatic vegetation and marine organisms have been observed, and changes in water delivery are believed to be the cause. Current restoration goals attempt to restore the natural flow of fresh water into Biscayne and Florida Bays and to restore the natural fauna and flora, but first we need to determine pre-alteration baseline conditions in order to establish targets and performance measures for restoration. This research is part of an ongoing study designed to address the needs of the Biscayne Bay Coastal Wetlands (BBCW) Project of the Comprehensive Everglades Restoration Plan (CERP). By establishing the natural patterns of temporal change in salinity, water quality, vegetation, and benthic fauna in Biscayne Bay and the nearby wetlands over the last 100-500 years, the USGS, in collaboration with our partners, will provide the data necessary to set realistic targets to achieve the BBCW Project goals. Six cores from three sites in Biscayne Bay were collected in April 2002 for multidisciplinary, multi-proxy analyses. This report details the results of these analyses and compares the 2002 cores to cores collected in 1997. The significant findings to date are: the salinity of central Biscayne Bay has become increasingly marine and increasingly stable over the last 100 years; at No Name Bank, prior to approximately 1915, the inter-decadal and decadal salinity fluctuations appear to have been greater than after 1915, when salinities stabilized at that site; continental shelf/open marine influence on the sites has increased during the 20th century; there is no indication

  4. Whole-rock analyses of core samples from the 1988 drilling of Kilauea Iki lava lake, Hawaii

    Science.gov (United States)

    Helz, Rosalind Tuthill; Taggart, Joseph E.

    2010-01-01

    This report presents and evaluates 64 major-element analyses of previously unanalyzed Kilauea Iki drill core, plus three samples from the 1959 and 1960 eruptions of Kilauea, obtained by X-ray fluorescence (XRF) analysis during the period 1992 to 1995. All earlier major-element analyses of Kilauea Iki core, obtained by classical (gravimetric) analysis, were reported and evaluated in Helz and others (1994). In order to assess how well the newer data compare with this earlier suite of analyses, a subset of 24 samples, which had been analyzed by classical analysis, was reanalyzed using the XRF technique; those results are presented and evaluated in this report also. The XRF analyses have not been published previously. This report also provides an overview of how the chemical variations observed in these new data fit in with the chemical zonation patterns and petrologic processes inferred in earlier studies of Kilauea Iki.

  5. A STRONGLY COUPLED REACTOR CORE ISOLATION COOLING SYSTEM MODEL FOR EXTENDED STATION BLACK-OUT ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Haihua [Idaho National Laboratory; Zhang, Hongbin [Idaho National Laboratory; Zou, Ling [Idaho National Laboratory; Martineau, Richard Charles [Idaho National Laboratory

    2015-03-01

    The reactor core isolation cooling (RCIC) system in a boiling water reactor (BWR) provides makeup cooling water to the reactor pressure vessel (RPV) when the main steam lines are isolated and the normal supply of water to the reactor vessel is lost. The RCIC system operates independently of AC power, service air, or external cooling water systems. The only required external energy source is the battery that maintains the logic circuits controlling the opening and/or closure of valves in the RCIC system, in order to control the RPV water level by shutting down the RCIC pump to avoid overfilling the RPV and flooding the steam line to the RCIC turbine. It has generally been assumed in almost all existing station blackout (SBO) accident analyses that loss of DC power would result in overfilling the steam line and allowing liquid water to flow into the RCIC turbine, where it is assumed that the turbine would then be disabled. This behavior, however, was not observed in the Fukushima Daiichi accidents, where the Unit 2 RCIC functioned without DC power for nearly three days. Therefore, more detailed mechanistic models for RCIC system components are needed to understand the extended SBO for BWRs. As part of the effort to develop the next-generation reactor system safety analysis code RELAP-7, we have developed a strongly coupled RCIC system model, which consists of a turbine model, a pump model, a check valve model, a wet well model, and their coupling models. Unlike traditional SBO simulations, where mass flow rates are typically given in the input file through time-dependent functions, the actual mass flow rates through the turbine and pump loops in our model are dynamically calculated according to conservation laws and turbine/pump operation curves. A simplified SBO demonstration RELAP-7 model with this RCIC model has been successfully developed. The demonstration model includes the major components for the primary system of a BWR, as well as the safety
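
    The key modeling point above is that flows are computed from conservation laws and pump/turbine operating curves rather than prescribed as input tables. As a minimal Python illustration of that idea (not the RELAP-7 model; all curve coefficients are hypothetical), one can solve for the quasi-steady operating point where the pump head balances the loop losses:

        def pump_head(q):
            # Hypothetical quadratic pump curve: head (m) versus flow (kg/s).
            return 120.0 - 0.8 * q ** 2

        def loop_losses(q):
            # Hypothetical loop resistance: static head plus friction ~ q^2.
            return 15.0 + 0.5 * q ** 2

        def operating_flow(lo=0.0, hi=20.0, tol=1e-6):
            # Bisection on pump_head(q) - loop_losses(q) = 0.
            f = lambda q: pump_head(q) - loop_losses(q)
            assert f(lo) > 0.0 > f(hi), "bracket must straddle the balance point"
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
            return 0.5 * (lo + hi)

        print(f"quasi-steady loop flow ~ {operating_flow():.2f} kg/s")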

  6. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional distance functions. The frontier is given by an explicit quantile, e.g. "the best 90 %". Using the explanatory model of the inefficiency, the user can adjust the frontiers by submitting state variables that influence the inefficiency. An efficiency study of Danish dairy farms is implemented in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency.

  7. Core burnup calculation and accidents analyses of a pressurized water reactor partially loaded with rock-like oxide fuel

    Science.gov (United States)

    Akie, H.; Sugo, Y.; Okawa, R.

    2003-06-01

    A rock-like oxide (ROX) fuel - light water reactor (LWR) burning system has been studied for efficient plutonium transmutation. To improve the small negative reactivity coefficients and severe transient behaviors of ROX-fueled LWRs, a core partially loaded with ROX fuel assemblies alongside conventional UO2 assemblies was considered. As a result, although the reactivity coefficients could be improved, the power peaking tends to be large in this heterogeneous core configuration. The reactivity initiated accident (RIA) and loss of coolant accident (LOCA) behaviors were not sufficiently improved. In order to reduce the power peaking, the fuel composition and the assembly design of the ROX fuel were modified. First, erbium burnable poison was added as Er2O3 in the ROX fuel to reduce the burnup reactivity swing. Then pin-by-pin Pu enrichment and Er content distributions within the ROX fuel assembly were considered. In addition, the Er content distribution was also considered in the axial direction of the ROX fuel pin. With these modifications, a power peaking factor even lower than that in a conventional UO2-fueled core can be obtained. The RIA and LOCA analyses of the modified core have also shown transient behaviors of the ROX partial loading core comparable to those of the UO2 core.

  8. Preliminary Benchmark Evaluation of Japan’s High Temperature Engineering Test Reactor

    Energy Technology Data Exchange (ETDEWEB)

    John Darrell Bess

    2009-05-01

    A benchmark model of the initial fully-loaded start-up core critical of Japan’s High Temperature Engineering Test Reactor (HTTR) was developed to provide data in support of ongoing validation efforts of the Very High Temperature Reactor Program using publicly available resources. The HTTR is a 30 MWt test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. The benchmark was modeled using MCNP5 with various neutron cross-section libraries. An uncertainty evaluation was performed by perturbing the benchmark model and comparing the resultant eigenvalues. The calculated eigenvalues are approximately 2-3% greater than expected with an uncertainty of ±0.70%. The primary sources of uncertainty are the impurities in the core and reflector graphite. The release of additional HTTR data could effectively reduce the benchmark model uncertainties and bias. Sensitivity of the results to the graphite impurity content might imply that further evaluation of the graphite content could significantly improve calculated results. Proper characterization of graphite for future Next Generation Nuclear Power reactor designs will improve computational modeling capabilities. Current benchmarking activities include evaluation of the annular HTTR cores and assessment of the remaining start-up core physics experiments, including reactivity effects, reactivity coefficient, and reaction-rate distribution measurements. Long term benchmarking goals might include analyses of the hot zero-power critical, rise-to-power tests, and other irradiation, safety, and technical evaluations performed with the HTTR.
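
    The uncertainty evaluation described above perturbs the model one parameter at a time and compares the resulting eigenvalues. Assuming the contributions are independent, they combine in quadrature; a Python sketch with hypothetical perturbation results (not the HTTR evaluation's actual numbers):

        import math

        k_ref = 1.00000
        delta_k = {  # one-sigma eigenvalue shifts from individual perturbations
            "graphite impurity (core)": 0.0055,
            "graphite impurity (reflector)": 0.0035,
            "fuel enrichment": 0.0010,
            "dimensions": 0.0008,
        }

        # independent contributions combine as a root sum of squares
        total = math.sqrt(sum(dk ** 2 for dk in delta_k.values()))
        print(f"combined benchmark uncertainty: +/-{100.0 * total / k_ref:.2f}% in keff")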

  9. Continuous ice core melter system with discrete sampling for major ion, trace element and stable isotope analyses.

    Science.gov (United States)

    Osterberg, Erich C; Handley, Michael J; Sneed, Sharon B; Mayewski, Paul A; Kreutz, Karl J

    2006-05-15

    We present a novel ice/firn core melter system that uses fraction collectors to collect discrete, high-resolution meltwater samples for analyses of major ions by ion chromatography (IC), 32 trace elements by inductively coupled plasma sector-field mass spectrometry (ICP-SMS), and stable oxygen and hydrogen isotopes by isotope ratio mass spectrometry (IRMS). The new continuous melting with discrete sampling (CMDS) system preserves an archive of each sample, reduces the problem of incomplete particle dissolution in ICP-SMS samples, and provides more precise trace element data than previous ice melter models by using longer ICP-SMS scan times and washing the instrument between samples. CMDS detection limits are similar to or lower than those published for ice melter systems coupled directly to analytical instruments and are suitable for analyses of polar and mid-to-low-latitude ice cores. Analysis of total calcium and sulfur by ICP-SMS and of calcium ion, sulfate, and methanesulfonate by IC from the Mt. Logan Prospector-Russell Col ice core confirms data accuracy and coregistration of the split fractions from each sample. The reproducibility of all data acquired by the CMDS system is confirmed by replicate analyses of parallel sections of the GISP2 D ice core.
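
    Method detection limits derived from procedural blanks are conventionally taken as a small multiple of the blank scatter. A Python sketch assuming the common 3-sigma convention; the blank readings are hypothetical.

        import statistics

        # hypothetical ICP-SMS readings (ppt) of de-ionized water blanks run
        # through the entire melter and fraction-collector system
        blanks_ppt = [0.42, 0.51, 0.38, 0.47, 0.44, 0.40, 0.49]

        mdl = 3.0 * statistics.stdev(blanks_ppt)  # MDL = 3 x blank std. dev.
        print(f"method detection limit ~ {mdl:.2f} ppt")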

  10. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as its operating system, and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  11. HS06 Benchmark for an ARM Server

    CERN Document Server

    Kluth, Stefan

    2013-01-01

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as its operating system, and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  12. Re-evaluating the 1257 AD eruption using annually-resolved ice core chemical analyses

    Science.gov (United States)

    Simonsen, M. F.; Kjær, H. A.; Vallelonga, P. T.; Neff, P. D.; Bertler, N. A. N.; Svensson, A.; Seierstad, I.; Albert, P. G.; Bourne, A. J.; Kurbatov, A.

    2014-12-01

    The source of the 1257 AD volcanic eruption has recently been proposed to be Samalas in Indonesia. The eruption was one of the largest of the Holocene and has been recorded in ice cores in both hemispheres through sulfate and acidity measurements. Estimates of its sulfate load vary from 2 to 8 times that of Tambora. This is also the only volcano for which tephras have been assigned in ice cores from both Antarctica and Greenland (GISP2). Because of this unique assignment of a bipolar tephra layer in ice cores, the origins of the sulfate and tephras have been disputed, and it has been proposed that at least one of the tephras was due to an additional volcanic eruption local to either Greenland or Antarctica. We have re-evaluated the acid and tephra deposition from the 1257 AD volcano in two ice cores, one from Greenland (NGRIP, 75.1° N, 42.3° W) and one from Antarctica (RICE, Roosevelt Island, 79.36° S, 161.71° W). Annually-resolved continuous flow analysis (CFA) measurements determined relevant parameters such as meltwater conductivity, sulfate, and acidity. The acidity peak at RICE (~20 µM H+) is approximately double that found at NGRIP (10 µM H+). The only visible tephra layer found in the corresponding depth range was deposited at 1250 AD, 9 years before the acidity peak. The high resolution of the data offers a precise evaluation of the delay between the deposition of tephra and acid (sulfate) in each hemisphere. The comparison between poles allows some evaluation of the spread of deposition from the volcanic eruption.

  13. The Core Competitiveness of the Wisdom Tourism Food Analyses Based on the Internet of Things

    Directory of Open Access Journals (Sweden)

    Qingyun Chen

    2015-07-01

    As part of the world's largest industry, the tourism food industry has developed rapidly in recent years. Moving from "digital food" to "wisdom food" brings new opportunities and challenges for the sustainable development of the industry. However, the informational development of this industry still lags behind: the exploitation and utilization of information resources lack an effective platform as well as a benign circulation and interaction mechanism, so combining the Internet of Things (IoT) with wisdom tourism food has become a new tendency. This study constructs a framework for the core competitiveness of tourism food based on the IoT. It then proposes a measurement model for tourism food core competitiveness and tests the model through exploratory and confirmatory factor analysis. The indicators demonstrate that the model is effective and show that resource protection ability, operation management ability, service ability, and tourism food service chain integration ability all influence the core competitiveness of tourism food.

  14. Preliminary Thermal Hydraulic Analyses of the Conceptual Core Models with Tubular Type Fuel Assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Chae, Hee Taek; Park, Jong Hark; Park, Cheol

    2006-11-15

    A new research reactor (AHR, Advanced HANARO Reactor) based on the HANARO has been under conceptual development for the future needs of research reactors. A tubular type fuel was considered as one of the fuel options for the AHR. A tubular type fuel assembly has several curved fuel plates arranged with a constant small gap to build up cooling channels, a geometry very similar to an annular pipe with many layers. This report presents a preliminary analysis of the thermal hydraulic characteristics and safety margins for three conceptual core models using tubular fuel assemblies. Four design criteria -- the fuel temperature, the ONB (Onset of Nucleate Boiling) margin, the minimum DNBR (Departure from Nucleate Boiling Ratio), and the OFIR (Onset of Flow Instability Ratio) -- were investigated over a range of core flow velocities under normal operating conditions. The primary coolant flow rate based on a conceptual core model was suggested as design information for the process design of the primary cooling system. A computational fluid dynamics analysis was also carried out to evaluate the coolant velocity distributions between tubular channels and the pressure drop characteristics of the tubular fuel assembly.

  15. Quantitative Analyses of Core Promoters Enable Precise Engineering of Regulated Gene Expression in Mammalian Cells.

    Science.gov (United States)

    Ede, Christopher; Chen, Ximin; Lin, Meng-Yin; Chen, Yvonne Y

    2016-05-20

    Inducible transcription systems play a crucial role in a wide array of synthetic biology circuits. However, the majority of inducible promoters are constructed from a limited set of tried-and-true promoter parts, which are susceptible to common shortcomings such as high basal expression levels (i.e., leakiness). To expand the toolbox for regulated mammalian gene expression and facilitate the construction of mammalian genetic circuits with precise functionality, we quantitatively characterized a panel of eight core promoters, including sequences with mammalian, viral, and synthetic origins. We demonstrate that this selection of core promoters can provide a wide range of basal gene expression levels and achieve a gradient of fold-inductions spanning 2 orders of magnitude. Furthermore, commonly used parts such as minimal CMV and minimal SV40 promoters were shown to achieve robust gene expression upon induction, but also suffer from high levels of leakiness. In contrast, a synthetic promoter, YB_TATA, was shown to combine low basal expression with high transcription rate in the induced state to achieve significantly higher fold-induction ratios compared to all other promoters tested. These behaviors remain consistent when the promoters are coupled to different genetic outputs and different response elements, as well as across different host-cell types and DNA copy numbers. We apply this quantitative understanding of core promoter properties to the successful engineering of human T cells that respond to antigen stimulation via chimeric antigen receptor signaling specifically under hypoxic environments. Results presented in this study can facilitate the design and calibration of future mammalian synthetic biology systems capable of precisely programmed functionality.
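
    Leakiness and fold-induction as used above reduce to simple ratios of basal and induced output. A toy Python sketch with hypothetical reporter readouts (the promoter names follow the abstract; the numbers do not come from the paper):

        # hypothetical reporter readouts (arbitrary units) per core promoter
        promoters = {
            "minCMV":  {"basal": 850.0, "induced": 21000.0},
            "YB_TATA": {"basal":  60.0, "induced": 15000.0},
        }

        for name, x in promoters.items():
            fold = x["induced"] / x["basal"]  # fold-induction ratio
            print(f"{name:8s} leakiness={x['basal']:7.1f}  "
                  f"fold-induction={fold:6.1f}x")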

  16. Core microbial functional activities in ocean environments revealed by global metagenomic profiling analyses.

    Directory of Open Access Journals (Sweden)

    Ari J S Ferreira

    Full Text Available Metagenomics-based functional profiling analysis is an effective means of gaining deeper insight into the composition of marine microbial populations and developing a better understanding of the interplay between the functional genome content of microbial communities and abiotic factors. Here we present a comprehensive analysis of 24 datasets covering surface and depth-related environments at 11 sites around the world's oceans. The complete datasets comprises approximately 12 million sequences, totaling 5,358 Mb. Based on profiling patterns of Clusters of Orthologous Groups (COGs of proteins, a core set of reference photic and aphotic depth-related COGs, and a collection of COGs that are associated with extreme oxygen limitation were defined. Their inferred functions were utilized as indicators to characterize the distribution of light- and oxygen-related biological activities in marine environments. The results reveal that, while light level in the water column is a major determinant of phenotypic adaptation in marine microorganisms, oxygen concentration in the aphotic zone has a significant impact only in extremely hypoxic waters. Phylogenetic profiling of the reference photic/aphotic gene sets revealed a greater variety of source organisms in the aphotic zone, although the majority of individual photic and aphotic depth-related COGs are assigned to the same taxa across the different sites. This increase in phylogenetic and functional diversity of the core aphotic related COGs most probably reflects selection for the utilization of a broad range of alternate energy sources in the absence of light.

  17. Core microbial functional activities in ocean environments revealed by global metagenomic profiling analyses.

    KAUST Repository

    Ferreira, Ari J S

    2014-06-12

    Metagenomics-based functional profiling analysis is an effective means of gaining deeper insight into the composition of marine microbial populations and developing a better understanding of the interplay between the functional genome content of microbial communities and abiotic factors. Here we present a comprehensive analysis of 24 datasets covering surface and depth-related environments at 11 sites around the world's oceans. The complete dataset comprises approximately 12 million sequences, totaling 5,358 Mb. Based on profiling patterns of Clusters of Orthologous Groups (COGs) of proteins, a core set of reference photic and aphotic depth-related COGs, and a collection of COGs associated with extreme oxygen limitation, were defined. Their inferred functions were utilized as indicators to characterize the distribution of light- and oxygen-related biological activities in marine environments. The results reveal that, while light level in the water column is a major determinant of phenotypic adaptation in marine microorganisms, oxygen concentration in the aphotic zone has a significant impact only in extremely hypoxic waters. Phylogenetic profiling of the reference photic/aphotic gene sets revealed a greater variety of source organisms in the aphotic zone, although the majority of individual photic and aphotic depth-related COGs are assigned to the same taxa across the different sites. This increase in the phylogenetic and functional diversity of the core aphotic-related COGs most probably reflects selection for the utilization of a broad range of alternate energy sources in the absence of light.

  18. Suggested protocol for collecting, handling and preparing peat cores and peat samples for physical, chemical, mineralogical and isotopic analyses.

    Science.gov (United States)

    Givelet, Nicolas; Le Roux, Gaël; Cheburkin, Andriy; Chen, Bin; Frank, Jutta; Goodsite, Michael E; Kempter, Heike; Krachler, Michael; Noernberg, Tommy; Rausch, Nicole; Rheinberger, Stefan; Roos-Barraclough, Fiona; Sapkota, Atindra; Scholz, Christian; Shotyk, William

    2004-05-01

    For detailed reconstructions of atmospheric metal deposition using peat cores from bogs, a comprehensive protocol for working with peat cores is proposed. The first step is to locate and determine suitable sampling sites in accordance with the principal goal of the study, the period of time of interest, and the precision required. Using state-of-the-art procedures and field equipment, peat cores are collected in such a way as to provide high-quality records for paleoenvironmental study. Pertinent field observations gathered during the fieldwork are recorded in a field report. Cores are kept frozen at -18 °C until they can be prepared in the laboratory. Frozen peat cores are precisely cut into 1 cm slices using a stainless steel band saw with stainless steel blades. The outside edges of each slice are removed using a titanium knife to avoid any possible contamination which might have occurred during the sampling and handling stage. Each slice is split, with one half kept frozen for future studies (archived) and the other half further subdivided for physical, chemical, and mineralogical analyses. Physical parameters such as ash and water contents, bulk density, and the degree of decomposition of the peat are determined using established methods. A subsample is dried overnight at 105 °C in a drying oven and milled in a centrifugal mill with a titanium sieve. Prior to any expensive and time-consuming chemical procedures and analyses, the resulting powdered samples, after manual homogenisation, are measured for more than twenty-two major and trace elements using non-destructive X-ray fluorescence (XRF) methods. This approach provides a wealth of valuable geochemical data documenting the natural geochemical processes which occur in the peat profiles and their possible effect on the trace metal profiles. The development, evaluation and use of peat cores from bogs as archives of high-resolution records of atmospheric deposition of mineral dust and trace

  19. Submerged terrestrial landscapes in the Baltic Sea: Evidence from multiproxy analyses of sediment cores from Fehmarnbelt

    Science.gov (United States)

    Enters, Dirk; Wolters, Steffen; Blume, Katharina; Segschneider, Martin; Lücke, Andreas; Theuerkauf, Martin; Hübener, Thomas

    2016-04-01

    Five sediment cores were taken from the southern part of the Fehmarn Belt (Baltic Sea) in the context of an environmental impact study for the intended fixed link between Germany and Denmark. The lithologies of the 8 m long cores reveal dramatic changes in sedimentary environments which reflect the early Holocene history of the southern Baltic Sea. A succession of terrestrial, semi-terrestrial, and limnic facies, from glacial sediments to peat, lacustrine/estuarine deposits, and finally marine sediments, documents the interplay of eustatic sea level rise and isostatic rebound, which finally led to the establishment of marine conditions during the Littorina transgression. Age control of the observed changes was established by dating over 50 C-14 samples of different fractions. During the Lateglacial, minerogenic varves with thicknesses of several centimeters verify the existence of a proglacial lake in the Fehmarnbelt. Peat development started around 11,250 cal. BP and terminated ca. 10,600 cal. BP, which is roughly contemporaneous with the end of the Yoldia Phase in the central Baltic Sea. The oldest peat layers consist of undecomposed sedges and reed. Woody remains of willows appear not before 10,700 cal. BP and indicate a stagnant or slowly decreasing water table. This semi-terrestrial phase was followed by a shallow inland lake which existed until the Littorina transgression around 8,300 cal. BP. Initially the lacustrine sediments exhibit high C/N ratios, low δ13Corg values, and contain numerous wood fragments as well as other botanical macro remains. This indicates shallow conditions close to the lake shore. Later, the occurrence of planktonic diatom species such as Aulacoseira ambigua suggests greater water depths. We did not find any indications of the often-postulated catastrophic outburst of the Ancylus Lake via Fehmarnbelt and the Great Belt into the North Sea. Likewise, XRF scanning does not show conspicuous peaks in Ti or K which would have been

  20. GCFR Coupled Neutronic and Thermal-Fluid-Dynamics Analyses for a Core Containing Minor Actinides

    Directory of Open Access Journals (Sweden)

    Diego Castelliti

    2009-01-01

    Problems of future energy availability, climate change, and air quality play an important role in energy production. While current reactor generations provide guaranteed and economical energy production, a new generation of nuclear power plants would broaden the ways in which nuclear energy can be used. To explore these new technological applications, several governments, industries, and research communities decided to contribute to the next reactor generation, called “Generation IV.” Among the six Gen-IV reactor designs, the Gas Cooled Fast Reactor (GCFR) uses a direct-cycle helium turbine for electricity generation and for CO2-free thermochemical production of hydrogen. Additionally, the use of a fast spectrum allows actinide transmutation, minimizing the production of long-lived radioactive waste in an integrated fuel cycle. This paper presents an analysis of GCFR fuel cycle optimization and a thermal-hydraulic analysis of a GCFR prototype under steady-state and transient conditions. The fuel cycle optimization was performed to assess the capability of the GCFR to transmute minor actinides (MAs), while the thermal-hydraulic analysis was performed to investigate the behavior of the reactor and the safety systems during a loss-of-flow accident (LOFA). Preliminary results show that limited quantities of MAs do not significantly affect the thermal-fluid-dynamic behavior of a GCFR core.

  1. CORE

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Hansen, Jonas; Hundebøll, Martin

    2013-01-01

    different flows. Instead of maintaining these approaches separately, we propose a protocol (CORE) that brings together these coding mechanisms. Our protocol uses random linear network coding (RLNC) for intra-session coding but allows nodes in the network to set up inter-session coding regions where flows...... intersect. Routes for unicast sessions are agnostic to other sessions and set up beforehand; CORE will then discover and exploit intersecting routes. Our approach allows the inter-session regions to leverage RLNC to compensate for losses or failures in the overhearing or transmitting process. Thus, we...... increase the benefits of XORing by exploiting the underlying RLNC structure of individual flows. This goes beyond providing additional reliability to each individual session and beyond exploiting coding opportunistically. Our numerical results show that CORE outperforms both forwarding and COPE...
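    As a toy illustration of the intra-session coding CORE builds on, the sketch below forms random XOR combinations of source packets, i.e. random linear network coding over GF(2); the actual protocol works over larger fields and adds the inter-session coding regions, which are omitted here. All names are ours:

```python
import random

def rlnc_encode_gf2(packets, num_coded):
    """Toy RLNC over GF(2): each coded packet is the XOR of a random
    subset of equal-length source packets; the coefficient vector
    travels with the payload so a receiver can decode by Gaussian
    elimination once it holds enough independent combinations."""
    n = len(packets)
    coded = []
    for _ in range(num_coded):
        coeffs = [random.randint(0, 1) for _ in range(n)]
        if not any(coeffs):
            coeffs[random.randrange(n)] = 1   # skip the useless all-zero vector
        payload = bytes(len(packets[0]))
        for c, p in zip(coeffs, packets):
            if c:
                payload = bytes(a ^ b for a, b in zip(payload, p))
        coded.append((coeffs, payload))
    return coded

# Four coded packets from three source packets buys loss resilience
for coeffs, payload in rlnc_encode_gf2([b"pkt-one!", b"pkt-two!", b"pkt-3!!!"], 4):
    print(coeffs, payload)
```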

  3. Reevaluation of JACS code system benchmark analyses of the heterogeneous system. Fuel rods in U+Pu nitric acid solution system

    Energy Technology Data Exchange (ETDEWEB)

    Takada, Tomoyuki; Miyoshi, Yoshinori; Katakura, Jun-ichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2003-03-01

    In order to evaluate the accuracy of criticality calculations using the combination of the multi-group constant library MGCL and the 3-dimensional Monte Carlo code KENO-IV within the criticality safety evaluation code system JACS, benchmark calculations were carried out from 1980 to 1982. In some of the heterogeneous systems, the calculated neutron multiplication factor was less than 0.95. This report presents recalculations for the heterogeneous U+Pu nitric acid solution systems containing neutron poison reported in JAERI-M 9859, examining the cause of those results. The present study has shown that the keff value less than 0.95 given in JAERI-M 9859 is caused by the fact that the water reflector below the cylindrical container was not taken into consideration in the KENO-IV calculation model. By taking the water reflector into account, the KENO-IV calculation gives a keff value greater than 0.95 and good agreement with the experiment. (author)

  4. Quantitative Benchmarking of Production Companies

    DEFF Research Database (Denmark)

    Sørensen, Ole H.; Andersen, Vibeke

    Report presenting the results of the quantitative benchmarking of the production companies in the VIPS project.

  5. Benchmarking in Student Affairs.

    Science.gov (United States)

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  6. RANS analyses of the erosion behavior of a density stratification consisting of a helium–air gas mixture by a low-momentum vertical buoyant jet in the PANDA test facility, the third international benchmark exercise (IBE-3)

    Energy Technology Data Exchange (ETDEWEB)

    Abe, Satoshi, E-mail: abe.satoshi@jaea.go.jp; Ishigaki, Masahiro; Sibamoto, Yasuteru; Yonomoto, Taisuke

    2015-08-15

    Highlights: • The third international benchmark exercise (IBE-3) focused on the erosion of a density stratification by a vertical buoyant jet in the reactor containment vessel. • Two types of turbulence model modifications were applied in order to accurately simulate turbulent helium transport in the density stratification. • The analysis with the turbulence model modifications agrees well with the experimental data. • There is a major difference in turbulent helium mass transport between the cases with and without the turbulence model modifications. - Abstract: Density stratification in the reactor containment vessel is an important phenomenon for hydrogen safety. The Japan Atomic Energy Agency (JAEA) has started the ROSA-SA project on containment thermal hydraulics. As a part of this activity, we participated in the third international CFD benchmark exercise (IBE-3), focused on the erosion of a density stratification by a vertical buoyant jet in a containment vessel. This paper presents our approach to the IBE-3, focusing on the turbulence transport phenomena involved in eroding the density stratification and introducing modified turbulence models to improve the CFD analyses. For this analysis, we modified the CFD code OpenFOAM using two turbulence models: the Kato and Launder modification, to estimate turbulent kinetic energy production around a stagnation point, and the Katsuki model, to account for turbulence damping in the density stratification. As a result, the modified code predicted the experimental data well. The importance of turbulence transport modeling is also discussed using the calculation results.
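    For orientation, the Kato and Launder modification replaces the strain-only production term of standard two-equation models, P_k = nu_t*S^2, by P_k = nu_t*S*Omega; near a stagnation point the vorticity invariant Omega is small while the strain invariant S is large, so the spurious build-up of turbulent kinetic energy is suppressed. A minimal sketch of that substitution (ours, not the authors' OpenFOAM implementation):

```python
def production_standard(nu_t, S):
    """Standard k-epsilon production: P_k = nu_t * S**2 (strain only)."""
    return nu_t * S**2

def production_kato_launder(nu_t, S, Omega):
    """Kato-Launder form: P_k = nu_t * S * Omega. Near a stagnation
    point S is large but Omega ~ 0, so production is damped there."""
    return nu_t * S * Omega

nu_t = 1e-3              # eddy viscosity, arbitrary units
S, Omega = 500.0, 20.0   # stagnation-like state: strong strain, weak vorticity
print(production_standard(nu_t, S))             # 250.0 -> overpredicted
print(production_kato_launder(nu_t, S, Omega))  # 10.0  -> suppressed
```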

  7. Benchmarking in ICT

    OpenAIRE

    Blecher, Jan

    2009-01-01

    The aim of this paper is to describe the benefits of benchmarking IT in a wider context and the scope of benchmarking in general. I characterise benchmarking as a process and outline its basic rules and guidelines. Further, I define the domains of IT benchmarking and describe possibilities for their use. The best-known type of IT benchmark is the cost benchmark, which represents only a subset of benchmarking opportunities; in this paper, the cost benchmark is rather a notional first step towards benchmarking's contribution to the company. IT benchmark...

  8. DSP Platform Benchmarking

    OpenAIRE

    Xinyuan, Luo

    2009-01-01

    Benchmarking of DSP kernel algorithms was conducted in this thesis on a DSP processor used for teaching in the course TESA26 in the Department of Electrical Engineering. It includes benchmarking of cycle count and memory usage. The goal of the thesis is to evaluate the quality of a single-MAC DSP instruction set and to provide suggestions for further improvement of the instruction set architecture accordingly. The scope of the thesis is limited to benchmarking the processor based on assembly coding only. The...

  9. Benchmarking University Real Estate: Management Information for Real Estate Decisions

    NARCIS (Netherlands)

    Den Heijer, A.C.; De Vries, J.C.

    2004-01-01

    This is the final report of the study "Benchmarking universiteitsvastgoed" (benchmarking university real estate). The report merges two partial products: the theory report (published in December 2003) and the practice report (published in January 2004). Topics in the theory part are the analysis of other

  10. [Benchmarking in health care: conclusions and recommendations].

    Science.gov (United States)

    Geraedts, Max; Selbmann, Hans-Konrad

    2011-01-01

    The German Health Ministry funded 10 demonstration projects and accompanying research of benchmarking in health care. The accompanying research work aimed to infer generalisable findings and recommendations. We performed a meta-evaluation of the demonstration projects and analysed national and international approaches to benchmarking in health care. It was found that the typical benchmarking sequence is hardly ever realised. Most projects lack a detailed analysis of structures and processes of the best performers as a starting point for the process of learning from and adopting best practice. To tap the full potential of benchmarking in health care, participation in voluntary benchmarking projects should be promoted that have been demonstrated to follow all the typical steps of a benchmarking process.

  11. Sedimentological and biostratigraphical analyses of short sediment cores from Hagelseewli (2339 m a.s.l.) in the Swiss Alps

    Directory of Open Access Journals (Sweden)

    Jacqueline F.N. VAN LEEUWEN

    2000-09-01

    Several short sediment cores, between 35 and 40 cm long, from Hagelseewli, a small, remote lake in the Swiss Alps at an elevation of 2339 m a.s.l., were correlated according to their organic matter content. The sediments are characterized by organic silts and show in their uppermost part a surprisingly high amount of organic matter (30-35%). Synchronous changes, occurring in pollen from snow-bed vegetation, the alga Pediastrum, chironomids, and grain-size composition, point to a climatic change, interpreted as cooler or shorter summers, that led to prolonged ice cover on the lake. According to palynological results the sediments date back to at least the early 15th century A.D., with the cooling phase encompassing the period between the late 16th and the mid-19th century, thus coinciding with the Little Ice Age. Low concentrations of both chironomid head capsules and cladoceran remains, in combination with results from fossil pigment analyses, point to longer periods of bottom-water anoxia as a result of long-lasting ice cover that prevented mixing of the water column. According to our results, aquatic biota in Hagelseewli are mainly indirectly influenced by climate change: the duration of ice cover on the lake controls the mixing of the water column as well as light availability for phytoplankton blooms.

  12. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  13. Benchmarking a DSP processor

    OpenAIRE

    Lennartsson, Per; Nordlander, Lars

    2002-01-01

    This Master thesis describes the benchmarking of a DSP processor. Benchmarking means measuring the performance in some way. In this report, we have focused on the number of instruction cycles needed to execute certain algorithms. The algorithms we have used in the benchmark are all very common in signal processing today. The results we have reached in this thesis have been compared to benchmarks for other processors, performed by Berkeley Design Technology, Inc. The algorithms were programm...

  14. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    , and more are underway. As a result, there is an increasing need for an independent benchmark for spatio-temporal indexes. This paper characterizes the spatio-temporal indexing problem and proposes a benchmark for the performance evaluation and comparison of spatio-temporal indexes. Notably, the benchmark...

  15. The validation benchmark analyses for CMS data

    CERN Document Server

    Holub, Lukas

    2016-01-01

    The main goal of this report is to summarize my work at CERN during this summer. My first task was to port code and dataset files from the CERN Open Data Portal to GitHub, which is more convenient for users. The second part of my work was to copy the environment from the CERN Open Data Virtual Machine and apply it in the SWAN analysis environment. The last task was to rescale the X-axis of a histogram.

  16. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    The article analyses the evolution and possibilities of application of benchmarking in the telecommunication sphere. It studies the essence of benchmarking on the basis of a generalisation of different scientists' approaches to defining this notion. In order to improve the activity of telecommunication operators, the article identifies the benchmarking technology and the main factors that determine an operator's success in the modern market economy, as well as the mechanism of benchmarking and the component stages of carrying out benchmarking by a telecommunication operator. It analyses the telecommunication market and identifies the dynamics of its development and tendencies in the changing composition of telecommunication operators and providers. Having generalised the existing experience of benchmarking application, the article identifies the main types of benchmarking of telecommunication operators by the following features: by the level of conduct (branch, inter-branch and international benchmarking); by relation to participation in the conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  17. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other.The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  18. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    OpenAIRE

    Zaharchenko Lolita A.; Kolesnyk Oksana A.

    2013-01-01

    The article analyses the evolution and possibilities of application of benchmarking in the telecommunication sphere. It studies the essence of benchmarking on the basis of a generalisation of different scientists' approaches to defining this notion. In order to improve the activity of telecommunication operators, the article identifies the benchmarking technology and the main factors that determine an operator's success in the modern market economy, and the mechanism of benchmarking an...

  19. Characterization of rapid climate changes through isotope analyses of ice and entrapped air in the NEEM ice core

    DEFF Research Database (Denmark)

    Guillevic, Myriam

    Greenland ice cores have revealed the occurrence of rapid climatic instabilities during the last glacial period, known as Dansgaard-Oeschger (DO) events, while marine cores from the North Atlantic have evidenced layers of ice-rafted debris deposited by iceberg melt, caused by the collapse...... four Greenland deep ice cores (GRIP, GISP2, NGRIP and NEEM) are investigated over a series of Dansgaard-Oeschger events (DO 8, 9 and 10). Combined with firn modeling, δ15N data allow us to quantify abrupt temperature increases for each drill site (1σ = 0.6°C for NEEM, GRIP and GISP2, 1.5°C for NGRIP...

  20. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool in the search for points of reference necessary to assess an institution's competitive position and to learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study indicates the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled the development of a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects, relating them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature and the author's experience from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived from the conducted analysis.

  1. Data from core analyses, aquifer testing, and geophysical logging of Denver Basin bedrock aquifers at Castle Pines, Colorado

    Science.gov (United States)

    Robson, S.G.; Banta, E.R.

    1993-01-01

    This report contains data pertaining to the geologic and hydrologic characteristics of the bedrock aquifers of the Denver basin at a site near Castle Pines, Colorado. Data consist of a lithologic description of about 2,400 ft of drill core and laboratory determinations of mineralogy, grain size, bulk and grain density, porosity, specific yield, and specific retention for selected core samples. Water-level data, atmospheric-pressure measurements, aquifer-compression measurements, and borehole geophysical logs are also included.

  2. Characterization of rapid climate changes through isotope analyses of ice and entrapped air in the NEEM ice core

    DEFF Research Database (Denmark)

    Guillevic, Myriam

    Greenland ice cores have revealed the occurrence of rapid climatic instabilities during the last glacial period, known as Dansgaard-Oeschger (DO) events, while marine cores from the North Atlantic have evidenced layers of ice-rafted debris deposited by iceberg melt, caused by the collapse...... of Northern hemisphere ice sheets, known as Heinrich events. The imprint of DO and Heinrich events is also recorded at mid to low latitudes in different archives of the northern hemisphere. A detailed multi-proxy study of the sequence of these rapid instabilities is essential for understanding the climate...... mechanisms at play. Recent analytical developments have made it possible to measure new paleoclimate proxies in Greenland ice cores. In this thesis we first contribute to these analytical developments by measuring the new innovative parameter 17O-excess at LSCE (Laboratoire des Sciences du Climat et de l...

  3. The Conic Benchmark Format

    DEFF Research Database (Denmark)

    Friberg, Henrik A.

    This document constitutes the technical reference manual of the Conic Benchmark Format with file extension .cbf or .CBF. It unifies linear, second-order cone (also known as conic quadratic) and semidefinite optimization with mixed-integer variables. The format has been designed with benchmark libraries...... in mind, and therefore focuses on compact and easily parsable representations. The problem structure is separated from the problem data, and the format moreover facilitates benchmarking of hot-start capability through sequences of changes.

  4. Benchmarking ENDF/B-VII.0

    Science.gov (United States)

    van der Marck, Steven C.

    2006-12-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). For testing delayed neutron data more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257

  5. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  6. Thermal Performance Benchmarking (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  7. Manual for the Secondary Education (VO) Benchmark

    NARCIS (Netherlands)

    Blank, j.l.t.

    2008-01-01

    Manual for the secondary education (VO) benchmark, 25 November 2008, by J.L.T. Blank, IPSE Studies. A manual for reading the i

  8. Benchmarking of Vocational Education Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    In this working paper we discuss how the Danish vocational schools can be benchmarked, and we present the results of a number of calculation models. Benchmarking the vocational schools is conceptually complicated: the schools offer a wide range of different programmes, which makes it difficult

  9. Benchmarking of Municipal Case Processing

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007, the National Social Appeals Board (Ankestyrelsen) is to carry out benchmarking of the quality of the municipalities' case processing. The purpose of the benchmarking is to develop the design of the practice surveys with a view to better follow-up, and to improve the municipalities' case processing. This working paper discusses methods for benchmarking...

  10. Methodology for Benchmarking IPsec Gateways

    Directory of Open Access Journals (Sweden)

    Adam Tisovský

    2012-08-01

    The paper analyses the forwarding performance of an IPsec gateway over a range of offered loads. It focuses on the forwarding rate and packet loss, particularly at the gateway's performance peak and in the state of gateway overload. It explains the possible performance degradation when the gateway is overloaded by excessive offered load. The paper further evaluates different approaches for obtaining forwarding performance parameters: the widely used throughput described in RFC 1242, the maximum forwarding rate with zero packet loss, and our proposed equilibrium throughput. According to our observations, the equilibrium throughput may be the most universal parameter for benchmarking security gateways, as the others may depend on the duration of the test trials. Employing the equilibrium throughput would also greatly shorten the time required for benchmarking. Lastly, the paper presents a methodology and a hybrid step/binary search algorithm for obtaining the value of the equilibrium throughput.
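    The hybrid step/binary search can be pictured as a coarse ramp that brackets the load at which the gateway stops forwarding in full, followed by bisection inside that bracket. A schematic sketch only; `measure_forwarding_rate` stands in for a real traffic-generator trial, and every name and number here is hypothetical:

```python
def find_equilibrium_throughput(measure_forwarding_rate,
                                start=100.0, step=100.0,
                                max_load=10_000.0, tol=1.0):
    """Hybrid step/binary search for the equilibrium throughput: the
    highest offered load (say, Mbit/s) the gateway still forwards in full."""
    # Step phase: ramp the offered load until forwarding falls behind.
    lo, load = start, start
    while load <= max_load and measure_forwarding_rate(load) >= load - tol:
        lo, load = load, load + step
    hi = load
    # Binary phase: bisect [lo, hi] down to the requested tolerance.
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if measure_forwarding_rate(mid) >= mid - tol:
            lo = mid
        else:
            hi = mid
    return lo

# Toy gateway saturating near 1234 Mbit/s, degrading slightly under overload
toy = lambda x: min(x, 1234.0 - 0.1 * max(0.0, x - 1234.0))
print(f"equilibrium throughput ~ {find_equilibrium_throughput(toy):.0f} Mbit/s")
```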

  11. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views...... are put to the test. The first is a reformist benchmarking cycle where organisations defer to experts to create a benchmark that conforms with the broader system of politico-economic norms. The second is a revolutionary benchmarking cycle driven by expert-activists that seek to contest strong vested...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  12. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  13. Isotopic and chemical analyses of a temperate firn core from a Chinese alpine glacier and its regional climatic significance

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Mt. Yulong is the southernmost currently glacier-covered area in Eurasia, including China. There are 19 sub-tropical temperate glaciers on the mountain, controlled by the south-western monsoon climate. In the summer of 1999, a firn core, 10.10 m long, extending down to glacier ice, was recovered in the accumulation area of the largest glacier, Baishui No. 1. Periodic variations of climatic signals above 7.8 m depth were apparent, and net accumulation of four years was identified by the annual oscillations of isotopic and ionic composition. The boundaries of annual accumulation were confirmed by higher values of electrical conductivity and pH, and by dirty refreezing ice layers at the levels of summer surfaces. Calculated mean annual net accumulation from 1994/1995 to 1997/1998 was about 900 mm water equivalent. The amplitude of isotopic variations in the profile decreased with increasing depth, and isotopic homogenization occurred below 7.8 m as a result of meltwater percolation. Variations of δ18O above 7.8 m showed an approximate correlation with the winter climatic trend at Li Jiang Station, 25 km away. Concentrations of Ca2+ and Mg2+ were much higher than those of Na+ and K+, indicating that the air masses for precipitation were mainly from a continental source, and that the core material accumulated during the winter period. The close correspondence of Cl- and Na+ indicated their common origin. Very low concentrations of SO42- and NO3- suggest that pollution caused by human activities is quite low in the area. The mean annual net accumulation in the core and the estimated ablation indicate that the average annual precipitation above the glacier's equilibrium line is 2400-3150 mm, but this needs to be confirmed by long-term observation of mass balance.

  14. Interpretation of actinide-distribution data obtained from non-destructive and destructive post-test analyses of an intact-core column of Culebra dolomite.

    Science.gov (United States)

    Perkins, W G; Lucero, D A

    2001-02-01

    The US Department of Energy (DOE), with technical assistance from Sandia National Laboratories, has successfully received EPA certification and opened the Waste Isolation Pilot Plant (WIPP), a nuclear waste disposal facility located approximately 42 km east of Carlsbad, NM. Performance assessment (PA) analyses indicate that human intrusions by inadvertent, intermittent drilling for resources provide the only credible mechanisms for significant releases of radionuclides from the disposal system. For long-term brine releases, migration pathways through the permeable layers of rock above the Salado formation are important. Major emphasis is placed on the Culebra Member of the Rustler Formation because this is the most transmissive geologic layer overlying the WIPP site. In order to help quantify parameters for the calculated releases, radionuclide transport experiments have been carried out using intact-core columns obtained from the Culebra dolomite member of the Rustler Formation within the WIPP site. This paper deals primarily with results of analyses for 241Pu and 241Am distributions developed during transport experiments in one of these cores. Transport experiments were done using a synthetic brine that simulates Culebra brine at the core recovery location (the WIPP air-intake shaft (AIS)). Hydraulic characteristics (i.e., apparent porosity and apparent dispersion coefficient) for intact-core columns were obtained via experiments using the conservative tracer 22Na. Elution experiments carried out over periods of a few days with tracers 232U and 239Np showed that these tracers were weakly retarded, as indicated by delayed elution of the species. Elution experiments with tracers 241Pu and 241Am were attempted but no elution of either species has been observed to date, including experiments of many months' duration. In order to quantify retardation of the non-eluted species 241Pu and 241Am after a period of brine flow, non-destructive and destructive analyses of

  15. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
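    The RDF/SPARQL design described above can be pictured with a toy example: annotations stored as triples and precision computed by a single SPARQL query. The namespace and the kind/doc/mutation predicates below are invented for illustration, not the infrastructure's actual ontology:

```python
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/ann#")   # hypothetical schema
g = Graph()

def add_annotation(kind, doc, mutation, i):
    """Store one annotation (gold-standard or system output) as triples."""
    a = URIRef(f"http://example.org/ann#{kind}-{i}")
    g.add((a, EX.kind, Literal(kind)))
    g.add((a, EX.doc, Literal(doc)))
    g.add((a, EX.mutation, Literal(mutation)))

for i, (doc, mut) in enumerate([("d1", "V600E"), ("d1", "T790M"), ("d2", "G12D")]):
    add_annotation("gold", doc, mut, i)
for i, (doc, mut) in enumerate([("d1", "V600E"), ("d2", "G12D"), ("d2", "Q61K")]):
    add_annotation("sys", doc, mut, i)

# True positives: system annotations whose (doc, mutation) pair matches gold
tp_query = """
    SELECT (COUNT(DISTINCT ?s) AS ?n) WHERE {
      ?s  <http://example.org/ann#kind> "sys" ;
          <http://example.org/ann#doc> ?d ;
          <http://example.org/ann#mutation> ?m .
      ?gd <http://example.org/ann#kind> "gold" ;
          <http://example.org/ann#doc> ?d ;
          <http://example.org/ann#mutation> ?m .
    }"""
tp = int(list(g.query(tp_query))[0][0])
n_sys = len(list(g.triples((None, EX.kind, Literal("sys")))))
print("precision =", tp / n_sys)   # 2 of 3 system annotations match gold
```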

  16. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  17. Measuring NUMA effects with the STREAM benchmark

    CERN Document Server

    Bergstrom, Lars

    2011-01-01

    Modern high-end machines feature multiple processor packages, each of which contains multiple independent cores and integrated memory controllers connected directly to dedicated physical RAM. These packages are connected via a shared bus, creating a system with a heterogeneous memory hierarchy. Since this shared bus has less bandwidth than the sum of the links to memory, aggregate memory bandwidth is higher when parallel threads all access memory local to their processor package than when they access memory attached to a remote package. But, the impact of this heterogeneous memory architecture is not easily understood from vendor benchmarks. Even where these measurements are available, they provide only best-case memory throughput. This work presents a series of modifications to the well-known STREAM benchmark to measure the effects of NUMA on both a 48-core AMD Opteron machine and a 32-core Intel Xeon machine.
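    As a rough, portable stand-in for the idea (not the actual C STREAM code, and without the thread pinning that actually exposes NUMA effects), the sketch below times a numpy version of the triad kernel and converts the best repetition to bandwidth:

```python
import time
import numpy as np

def triad_bandwidth_gbs(n=20_000_000, scalar=3.0, reps=5):
    """numpy analogue of the STREAM 'triad' kernel a[i] = b[i] + s*c[i].
    Counts 2 reads + 1 write of float64 per element (24 bytes), as STREAM
    does; the temporary created by scalar*c adds real traffic, so this
    under-reports somewhat. Best-of-reps timing, as in STREAM."""
    b, c = np.random.rand(n), np.random.rand(n)
    a = np.empty_like(b)
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        np.add(b, scalar * c, out=a)
        best = min(best, time.perf_counter() - t0)
    return 24e-9 * n / best   # GB/s

print(f"triad bandwidth ~ {triad_bandwidth_gbs():.1f} GB/s")
```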

  18. Tectono-geochemistry analyses of fault rocks in shear zone of metamorphic core complex in north Jiangxi, China

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    Through systematic sampling tests and mass equilibrium analyses of the three sorts of complex assemblages (intrusive complex, tectonic complex and metamorphic complex) penetrating the metamorphic core complex (MCC) in the Xingzi area of north Jiangxi, the authors find that, like the major elements, the trace elements of small ionic radius, high specific gravity and high ionic potential form accumulation series in the fault rocks rather than divergence series. Among the rare earth elements, ΣREE and HREE are relatively concentrated, with distribution patterns characterised by a rising trend and Eu depletion. Only on the upper side of the ductile fault do some phenomena contrary to these general rules exist, most of which are controlled by rock rheologic differentiation, the coupling of mechanics and chemistry, and the inversion of the tectonic regime.

  19. Core Genome Multilocus Sequence Typing Scheme for Stable, Comparative Analyses of Campylobacter jejuni and C. coli Human Disease Isolates

    Science.gov (United States)

    Bray, James E.; Jolley, Keith A.; McCarthy, Noel D.

    2017-01-01

    Human campylobacteriosis, caused by Campylobacter jejuni and C. coli, remains a leading cause of bacterial gastroenteritis in many countries, but the epidemiology of campylobacteriosis outbreaks remains poorly defined, largely due to limitations in the resolution and comparability of isolate characterization methods. Whole-genome sequencing (WGS) data enable the improvement of sequence-based typing approaches, such as multilocus sequence typing (MLST), by substantially increasing the number of loci examined. A core genome MLST (cgMLST) scheme defines a comprehensive set of those loci present in most members of a bacterial group, balancing very high resolution with comparability across the diversity of the group. Here we propose a set of 1,343 loci as a human campylobacteriosis cgMLST scheme (v1.0), the allelic profiles of which can be assigned to core genome sequence types. The 1,343 loci chosen were a subset of the 1,643 loci identified in the reannotation of the genome sequence of C. jejuni isolate NCTC 11168, chosen as being present in >95% of draft genomes of 2,472 representative United Kingdom campylobacteriosis isolates, comprising 2,207 (89.3%) C. jejuni isolates and 265 (10.7%) C. coli isolates. Validation of the cgMLST scheme was undertaken with 1,478 further high-quality draft genomes, containing 150 or fewer contiguous sequences, from disease isolate collections: 99.5% of these isolates contained ≥95% of the 1,343 cgMLST loci. In addition to the rapid and effective high-resolution analysis of large numbers of diverse isolates, the cgMLST scheme enabled the efficient identification of very closely related isolates from a well-defined single-source campylobacteriosis outbreak. PMID:28446571
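    The core-locus screen at the heart of the scheme reduces to a presence/absence criterion. A minimal sketch of the 95% presence rule on a boolean isolate-by-locus matrix (toy data; the paper screened 1,643 candidate loci against 2,472 draft genomes):

```python
import numpy as np

def select_core_loci(presence, locus_ids, threshold=0.95):
    """Keep loci present in at least `threshold` of the isolates.
    presence: boolean matrix with rows = isolates, columns = loci."""
    frac = presence.mean(axis=0)          # per-locus presence fraction
    return [lid for lid, f in zip(locus_ids, frac) if f >= threshold]

# Toy matrix: 6 isolates x 4 candidate loci (locusC is too patchy)
presence = np.array([
    [1, 1, 0, 1],
    [1, 1, 1, 1],
    [1, 1, 0, 1],
    [1, 1, 1, 1],
    [1, 1, 0, 1],
    [1, 1, 1, 1],
], dtype=bool)
print(select_core_loci(presence, ["locusA", "locusB", "locusC", "locusD"]))
# -> ['locusA', 'locusB', 'locusD']
```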

  20. The record of Miocene climatic events in AND-2A drill core (Antarctica): Insights from provenance analyses of basement clasts

    Science.gov (United States)

    Sandroni, Sonia; Talarico, Franco M.

    2011-01-01

    This paper presents the results of a detailed quantitative provenance investigation of gravel-size clasts occurring within the late Early to Late Miocene sedimentary glacimarine section recovered for the first time by the AND-2A core in the SW sector of the Ross Sea (southern McMurdo Sound, Antarctica). This period of time is of crucial interest, as it includes two of the major Cenozoic events in the global climatic evolution: the mid-Miocene climatic optimum and the middle Miocene climate transition. Petrographical and mineral chemistry data on basement clasts allow us to identify two different diagnostic clast assemblages, which clearly suggest two specific sectors of southern Victoria Land as the most likely sources: the Mulock-Skelton glacier and the Koettlitz-Blue glacier regions. Distribution patterns reveal strong fluctuations of the detritus source areas throughout the investigated core interval, variations which can be interpreted as the direct result of an evolving McMurdo Sound paleogeography during the late Early to Late Miocene. Consistent with sedimentological studies, gravel-fraction clast distribution patterns clearly testify that the Antarctic ice sheet experienced a dramatic contraction at ca. 17.35 ± 0.14 Ma (likely correlated with the onset of the climatic optimum). In general, the gravel-fraction clasts show that the variations of the paleoenvironmental drivers characterising this period were able to exert deep transformations on the Antarctic ice sheet, and they reveal the methodology to be a powerful tool for the reconstruction of paleo-glacial-flow directions and paleogeographic scenarios.

  1. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, NOAEL-based toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red

  2. Benchmarking expert system tools

    Science.gov (United States)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  3. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This data compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measure the City's debt ratio and bond ratings....

  4. GeodeticBenchmark_GEOMON

    Data.gov (United States)

    Vermont Center for Geographic Information — The GeodeticBenchmark_GEOMON data layer consists of geodetic control monuments (points) that have a known position or spatial reference. The locations of these...

  5. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  6. Applying relationship anecdotes paradigm interviews to study client-therapist relationship narratives: Core conflictual relationship theme analyses.

    Science.gov (United States)

    Wiseman, Hadas; Tishby, Orya

    2017-05-01

    We describe client-therapist relational narratives collected in relationship anecdotes paradigm (RAP) interviews during psychotherapy and the application of the core conflictual relationship theme (CCRT) method. Changes in clients' and therapists' CCRT in relation to each other are examined and associations between their CCRTs and self-reported ruptures and repairs are explored. Sixty-seven clients and 27 therapists underwent RAP interviews and completed self-report rupture and repair items at early, middle, and late psychodynamic psychotherapy. Client-therapist relationship narratives were rated on the CCRT and the relational interplay within dyads was explored qualitatively. CCRT changes from early to late therapy showed that with time clients perceived the therapist (RO) and the self (RS) more positively, and the therapist perceived the self (RS) less negatively. Some associations were found between tension in the session and clients' and therapists' negative RO and RS. Therapists' reports of alliance repairs were associated with positive RO and RS. Relational narratives that clients and therapists tell in RAP interviews about meaningful interactions between them, enhance our understanding of clients' and therapists' inner experiences during interpersonal dances in the therapeutic relationship. Limitations and directions for future research are discussed, and implications for training are suggested.

  7. On Big Data Benchmarking

    OpenAIRE

    Han, Rui; Lu, Xiaoyi

    2014-01-01

    Big data systems address the challenges of capturing, storing, managing, analyzing, and visualizing big data. Within this context, developing benchmarks to evaluate and compare big data systems has become an active topic for both research and industry communities. To date, most of the state-of-the-art big data benchmarks are designed for specific types of systems. Based on our experience, however, we argue that considering the complexity, diversity, and rapid evolution of big data systems, fo...

  8. Benchmarking in Foodservice Operations.

    Science.gov (United States)

    2007-11-02

    Benchmarking studies lasted from nine to twelve months, and could extend beyond that time for numerous reasons (49). Benchmarking was not simply data comparison, a fad, a means for reducing resources, a quick-fix program, or industrial tourism; benchmarking was a complete process.

  9. Constraints on Neogene deformation in the southern Terror Rift from calcite twinning analyses of veins within the ANDRILL MIS core, Victoria Land Basin, Antarctica

    Science.gov (United States)

    Paulsen, T. S.; Demosthenous, C.; Wilson, T. J.; Millan, C.

    2009-12-01

    The ANDRILL MIS (McMurdo Ice Shelf) Drilling Project obtained over 1200 meters of Neogene sedimentary and volcanic rocks in 2006/2007. Systematic fracture logging of the AND-1B core identified 1,475 natural fractures, i.e. pre-existing fractures in the rock intersected by coring. The most abundant natural fractures are normal faults and calcite veins; reverse faults, brecciated zones, and sedimentary intrusions are also present. In order to better understand Neogene deformation patterns within the southern Terror Rift, we have been conducting strain analyses on mechanically twinned calcite within healed fractures in the drill core. Twinning strains using all of the data from each sample studied to date range from 2% to 10%. The cleaned data (20% of the largest magnitude deviations removed) typically show ≤30% negative expected values, consistent with a single deformation episode or multiple ~coaxial deformation episodes. The majority of the samples record horizontal extension, similar to strain patterns expected in a normal fault regime and/or vertical sedimentary compaction in a continental rift system. The morphology, width, and intensity of twins in the samples suggest that twinning typically occurred at temperatures <170° C. Twinning intensities suggest differential stress magnitudes that caused the twinning ranged from 216 to 295 MPa.
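    The cleaning and homogeneity check quoted above (discard the 20% largest-magnitude deviations, then inspect the share of negative expected values) can be sketched as generic bookkeeping; the arrays below are hypothetical per-twin-set values, and this is our illustration, not the authors' analysis code:

```python
import numpy as np

def cleaned_nev_fraction(deviations, expected_values, drop_frac=0.20):
    """Discard the `drop_frac` largest-magnitude deviations, then return
    the fraction of negative expected values (NEVs) among the remainder;
    <=30% NEVs is conventionally read as one (or ~coaxial) deformation
    episode."""
    dev = np.abs(np.asarray(deviations, dtype=float))
    exp = np.asarray(expected_values, dtype=float)
    keep = np.argsort(dev)[: int(len(dev) * (1.0 - drop_frac))]
    return float((exp[keep] < 0).mean())

# Hypothetical sample of ten twin sets
dev = [0.4, 1.1, 0.2, 0.7, 3.0, 0.9, 0.3, 2.4, 0.6, 0.5]
exp = [0.8, 1.2, -0.3, 0.5, -2.5, 0.9, 0.4, -0.2, 1.1, 3.0]
print(f"NEV fraction after cleaning: {cleaned_nev_fraction(dev, exp):.0%}")
```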

  10. Benchmarking File System Benchmarking: It *IS* Rocket Science

    OpenAIRE

    Seltzer, Margo I.; Tarasov, Vasily; Bhanage, Saumitra; Zadok, Erez

    2011-01-01

    The quality of file system benchmarking has not improved in over a decade of intense research spanning hundreds of publications. Researchers repeatedly use a wide range of poorly designed benchmarks, and in most cases, develop their own ad-hoc benchmarks. Our community lacks a definition of what we want to benchmark in a file system. We propose several dimensions of file system benchmarking and review the wide range of tools and techniques in widespread use. We experimentally show that even t...

  11. Interpretation of data obtained from non-destructive and destructive post-test analyses of an intact-core column of Culebra dolomite

    Energy Technology Data Exchange (ETDEWEB)

    Lucero, Daniel L.; Perkins, W. George

    1998-09-01

    The U.S. Department of Energy (DOE) has been developing a nuclear waste disposal facility, the Waste Isolation Pilot Plant (WIPP), located approximately 42 km east of Carlsbad, New Mexico. The WIPP is designed to demonstrate the safe disposal of transuranic wastes produced by the defense nuclear-weapons program. Performance assessment analyses (U.S. DOE, 1996) indicate that human intrusion by inadvertent and intermittent drilling for resources provides the only credible mechanisms for significant releases of radionuclides from the disposal system. These releases may occur by five mechanisms: (1) cuttings, (2) cavings, (3) spallings, (4) direct brine releases, and (5) long-term brine releases. The first four mechanisms could result in immediate release of contaminant to the accessible environment. For the last mechanism, migration pathways through the permeable layers of rock above the Salado are important, and major emphasis is placed on the Culebra Member of the Rustler Formation because this is the most transmissive geologic layer in the disposal system. For reasons of initial quantity, half-life, and specific radioactivity, certain isotopes of Th, U, Am, and Pu would dominate calculated releases from the WIPP. In order to help quantify parameters for the calculated releases, radionuclide transport experiments have been carried out using five intact-core columns obtained from the Culebra dolomite member of the Rustler Formation within the Waste Isolation Pilot Plant (WIPP) site in southeastern New Mexico. This report deals primarily with results of analyses for 241Pu and 241Am distributions developed during transport experiments in one of these cores. All intact-core column transport experiments were done using Culebra-simulant brine relevant to the core recovery location (the WIPP air-intake shaft, AIS). Hydraulic characteristics (i.e., apparent porosity and apparent dispersion coefficient) for intact-core columns were obtained via experiments using conservative

  12. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  13. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only and GPU-accelerated implementations. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows

  14. Building with Benchmarks: The Role of the District in Philadelphia's Benchmark Assessment System

    Science.gov (United States)

    Bulkley, Katrina E.; Christman, Jolley Bruce; Goertz, Margaret E.; Lawrence, Nancy R.

    2010-01-01

    In recent years, interim assessments have become an increasingly popular tool in districts seeking to improve student learning and achievement. Philadelphia has been at the forefront of this change, implementing a set of Benchmark assessments aligned with its Core Curriculum district-wide in 2004. In this article, we examine the overall context…

  15. An ultra-clean technique for accurately analysing Pb isotopes and heavy metals at high spatial resolution in ice cores with sub-pg g(-1) Pb concentrations.

    Science.gov (United States)

    Burn, Laurie J; Rosman, Kevin J R; Candelone, Jean-Pierre; Vallelonga, Paul; Burton, Graeme R; Smith, Andrew M; Morgan, Vin I; Barbante, Carlo; Hong, Sungmin; Boutron, Claude F

    2009-02-23

    Measurements of Pb isotope ratios in ice containing sub-pg g(-1) concentrations are easily compromised by contamination, particularly where limited sample is available. Improved techniques are essential if Antarctic ice cores are to be analysed with sufficient spatial resolution to reveal seasonal variations due to climate. This was achieved here by using stainless steel chisels and saws and strict protocols in an ultra-clean cold room to decontaminate and section ice cores. Artificial ice cores, prepared from high-purity water, were used to develop and refine the procedures and quantify blanks. Ba and In, two other important elements present at pg g(-1) and fg g(-1) concentrations in polar ice, were also measured. The final blank amounted to 0.2+/-0.2 pg of Pb with (206)Pb/(207)Pb and (208)Pb/(207)Pb ratios of 1.16+/-0.12 and 2.35+/-0.16, respectively, 1.5+/-0.4 pg of Ba and 0.6+/-2.0 fg of In, most of which probably originates from abrasion of the steel saws by the ice. The procedure was demonstrated on a Holocene Antarctic ice core section and was shown to contribute blanks of only approximately 5%, approximately 14% and approximately 0.8% to monthly resolved samples with respective Pb, Ba and In concentrations of 0.12 pg g(-1), 0.3 pg g(-1) and 2.3 fg g(-1). Uncertainties in the Pb isotopic ratio measurements were degraded by only approximately 0.2%.
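
    The quoted blank contributions can be checked with simple arithmetic. In the sketch below the monthly sample mass is an assumption inferred from the reported numbers, not a value stated in the record.

        # Back-of-the-envelope check of the reported ~5% Pb blank contribution.
        sample_mass_g = 33.0        # assumed monthly sample mass (inferred, not stated)
        blank_pb_pg = 0.2           # procedural Pb blank from the record
        pb_conc_pg_per_g = 0.12     # Pb concentration in the ice

        sample_pb_pg = pb_conc_pg_per_g * sample_mass_g
        blank_fraction = blank_pb_pg / sample_pb_pg
        print(f"Pb blank contribution: {blank_fraction:.1%}")  # ~5%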

  16. A framework of benchmarking land models

    Science.gov (United States)

    Luo, Y. Q.; Randerson, J.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, P.; Dalmonech, D.; Fisher, J.; Fisher, R.; Friedlingstein, P.; Hibbard, K.; Hoffman, F.; Huntzinger, D.; Jones, C. D.; Koven, C.; Lawrence, D.; Li, D. J.; Mahecha, M.; Niu, S. L.; Norby, R.; Piao, S. L.; Qi, X.; Peylin, P.; Prentice, I. C.; Riley, W.; Reichstein, M.; Schwalm, C.; Wang, Y. P.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-02-01

    Land models, which have been developed by the modeling community in the past two decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure and evaluate performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land models. The framework includes (1) targeted aspects of model performance to be evaluated; (2) a set of benchmarks as defined references to test model performance; (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies; and (4) model improvement. Component 4 may or may not be involved in a benchmark analysis but is an ultimate goal of general modeling research. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and the land-surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics across timescales in response to both weather and climate change. Benchmarks that are used to evaluate models generally consist of direct observations, data-model products, and data-derived patterns and relationships. Metrics of measuring mismatches between models and benchmarks may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance for future improvement. Iterations between model evaluation and improvement via benchmarking shall demonstrate progress of land modeling and help establish confidence in land models for their predictions of future states of ecosystems and climate.
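
    As an illustration of components (2) and (3), a scoring system might normalize each variable's data-model mismatch by the observed variability and combine the per-variable scores with weights. The sketch below is a minimal example of that idea; the variable names, weights, and series are hypothetical, not part of the proposed framework.

        import numpy as np

        def variable_score(model, obs):
            """Score one variable on [0, 1]: 1 is a perfect match, 0 means the
            RMSE is as large as the observed variability."""
            rmse = np.sqrt(np.mean((model - obs) ** 2))
            return max(0.0, 1.0 - rmse / np.std(obs))

        def benchmark_score(results, weights):
            """Weighted combination of per-variable scores."""
            total = sum(weights.values())
            return sum(w * variable_score(*results[name]) / total
                       for name, w in weights.items())

        rng = np.random.default_rng(0)
        obs_gpp = rng.normal(5.0, 1.5, 120)      # hypothetical monthly GPP observations
        obs_et = rng.normal(2.0, 0.5, 120)       # hypothetical monthly ET observations
        results = {"GPP": (obs_gpp + rng.normal(0, 0.5, 120), obs_gpp),
                   "ET":  (obs_et + rng.normal(0, 0.2, 120), obs_et)}
        print(benchmark_score(results, {"GPP": 0.6, "ET": 0.4}))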

  17. A framework of benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-02-01

    Full Text Available Land models, which have been developed by the modeling community in the past two decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure and evaluate performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land models. The framework includes (1) targeted aspects of model performance to be evaluated; (2) a set of benchmarks as defined references to test model performance; (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies; and (4) model improvement. Component 4 may or may not be involved in a benchmark analysis but is an ultimate goal of general modeling research. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and the land-surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics across timescales in response to both weather and climate change. Benchmarks that are used to evaluate models generally consist of direct observations, data-model products, and data-derived patterns and relationships. Metrics of measuring mismatches between models and benchmarks may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance for future improvement. Iterations between model evaluation and improvement via benchmarking shall demonstrate progress of land modeling and help establish confidence in land models for their predictions of future states of ecosystems and climate.

  18. A framework for benchmarking land models

    Science.gov (United States)

    Luo, Y. Q.; Randerson, J. T.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, P.; Dalmonech, D.; Fisher, J. B.; Fisher, R.; Friedlingstein, P.; Hibbard, K.; Hoffman, F.; Huntzinger, D.; Jones, C. D.; Koven, C.; Lawrence, D.; Li, D. J.; Mahecha, M.; Niu, S. L.; Norby, R.; Piao, S. L.; Qi, X.; Peylin, P.; Prentice, I. C.; Riley, W.; Reichstein, M.; Schwalm, C.; Wang, Y. P.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-10-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models

  19. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Full Text Available Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties

  20. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  1. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Full Text Available Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how the project's systematic implementation led to success.

  2. R/S Analyses of Some Geochemical Indexes for Tianshuihai Lake Core in West Kunlun Mountain and Their Environmental Implications

    Institute of Scientific and Technical Information of China (English)

    周厚云; 李世杰; 等

    2000-01-01

    With the decrease of global temperature, glacial epochs came over the earth and the global climate has fluctuated over a great range since the beginning of the Quaternary. Paleoclimatologists in various countries have focused attention on the periodic characteristics and dynamics of climatic fluctuation over the past many years (Berger, 1977; Imbrie and Hays, 1984; Ding Zhongli et al., 1990; Yu Zhiwei et al., 1992; Liu Youmei et al., 1996). Although some workers have paid attention to the nonlinear characteristics of global Quaternary environmental evolution (Nicolis and Nicolis, 1984; Lu Houyuan et al., 1993), it is worthwhile to do this kind of work in some special areas of the world, for example the Qinghai-Tibet Plateau. Using R/S analysis, the authors calculated the Hurst exponents H of some geochemical proxies, including organic carbon, FeO, Fe2O3 and FeO/Fe2O3, from the Tianshuihai Lake core in the West Kunlun Mountains of the Qinghai-Tibet Plateau. The proxies satisfy the Hurst law, with H(org. carbon) = 0.735, H(Fe2O3) = 0.757, H(FeO) = 0.848 and H(FeO/Fe2O3) = 0.646. All the exponents are greater than 0.5, meaning that from 240 to 15 ka B.P. there were long-run dependencies (persistence) in the climatic and environmental evolution around the Tianshuihai Lake area. This is in accordance with the climate there from 240 to 15 ka B.P. (Yu Suhua et al., 1996). The paleoclimatic and paleoenvironmental evolution around the Tianshuihai Lake area shows persistence as well as fluctuation and is a combination of these two components. There are some differences between the four Hurst exponents, which probably resulted from the different intensities of persistence of the four proxies (organic carbon, FeO, Fe2O3 and FeO/Fe2O3), or from the change of the drainage system around the Tianshuihai Lake area from openness to closedness (Li Bingyuan et al., 1991; Sun Honglie, 1996; Shi Yafeng et al., 1998). The Qinghai-Tibet Plateau was the starter and sensor of the climatic and environmental variation of the surrounding areas (Yao Tandong et al
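
    The Hurst exponents in this record come from rescaled-range (R/S) analysis. The sketch below shows the standard estimation procedure; the synthetic white-noise series (H close to 0.5) merely stands in for the geochemical proxies, which are not tabulated in the record.

        import numpy as np

        def hurst_rs(series, window_sizes):
            """Estimate the Hurst exponent H by rescaled-range analysis:
            fit log(R/S) against log(n) over a range of window sizes n."""
            series = np.asarray(series, dtype=float)
            log_n, log_rs = [], []
            for n in window_sizes:
                rs_values = []
                for start in range(0, len(series) - n + 1, n):
                    window = series[start:start + n]
                    dev = np.cumsum(window - window.mean())
                    r = dev.max() - dev.min()   # range of cumulative deviations
                    s = window.std()            # standard deviation of the window
                    if s > 0:
                        rs_values.append(r / s)
                log_n.append(np.log(n))
                log_rs.append(np.log(np.mean(rs_values)))
            slope, _ = np.polyfit(log_n, log_rs, 1)  # slope is the Hurst exponent
            return slope

        x = np.random.default_rng(1).normal(size=4096)  # white noise, expect H ~ 0.5
        print(hurst_rs(x, [16, 32, 64, 128, 256, 512]))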

  3. PNNL Information Technology Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  4. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint US/Russian Progress Report for Fiscal Year 1997, Volume 4, part 4-ESADA Plutonium Program Critical Experiments: Single-Region Core Configurations

    Energy Technology Data Exchange (ETDEWEB)

    Akkurt, H.; Abdurrahman, N.M.

    1999-05-01

    The purpose of this study is to simulate and assess the findings from selected ESADA experiments. It is presented in the format prescribed by the Nuclear Energy Agency Nuclear Science Committee for material to be included in the International Handbook of Evaluated Criticality Safety Benchmark Experiments.

  5. ASBench: benchmarking sets for allosteric discovery.

    Science.gov (United States)

    Huang, Wenkang; Wang, Guanqiao; Shen, Qiancheng; Liu, Xinyi; Lu, Shaoyong; Geng, Lv; Huang, Zhimin; Zhang, Jian

    2015-08-01

    Allostery allows for the fine-tuning of protein function. Targeting allosteric sites is gaining increasing recognition as a novel strategy in drug design. The key challenge in the discovery of allosteric sites has strongly motivated the development of computational methods and thus high-quality, publicly accessible standard data have become indispensable. Here, we report benchmarking data for experimentally determined allosteric sites through a complex process, including a 'Core set' with 235 unique allosteric sites and a 'Core-Diversity set' with 147 structurally diverse allosteric sites. These benchmarking sets can be exploited to develop efficient computational methods to predict unknown allosteric sites in proteins and reveal unique allosteric ligand-protein interactions to guide allosteric drug design.

  6. Uncertainties of the KIKO3D-ATHLET calculations using the Kalinin-3 benchmark (Phase II) data

    Energy Technology Data Exchange (ETDEWEB)

    Panka, Istvan; Hegyi, Gyoergy; Maraczy, Csaba; Kereszturi, Andras [Hungarian Academy of Sciences, Centre for Energy Research, Budapest (Hungary). Reactor Analysis Dept.

    2016-09-15

    The best estimate simulation of three-dimensional phenomena in nuclear reactor cores requires the use of coupled neutron physics and thermal-hydraulics calculations. However, these analyses should be supplemented by a survey of the corresponding uncertainties. In this paper the uncertainties of the coupled KIKO3D-ATHLET calculations are presented for a VVER-1000 type core using the OECD NEA Kalinin-3 (Phase II) benchmark data, although only the neutronic uncertainties are considered, and further simplifications are applied and discussed. Additionally, this study has been performed in conjunction with the OECD NEA UAM benchmark as well. In the first part of the paper, the uncertainties of the effective multiplication factor, the assembly-wise radial power distribution, the axial power distribution, the rod worth, etc. are presented at steady state. After that, some uncertainties of the transient calculations are discussed for the transient considered, the switch-off of one Main Circulation Pump (MCP).

  7. Benchmarking Pthreads performance

    Energy Technology Data Exchange (ETDEWEB)

    May, J M; de Supinski, B R

    1999-04-27

    The importance of the performance of threads libraries is growing as clusters of shared memory machines become more popular. POSIX threads, or Pthreads, is an industry threads library standard. We have implemented the first Pthreads benchmark suite. In addition to measuring basic thread functions, such as thread creation, we apply the LogP model to standard Pthreads communication mechanisms. We present the results of our tests for several hardware platforms. These results demonstrate that the performance of existing Pthreads implementations varies widely; parts of nearly all of these implementations could be further optimized. Since hardware differences do not fully explain these performance variations, optimizations could improve the implementations. The report also discusses incorporating threads benchmarks into SKaMPI. SKaMPI is an MPI benchmark suite that provides a general framework for performance analysis [7]. SKaMPI does not exhaustively test the MPI standard. Instead, it
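
    The suite itself targets the C Pthreads API and is not reproduced here; the sketch below is a hypothetical Python analogue of its most basic measurement, timing thread creation, using only the standard library.

        import threading
        import time

        def mean_thread_creation_time(iterations=1000):
            """Average time to create and join a trivial thread, analogous to
            a pthread_create/pthread_join microbenchmark."""
            start = time.perf_counter()
            for _ in range(iterations):
                t = threading.Thread(target=lambda: None)
                t.start()
                t.join()
            return (time.perf_counter() - start) / iterations

        print(f"mean create+join time: {mean_thread_creation_time() * 1e6:.1f} us")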

  8. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions...... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...

  9. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. .It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l

  10. Genome Sequence of Azospirillum brasilense CBG497 and Comparative Analyses of Azospirillum Core and Accessory Genomes provide Insight into Niche Adaptation

    Directory of Open Access Journals (Sweden)

    Victor González

    2012-09-01

    Full Text Available Bacteria of the genus Azospirillum colonize roots of important cereals and grasses, and promote plant growth by several mechanisms, notably phytohormone synthesis. The genomes of several Azospirillum strains belonging to different species, isolated from various host plants and locations, were recently sequenced and published. In this study, an additional genome of an A. brasilense strain, isolated from maize grown on an alkaline soil in the northeast of Mexico, strain CBG497, was obtained. Comparative genomic analyses were performed on this new genome and three other genomes (A. brasilense Sp245, A. lipoferum 4B and Azospirillum sp. B510). The Azospirillum core genome was established and consists of 2,328 proteins, representing between 30% and 38% of the total encoded proteins within a genome. It is mainly chromosomally-encoded and contains 74% of genes of ancestral origin shared with some aquatic relatives. The non-ancestral part of the core genome is enriched in genes involved in signal transduction, in transport and in metabolism of carbohydrates and amino-acids, and in surface properties features linked to adaptation in fluctuating environments, such as soil and rhizosphere. Many genes involved in colonization of plant roots, plant-growth promotion (such as those involved in phytohormone biosynthesis), and properties involved in rhizosphere adaptation (such as catabolism of phenolic compounds, uptake of iron) are restricted to a particular strain and/or species, strongly suggesting niche-specific adaptation.

  11. Genome Sequence of Azospirillum brasilense CBG497 and Comparative Analyses of Azospirillum Core and Accessory Genomes provide Insight into Niche Adaptation

    Science.gov (United States)

    Wisniewski-Dyé, Florence; Lozano, Luis; Acosta-Cruz, Erika; Borland, Stéphanie; Drogue, Benoît; Prigent-Combaret, Claire; Rouy, Zoé; Barbe, Valérie; Mendoza Herrera, Alberto; González, Victor; Mavingui, Patrick

    2012-01-01

    Bacteria of the genus Azospirillum colonize roots of important cereals and grasses, and promote plant growth by several mechanisms, notably phytohormone synthesis. The genomes of several Azospirillum strains belonging to different species, isolated from various host plants and locations, were recently sequenced and published. In this study, an additional genome of an A. brasilense strain, isolated from maize grown on an alkaline soil in the northeast of Mexico, strain CBG497, was obtained. Comparative genomic analyses were performed on this new genome and three other genomes (A. brasilense Sp245, A. lipoferum 4B and Azospirillum sp. B510). The Azospirillum core genome was established and consists of 2,328 proteins, representing between 30% and 38% of the total encoded proteins within a genome. It is mainly chromosomally-encoded and contains 74% of genes of ancestral origin shared with some aquatic relatives. The non-ancestral part of the core genome is enriched in genes involved in signal transduction, in transport and in metabolism of carbohydrates and amino-acids, and in surface properties features linked to adaptation in fluctuating environments, such as soil and rhizosphere. Many genes involved in colonization of plant roots, plant-growth promotion (such as those involved in phytohormone biosynthesis), and properties involved in rhizosphere adaptation (such as catabolism of phenolic compounds, uptake of iron) are restricted to a particular strain and/or species, strongly suggesting niche-specific adaptation. PMID:24705077

  12. HPCS HPCchallenge Benchmark Suite

    Science.gov (United States)

    2007-11-02

    Measured HPCchallenge Benchmark performance on various HPC architectures — from Cray X1s to Beowulf clusters — is reported in the presentation and paper, with updated results available at http://icl.cs.utk.edu/hpcc/hpcc_results.cgi. Even a small percentage of random

  13. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...

  14. Benchmarks: WICHE Region 2012

    Science.gov (United States)

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  15. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  16. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means the revealed performance (how well the firm performs in the actual market environment) given the basic characteristics of the firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality, and work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for management to continuously improve their firm's efficiency and effectiveness, and the need for managers to know the success factors and competitiveness determinants, consequently determine what performance measures are most critical in determining their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent due to operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other types of profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.
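
    As a toy illustration of ratio-based benchmarking of the kind the paper studies, the sketch below ranks firms on return on assets against the sample median; the firms and figures are hypothetical.

        # Rank firms on return on assets (ROA) against the sample median.
        firms = {
            "FirmA": {"net_income": 12.0, "total_assets": 150.0},
            "FirmB": {"net_income": 8.0, "total_assets": 60.0},
            "FirmC": {"net_income": 3.0, "total_assets": 90.0},
        }

        roa = {name: f["net_income"] / f["total_assets"] for name, f in firms.items()}
        benchmark = sorted(roa.values())[len(roa) // 2]  # median ROA as the benchmark

        for name, value in sorted(roa.items(), key=lambda kv: -kv[1]):
            flag = "above" if value > benchmark else "at/below"
            print(f"{name}: ROA = {value:.1%} ({flag} benchmark of {benchmark:.1%})")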

  17. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic...... controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based...... and professional performance but only if prior professional performance was low. Supplemental analyses support the robustness of our results. Findings indicate conditions under which bureaucratic benchmarking information may affect professional performance and advance research on professional control and social...

  18. [Results of the evaluation of German benchmarking networks funded by the Ministry of Health].

    Science.gov (United States)

    de Cruppé, Werner; Blumenstock, Gunnar; Fischer, Imma; Selbmann, Hans-Konrad; Geraedts, Max

    2011-01-01

    Nine out of ten demonstration projects on clinical benchmarking funded by the German Ministry of Health were evaluated. Project reports and interviews were uniformly analysed using a list of criteria and a scheme to categorize the realized benchmarking approach. At the end of the funding period four benchmarking networks had implemented all benchmarking steps, and six were continued after funding had expired. The improvement of outcome quality cannot yet be assessed. Factors promoting the introduction of benchmarking networks with regard to organisational and process aspects of benchmarking implementation were derived.

  19. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then give a more detailed account of what benchmarking is, based on four different applications of benchmarking. The regulation of utility companies is treated, after which

  20. Reactor Physics Methods and Preconceptual Core Design Analyses for Conversion of the Advanced Test Reactor to Low-Enriched Uranium Fuel Annual Report for Fiscal Year 2012

    Energy Technology Data Exchange (ETDEWEB)

    David W. Nigg; Sean R. Morrell

    2012-09-01

    Under the current long-term DOE policy and planning scenario, both the ATR and the ATRC will be reconfigured at an appropriate time within the next several years to operate with low-enriched uranium (LEU) fuel. This will be accomplished under the auspices of the Reduced Enrichment Research and Test Reactor (RERTR) Program, administered by the DOE National Nuclear Security Administration (NNSA). At a minimum, the internal design and composition of the fuel element plates and support structure will change, to accommodate the need for low enrichment in a manner that maintains total core excess reactivity at a suitable level for anticipated operational needs throughout each cycle while respecting all control and shutdown margin requirements and power distribution limits. The complete engineering design and optimization of LEU cores for the ATR and the ATRC will require significant multi-year efforts in the areas of fuel design, development and testing, as well as a complete re-analysis of the relevant reactor physics parameters for a core composed of LEU fuel, with possible control system modifications. Ultimately, revalidation of the computational physics parameters per applicable national and international standards against data from experimental measurements for prototypes of the new ATR and ATRC core designs will also be required for Safety Analysis Report (SAR) changes to support routine operations with LEU. This report is focused on reactor physics analyses conducted during Fiscal Year (FY) 2012 to support the initial development of several potential preconceptual fuel element designs that are suitable candidates for further study and refinement during FY-2013 and beyond. In a separate, but related, effort in the general area of computational support for ATR operations, the Idaho National Laboratory (INL) is conducting a focused multiyear effort to introduce modern high-fidelity computational reactor physics software and associated validation protocols to replace

  1. Radiography benchmark 2014

    Energy Technology Data Exchange (ETDEWEB)

    Jaenisch, G.-R., E-mail: Gerd-Ruediger.Jaenisch@bam.de; Deresch, A.; Bellon, C. [Federal Institute for Materials Research and Testing, Unter den Eichen 87, 12205 Berlin (Germany); Schumm, A.; Lucet-Sanchez, F.; Guerin, P. [EDF R and D, 1 avenue du Général de Gaulle, 92141 Clamart (France)

    2015-03-31

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  2. Improving accuracy and precision of ice core δD(CH4) analyses using methane pre-pyrolysis and hydrogen post-pyrolysis trapping and subsequent chromatographic separation

    Directory of Open Access Journals (Sweden)

    M. Bock

    2014-07-01

    Full Text Available Firn and polar ice cores offer the only direct palaeoatmospheric archive. Analyses of past greenhouse gas concentrations and their isotopic compositions in air bubbles in the ice can help to constrain changes in global biogeochemical cycles in the past. For the analysis of the hydrogen isotopic composition of methane (δD(CH4) or δ2H(CH4)), 0.5 to 1.5 kg of ice was hitherto used. Here we present a method to improve precision and reduce the sample amount for δD(CH4) measurements in (ice core) air. Pre-concentrated methane is focused in front of a high temperature oven (pre-pyrolysis trapping), and molecular hydrogen formed by pyrolysis is trapped afterwards (post-pyrolysis trapping), both on a carbon-PLOT capillary at −196 °C. Argon, oxygen, nitrogen, carbon monoxide, unpyrolysed methane and krypton are trapped together with H2 and must be separated using a second short, cooled chromatographic column to ensure accurate results. Pre- and post-pyrolysis trapping largely removes the isotopic fractionation induced during chromatographic separation and results in a narrow peak in the mass spectrometer. Air standards can be measured with a precision better than 1‰. For polar ice samples from glacial periods, we estimate a precision of 2.3‰ for 350 g of ice (or roughly 30 mL of air at standard temperature and pressure (STP)) with 350 ppb of methane. This corresponds to recent tropospheric air samples (about 1900 ppb CH4) of about 6 mL (STP), or about 500 pmol of pure CH4.
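
    The quoted sample amounts are easy to verify with the ideal-gas molar volume; the only assumption in the sketch below is that the extracted air behaves ideally.

        # Sanity check of the quoted CH4 amounts (ideal gas assumed).
        V_MOLAR_ML = 22414.0  # molar volume at STP, mL/mol

        def ch4_pmol(air_ml_stp, ch4_ppb):
            """Picomoles of CH4 in a given volume of air at STP."""
            return air_ml_stp / V_MOLAR_ML * ch4_ppb * 1e-9 * 1e12

        print(ch4_pmol(30.0, 350.0))   # glacial ice sample: ~470 pmol
        print(ch4_pmol(6.0, 1900.0))   # modern air sample: ~510 pmol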

  3. Benchmarking of LSTM Networks

    OpenAIRE

    Breuel, Thomas M.

    2015-01-01

    LSTM (Long Short-Term Memory) recurrent neural networks have been highly successful in a number of application areas. This technical report describes the use of the MNIST and UW3 databases for benchmarking LSTM networks and explores the effect of different architectural and hyperparameter choices on performance. Significant findings include: (1) LSTM performance depends smoothly on learning rates, (2) batching and momentum has no significant effect on performance, (3) softmax training outperf...

  4. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal.

  5. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques. In this paper, we review the modern foundations for frontier-based regulation and we discuss its actual use in several jurisdictions.
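
    For readers unfamiliar with the method, the sketch below solves the textbook input-oriented CCR (constant returns to scale) DEA model with scipy; the distribution system operators and their cost/output figures are hypothetical, and real regulatory models add cost drivers, environmental variables, and outlier screens.

        import numpy as np
        from scipy.optimize import linprog

        def dea_ccr_efficiency(X, Y, o):
            """Input-oriented CCR efficiency of unit o.
            X: (n_units, n_inputs), Y: (n_units, n_outputs)."""
            n, m = X.shape
            s = Y.shape[1]
            c = np.zeros(n + 1)
            c[0] = 1.0  # variables: [theta, lambda_1..lambda_n]; minimize theta
            A_ub, b_ub = [], []
            for i in range(m):   # inputs: sum_j lambda_j * x_ji <= theta * x_oi
                A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
                b_ub.append(0.0)
            for r in range(s):   # outputs: sum_j lambda_j * y_jr >= y_or
                A_ub.append(np.concatenate(([0.0], -Y[:, r])))
                b_ub.append(-Y[o, r])
            res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                          bounds=[(0, None)] * (n + 1), method="highs")
            return res.x[0]

        X = np.array([[10.0], [8.0], [12.0], [9.0]])      # e.g., total cost
        Y = np.array([[100.0], [90.0], [96.0], [108.0]])  # e.g., energy delivered
        for o in range(len(X)):
            print(f"operator {o}: efficiency = {dea_ccr_efficiency(X, Y, o):.3f}")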

  6. 2001 benchmarking guide.

    Science.gov (United States)

    Hoppszallern, S

    2001-01-01

    Our fifth annual guide to benchmarking under managed care presents data that is a study in market dynamics and adaptation. New this year are financial indicators on HMOs exiting the market and those remaining. Hospital financial ratios and details on department performance are included. The physician group practice numbers show why physicians are scrutinizing capitated payments. Overall, hospitals in markets with high managed care penetration are more successful in managing labor costs and show productivity gains in imaging services, physical therapy and materials management.

  7. Benchmarking Query Execution Robustness

    Science.gov (United States)

    Wiener, Janet L.; Kuno, Harumi; Graefe, Goetz

    Benchmarks that focus on running queries on a well-tuned database system ignore a long-standing problem: adverse runtime conditions can cause database system performance to vary widely and unexpectedly. When the query execution engine does not exhibit resilience to these adverse conditions, addressing the resultant performance problems can contribute significantly to the total cost of ownership for a database system in over-provisioning, lost efficiency, and increased human administrative costs. For example, focused human effort may be needed to manually invoke workload management actions or fine-tune the optimization of specific queries.

  8. How to Advance TPC Benchmarks with Dependability Aspects

    Science.gov (United States)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact both on the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring their performances. While TPC-E measures the recovery time of some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that nowadays systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  9. Benchmarking concentrating photovoltaic systems

    Science.gov (United States)

    Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo

    2010-08-01

    Integral to photovoltaics is the need to provide improved economic viability. To achieve this goal, photovoltaic technology has to be able to harness more light at less cost. A large variety of concentrating photovoltaic concepts has provided cause for pursuit. To obtain a detailed profitability analysis, a flexible evaluation is crucial for benchmarking the cost-performance of this variety of concentrating photovoltaic concepts. To save time and capital, a way to estimate the cost-performance of a complete solar energy system is to use computer-aided modeling. In this work a benchmark tool is introduced based on a modular programming concept. The overall implementation is done in MATLAB, whereas the Advanced Systems Analysis Program (ASAP) is used for ray tracing calculations. This allows for a flexible and extendable structuring of all important modules, namely an advanced source modeling including time and local dependence, and an advanced optical system analysis of various optical designs, to obtain an evaluation of the figure of merit. An important figure of merit, the energy yield of a given photovoltaic system at a geographical position over a specific period, can be calculated.
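
    A toy version of that figure-of-merit calculation is sketched below; the irradiance series and efficiency figures are hypothetical placeholders for the ray-traced values the benchmark tool would produce.

        import numpy as np

        def energy_yield_kwh(dni_w_m2, optical_eff, cell_eff, aperture_m2, dt_h=1.0):
            """Energy yield over a period: direct normal irradiance times
            optical and cell efficiencies times aperture area, summed over time."""
            power_w = np.asarray(dni_w_m2) * optical_eff * cell_eff * aperture_m2
            return power_w.sum() * dt_h / 1000.0

        # Hypothetical hourly DNI profile for one clear day (W/m^2):
        dni = [0, 0, 0, 0, 0, 100, 300, 500, 700, 800, 850, 870,
               860, 820, 740, 600, 420, 220, 60, 0, 0, 0, 0, 0]
        print(energy_yield_kwh(dni, optical_eff=0.85, cell_eff=0.38, aperture_m2=1.0))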

  10. Entropy-based benchmarking methods

    OpenAIRE

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth preservation method of Causey and Trager (1981) may violate this principle, while its requirements are explicitly taken into account in the proposed entropy-based benchmarking methods. Our illustrati...
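
    For readers unfamiliar with the methods being critiqued, the sketch below implements a Denton-type adjustment (the additive first-difference variant) that forces a quarterly indicator to sum to annual benchmarks while disturbing its movement as little as possible; the data are hypothetical, and the entropy-based methods the paper proposes are not reproduced here.

        import numpy as np

        def denton_afd(indicator, benchmarks, agg=4):
            """Benchmark a high-frequency indicator to low-frequency totals:
            minimize period-to-period changes in (x - indicator), subject to
            each block of `agg` periods summing to its benchmark."""
            s = np.asarray(indicator, dtype=float)
            b = np.asarray(benchmarks, dtype=float)
            T, M = len(s), len(b)
            D = -np.eye(T)[:-1] + np.eye(T, k=1)[:-1]   # first-difference operator
            A = np.kron(np.eye(M), np.ones(agg))        # aggregation constraints
            Q = 2.0 * D.T @ D                           # KKT system for the
            K = np.block([[Q, A.T],                     # equality-constrained
                          [A, np.zeros((M, M))]])       # least-squares problem
            rhs = np.concatenate([Q @ s, b])
            return np.linalg.solve(K, rhs)[:T]

        quarterly = [98, 100, 102, 104, 103, 105, 107, 109]  # hypothetical indicator
        annual = [420, 440]                                  # hypothetical benchmarks
        print(denton_afd(quarterly, annual).round(2))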

  11. Yucatan Subsurface Stratigraphy from Geophysical Data, Well Logs and Core Analyses in the Chicxulub Impact Crater and Implications for Target Heterogeneities

    Science.gov (United States)

    Canales, I.; Fucugauchi, J. U.; Perez-Cruz, L. L.; Camargo, A. Z.; Perez-Cruz, G.

    2011-12-01

    Asymmetries in the geophysical signature of Chicxulub crater are being evaluated to investigate on effects of impact angle and trajectory and pre-existing target structural controls for final crater form. Early studies interpreted asymmetries in the gravity anomaly in the offshore sector to propose oblique either northwest- and northeast-directed trajectories. An oblique impact was correlated to the global ejecta distribution and enhanced environmental disturbance. In contrast, recent studies using marine seismic data and computer modeling have shown that crater asymmetries correlate with pre-existing undulations of the Cretaceous continental shelf, suggesting a structural control of target heterogeneities. Documentation of Yucatan subsurface stratigraphy has been limited by lack of outcrops of pre-Paleogene rocks. The extensive cover of platform carbonate rocks has not been affected by faulting or deformation and with no rivers cutting the carbonates, information comes mainly from the drilling programs and geophysical surveys. Here we revisit the subsurface stratigraphy in the crater area from the well log data and cores retrieved in the drilling projects and marine seismic reflection profiles. Other source of information being exploited comes from the impact breccias, which contain a sampling of disrupted target sequences, including crystalline basement and Mesozoic sediments. We analyze gravity and seismic data from the various exploration surveys, including multiple Pemex profiles in the platform and the Chicxulub experiments. Analyses of well log data and seismic profiles identify contacts for Lower Cretaceous, Cretaceous/Jurassic and K/Pg boundaries. Results show that the Cretaceous continental shelf was shallower on the south and southwest than on the east, with emerged areas in Quintana Roo and Belize. Mesozoic and upper Paleozoic sediments show variable thickness, possibly reflecting the crystalline basement regional structure. Paleozoic and Precambrian

  12. HPC Benchmark Suite NMx Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  13. Benchmarking and accounting for the (private) cloud

    Science.gov (United States)

    Belleman, J.; Schwickerath, U.

    2015-12-01

    During the past two years large parts of the CERN batch farm have been moved to virtual machines running on the CERN internal cloud. During this process a large fraction of the resources, which had previously been used as physical batch worker nodes, were converted into hypervisors. Due to the large spread of the per-core performance in the farm, caused by its heterogeneous nature, it is necessary to have a good knowledge of the performance of the virtual machines. This information is used both for scheduling in the batch system and for accounting. While in the previous setup worker nodes were classified and benchmarked based on the purchase order number, for virtual batch worker nodes this is no longer possible; the information is now either hidden or hard to retrieve. Therefore we developed a new scheme to classify worker nodes according to their performance. The new scheme is flexible enough to be usable both for virtual and physical machines in the batch farm. With the new classification it is possible to have an estimation of the performance of worker nodes also in a very dynamic farm with worker nodes coming and going at a high rate, without the need to benchmark each new node again. An extension to public cloud resources is possible if all conditions under which the benchmark numbers have been obtained are fulfilled.
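
    A simplified sketch of such a classification scheme: benchmark one reference machine per hardware class, then assign each new (virtual or physical) node the per-core score of its class instead of re-benchmarking it. The class names and scores below are hypothetical.

        # Per-core benchmark scores measured once per hardware class (hypothetical).
        per_core_score = {
            "Xeon-E5-2630v3": 10.1,
            "Xeon-E5-2650v2": 9.2,
            "Opteron-6276": 6.8,
        }

        def node_capacity(cpu_model, n_cores):
            """Scheduling/accounting capacity of a node: inherited per-core
            score times core count, with no new benchmark run."""
            return per_core_score[cpu_model] * n_cores

        print(node_capacity("Xeon-E5-2630v3", 16))  # 161.6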

  14. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  15. Benchmarking monthly homogenization algorithms

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2011-08-01

    Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study, as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics, including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
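
    Metric (i), the centered root mean square error of a homogenized series against the true homogeneous series, is compact to state in code. The sketch below uses a synthetic series with one residual break; the benchmark dataset itself is not reproduced here.

        import numpy as np

        def centered_rmse(homogenized, truth):
            """Centered RMSE: the mean bias is removed before comparing, so only
            errors in the shape of the series are penalized."""
            h = homogenized - np.mean(homogenized)
            t = truth - np.mean(truth)
            return np.sqrt(np.mean((h - t) ** 2))

        rng = np.random.default_rng(42)
        truth = rng.normal(0.0, 1.0, 600)   # 50 years of monthly anomalies
        contribution = truth.copy()
        contribution[300:] += 0.5           # one uncorrected break of 0.5 units
        print(centered_rmse(contribution, truth))  # 0.25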

  16. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Science.gov (United States)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton Krylov method. A physics based preconditioning technique which can be adjusted to target varying physics is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally we show a test of hydrostatic equilibrium, in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to both reproduce behaviour from established and widely-used codes as well as results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.

  17. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  18. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  19. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  20. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth pre
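
    As a point of reference (this is the classic additive first-difference Denton variant, not the entropy-based objective the thesis itself proposes), benchmarking can be posed as a constrained least-squares problem over the adjusted series $x_t$, given the original high-frequency series $z_t$ and the low-frequency benchmarks $b_m$:

        \min_{x_1,\dots,x_T} \sum_{t=2}^{T} \bigl[ (x_t - z_t) - (x_{t-1} - z_{t-1}) \bigr]^2
        \quad \text{subject to} \quad \sum_{t \in \text{period } m} x_t = b_m \quad \forall m .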

  1. Common Core: Victory Is Yours!

    Science.gov (United States)

    Fink, Jennifer L. W.

    2012-01-01

    In this article, the author discusses how to implement the Common Core State Standards in the classroom. She presents examples and activities that will leave teachers feeling "rosy" about tackling the new standards. She breaks down important benchmarks and shows how other teachers are doing the Core--and loving it!

  2. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts...... to support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One...... way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent...

  3. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    is generally not advised. Several other ways in which benchmarking and policy can support one another are identified in the analysis. This leads to a range of recommended initiatives to exploit the benefits of benchmarking in transport while avoiding some of the lurking pitfalls and dead ends......Order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable...... tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly ‘sustainable transport...

  4. Assessment of Static Delamination Propagation Capabilities in Commercial Finite Element Codes Using Benchmark Analysis

    Science.gov (United States)

    Orifici, Adrian C.; Krueger, Ronald

    2010-01-01

    With capabilities for simulating delamination growth in composite materials becoming available, the need for benchmarking and assessing these capabilities is critical. In this study, benchmark analyses were performed to assess the delamination propagation simulation capabilities of the VCCT implementations in Marc(TM) and MD Nastran(TM). Benchmark delamination growth results for Double Cantilever Beam, Single Leg Bending and End Notched Flexure specimens were generated using a numerical approach. This numerical approach was developed previously, and involves comparing results from a series of analyses at different delamination lengths to a single analysis with automatic crack propagation. Specimens were analyzed with three-dimensional and two-dimensional models, and compared with previous analyses using Abaqus. The results demonstrated that the VCCT implementation in Marc(TM) and MD Nastran(TM) was capable of accurately replicating the benchmark delamination growth results and that the use of the numerical benchmarks offers advantages over benchmarking using experimental and analytical results.

  5. WIPP Benchmark calculations with the large strain SPECTROM codes

    Energy Technology Data Exchange (ETDEWEB)

    Callahan, G.D.; DeVries, K.L. [RE/SPEC, Inc., Rapid City, SD (United States)

    1995-08-01

    This report provides calculational results from the updated Lagrangian structural finite-element programs SPECTROM-32 and SPECTROM-333 for the purpose of qualifying these codes to perform analyses of structural situations in the Waste Isolation Pilot Plant (WIPP). Results are presented for the Second WIPP Benchmark (Benchmark II) problems and for a simplified heated room problem used in a parallel design calculation study. The Benchmark II problems consist of an isothermal room problem and a heated room problem. The stratigraphy involves 27 distinct geologic layers, including ten clay seams, of which four are modeled as frictionless sliding interfaces. The analyses of the Benchmark II problems consider a 10-year simulation period. The evaluation of nine structural codes used in the Benchmark II problems shows that inclusion of finite-strain effects is not as significant as observed for the simplified heated room problem, and a variety of finite-strain and small-strain formulations produced similar results. The simplified heated room problem provides stratigraphic complexity equivalent to the Benchmark II problems but neglects sliding along the clay seams. It does, however, provide a calculational check case in which the small-strain formulation produced room closures about 20 percent greater than those obtained using finite-strain formulations. A discussion is given of each of the solved problems, and the computational results are compared with available published results. In general, the results of the two SPECTROM large-strain codes compare favorably with results from other codes used to solve the problems.

  6. Benchmarking of energy time series

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, M.A.

    1990-04-01

    Benchmarking consists of the adjustment of time series data from one source in order to achieve agreement with similar data from a second source. The data from the latter source are referred to as the benchmark(s), and often differ in that they are observed at a lower frequency, represent a higher level of temporal aggregation, and/or are considered to be of greater accuracy. This report provides an extensive survey of benchmarking procedures which have appeared in the statistical literature, and reviews specific benchmarking procedures currently used by the Energy Information Administration (EIA). The literature survey includes a technical summary of the major benchmarking methods and their statistical properties. Factors influencing the choice and application of particular techniques are described and the impact of benchmark accuracy is discussed. EIA applications and procedures are reviewed and evaluated for residential natural gas deliveries series and coal production series. It is found that the current method of adjusting the natural gas series is consistent with the behavior of the series and the methods used in obtaining the initial data. As a result, no change is recommended. For the coal production series, a staged approach based on a first differencing technique is recommended over the current procedure. A comparison of the adjustments produced by the two methods is made for the 1987 Indiana coal production series. 32 refs., 5 figs., 1 tab.
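
    A hedged toy illustration of the two simplest adjustment styles discussed in such surveys (not EIA's documented procedures): pro-rata scaling, and an even spread of the discrepancy, which leaves every first difference of the monthly series unchanged:

        import numpy as np

        def prorate(monthly, annual_benchmark):
            # scale the series so it sums to the benchmark total
            return monthly * (annual_benchmark / monthly.sum())

        def first_difference_adjust(monthly, annual_benchmark):
            # spread the discrepancy evenly; month-to-month differences are
            # preserved exactly because a constant is added to every month
            return monthly + (annual_benchmark - monthly.sum()) / len(monthly)

        monthly = np.array([80., 75., 70., 60., 50., 45.,
                            44., 48., 55., 65., 72., 78.])
        print(prorate(monthly, 760.0).sum())                  # 760.0
        print(first_difference_adjust(monthly, 760.0).sum())  # 760.0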

  7. Benchmarking in academic pharmacy departments.

    Science.gov (United States)

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    Benchmarking in academic pharmacy, and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.

  8. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption. [Translated from the Dutch original:] Greenpeace Netherlands asked CE Delft to design a sustainability benchmark for transport biofuels and to score the various biofuels against it. For comparison, electric driving, hydrogen driving, and driving on petrol or diesel were also included. Research and growing insight increasingly show that transport fuels based on biomass sometimes cause just as many or even more greenhouse gas emissions than fossil fuels such as petrol and diesel. For Greenpeace Netherlands, CE Delft has summarized the current insights into the sustainability of fossil fuels, biofuels, and electric driving. The fuels were assessed against three sustainability criteria, with greenhouse gas emissions weighing heaviest: (1) greenhouse gas emissions; (2) land use; and (3) nutrient use.

  9. Correlational effect size benchmarks.

    Science.gov (United States)

    Bosco, Frank A; Aguinis, Herman; Singh, Kulraj; Field, James G; Pierce, Charles A

    2015-03-01

    Effect size information is essential for the scientific enterprise and plays an increasingly central role in the scientific process. We extracted 147,328 correlations and developed a hierarchical taxonomy of variables reported in Journal of Applied Psychology and Personnel Psychology from 1980 to 2010 to produce empirical effect size benchmarks at the omnibus level, for 20 common research domains, and for an even finer grained level of generality. Results indicate that the usual interpretation and classification of effect sizes as small, medium, and large bear almost no resemblance to findings in the field, because distributions of effect sizes exhibit tertile partitions at values approximately one-half to one-third those intuited by Cohen (1988). Our results offer information that can be used for research planning and design purposes, such as producing better informed non-nil hypotheses and estimating statistical power and planning sample size accordingly. We also offer information useful for understanding the relative importance of the effect sizes found in a particular study in relationship to others and which research domains have advanced more or less, given that larger effect sizes indicate a better understanding of a phenomenon. Also, our study offers information about research domains for which the investigation of moderating effects may be more fruitful and provide information that is likely to facilitate the implementation of Bayesian analysis. Finally, our study offers information that practitioners can use to evaluate the relative effectiveness of various types of interventions. PsycINFO Database Record (c) 2015 APA, all rights reserved.
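
    A sketch of the core computation on hypothetical data (the paper's corpus of 147,328 correlations is not reproduced here): empirical tertile boundaries of observed correlation magnitudes take the place of Cohen's conventional small/medium/large cut-offs:

        import numpy as np

        # stand-in for a corpus of extracted |r| values
        rng = np.random.default_rng(0)
        rs = np.abs(rng.normal(loc=0.16, scale=0.12, size=100_000)).clip(0.0, 1.0)

        # tertile cut points: empirical small/medium and medium/large boundaries
        small_medium, medium_large = np.percentile(rs, [100 / 3, 200 / 3])
        print(f"empirical boundaries: {small_medium:.2f}, {medium_large:.2f}")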

  10. Benchmarking in water project analysis

    Science.gov (United States)

    Griffin, Ronald C.

    2008-11-01

    The with/without principle of cost-benefit analysis is examined for the possible bias that it brings to water resource planning. Theory and examples for this question are established. Because benchmarking against the demonstrably low without-project hurdle can detract from economic welfare and can fail to promote efficient policy, improvement opportunities are investigated. In lieu of the traditional, without-project benchmark, a second-best-based "difference-making benchmark" is proposed. The project authorizations and modified review processes instituted by the U.S. Water Resources Development Act of 2007 may provide for renewed interest in these findings.

  11. Advances in methods of commercial FBR core characteristics analyses. Investigations of a treatment of the double-heterogeneity and a method to calculate homogenized control rod cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Sugino, Kazuteru [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center; Iwai, Takehiko

    1998-07-01

    A standard data base for FBR core nuclear design is under development in order to improve the accuracy of FBR design calculations. As part of this development, we investigated an improved treatment of the double-heterogeneity and a method to calculate homogenized control rod cross sections in a commercial reactor geometry, for the betterment of the analytical accuracy of commercial FBR core characteristics. As an improvement in the treatment of the double-heterogeneity, we derived a new method (the direct method) and compared both this and the conventional method with continuous-energy Monte-Carlo calculations. In addition, we investigated the applicability of the reaction rate ratio preservation method as an advanced method to calculate homogenized control rod cross sections. The present studies gave the following information: (1) An improved treatment of the double-heterogeneity: for criticality, the conventional method showed good agreement with the Monte-Carlo result within one standard deviation; the direct method was consistent with the conventional one. Preliminary evaluation of effects on core characteristics other than criticality showed that the effect of the double-heterogeneity on sodium void reactivity (coolant reactivity) was large. (2) An advanced method to calculate homogenized control rod cross sections: the control rod worths given by the reaction rate ratio preservation method agreed with those produced by calculations with the control rod heterogeneity included in the core geometry; in the Monju control rod worth analysis, the present method overestimated control rod worths by 1 to 2% compared with the conventional method, but these differences were caused by the more accurate model in the present method, and it is considered that this method is more reliable than the conventional one. These two methods investigated in this study can be directly applied to core characteristics other than criticality or control rod worth. Thus it is concluded that these methods will

  12. BENCHMARKING ON-LINE SERVICES INDUSTRIES

    Institute of Scientific and Technical Information of China (English)

    John HAMILTON

    2006-01-01

    The Web Quality Analyser (WQA) is a new benchmarking tool for industry. It has been extensively tested across services industries. Forty-five critical success features are presented as measures that capture the user's perception of services industry websites. This tool differs from previous tools in that it captures the information technology (IT) related driver sectors of website performance, along with the marketing-services related driver sectors. These driver sectors capture relevant structure, function and performance components. An 'on-off' switch measurement approach determines each component. Relevant component measures scale into a relative presence of the applicable feature, with a feature block delivering one of the sector drivers. Although it houses both measurable and a few subjective components, the WQA offers a proven and useful means to compare relevant websites. The WQA defines website strengths and weaknesses, thereby allowing for corrections to the website structure of the specific business. WQA benchmarking against services-related business competitors delivers a position on the WQA index, facilitates specific website driver rating comparisons, and demonstrates where key competitive advantage may reside. This paper reports on the marketing-services driver sectors of this new benchmarking WQA tool.

  13. Water Level Superseded Benchmark Sheets

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) tidal benchmarks which have been superseded by new markers or locations....

  14. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to p...... already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity. © IWA Publishing 2013....... and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work...

  15. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    Order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable...... tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly ‘sustainable transport......’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly policies are not directly comparable across space and context. For these reasons attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which...

  16. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking...... the conditions upon which the market mechanism is performing within organizations. This paper aims to contribute to research by providing more insight to the conditions for the use of external benchmarking as an element in performance management in organizations. Our study explores a particular type of external...... towards the conditions for the use of the external benchmarks we provide more insights to some of the issues and challenges that are related to using this mechanism for performance management and advance competitiveness in organizations....

  17. Using postgraduate students' evaluations of research experience to benchmark departments and faculties: issues and challenges.

    Science.gov (United States)

    Ginns, Paul; Marsh, Herbert W; Behnia, Masud; Cheng, Jacqueline H S; Scalas, L Francesca

    2009-09-01

    The introduction of the Australian Research Training Scheme has been a strong reason for assuring the quality of the research higher degree (RHD) experience; if students experience poor supervision, an unsupportive climate, and inadequate infrastructure, prior research suggests RHD students will be less likely to complete their degree, with negative consequences for the student, the university, and society at large. The present study examines the psychometric properties of a survey instrument, the Student Research Experience Questionnaire (SREQ), for measuring the RHD experience of currently enrolled students. The core scales of the SREQ focus on student experiences of Supervision; Infrastructure; Intellectual and Social Climate; and Generic Skills Development. Participants were 2,213 postgraduate research students of a large, research-intensive Australian university. Preliminary factor analyses conducted at the student level supported the a priori four factors that the SREQ was designed to measure. However, multi-level analyses indicated that there was almost no differentiation between faculties, or between departments nested within faculties, suggesting that the SREQ responses are not appropriate for benchmarking faculties or departments. Consistent with earlier research based on comparisons across universities, the SREQ is shown to be almost completely unreliable for benchmarking faculties or departments within a university.

  18. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advances. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards...... towards the conditions for the use of the external benchmarks we provide more insights to some of the issues and challenges that are related to using this mechanism for performance management and advance competitiveness in organizations....

  19. Benchmarking East Tennessee's economic capacity

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-04-20

    This presentation is comprised of viewgraphs delineating major economic factors operating in 15 counties in East Tennessee. The purpose of the information presented is to provide a benchmark analysis of economic conditions for use in guiding economic growth in the region. The emphasis of the presentation is economic infrastructure, which is classified into six categories: human resources, technology, financial resources, physical infrastructure, quality of life, and tax and regulation. Data for analysis of key indicators in each of the categories are presented. Preliminary analyses, in the form of strengths and weaknesses and comparison to reference groups, are given.

  20. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties — i.e. visual composition — while smoothly moving through the environment and avoiding obstacles. A large number of different....... For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of scenarios...

  1. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
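
    The merging of machine and program characterizations described above reduces, in its simplest form, to a dot product of per-operation times and dynamic operation counts; the operation names and numbers below are invented placeholders, not values from the grant reports:

        # machine characterization: seconds per abstract-machine operation
        machine = {'fadd': 5e-9, 'fmul': 6e-9, 'load': 2e-9, 'branch': 1e-9}

        # program characterization: dynamic counts of the same operations
        program = {'fadd': 4.0e9, 'fmul': 3.5e9, 'load': 9.0e9, 'branch': 1.2e9}

        # estimated execution time for this machine/program combination
        predicted_time = sum(program[op] * machine[op] for op in machine)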

  2. Bioelectrochemical Systems Workshop: Standardized Analyses, Design Benchmarks, and Reporting

    Science.gov (United States)

    2012-01-01

    [Only fragments of this record survive: parts of citations ("... microbial nanowire networks. Nature Nanotechnology, 2011, 6(9), pp. 573-579"; "Butler, C.S. and R. Nerenberg, Performance and microbial ecology of...") and part of a checklist of standardized analyses and reporting parameters: metals; medium composition; sterility; buffer composition/capacity; conductivity; TOC, DOC; COD (influent, effluent; total, soluble); BOD.]

  3. Miocene Antarctic ice dynamics in the Ross Embayment (Western Ross Sea, Antarctica): Insights from provenance analyses of sedimentary clasts in the AND-2A drill core

    Science.gov (United States)

    Cornamusini, Gianluca; Talarico, Franco M.

    2016-11-01

    A detailed study of gravel-size sedimentary clasts in the ANDRILL-2A (AND-2A) drill core reveals distinct changes in provenance and allows reconstructions of the paleo ice flow in the McMurdo Sound region (Ross Sea) from the Early Miocene to the Holocene. The sedimentary clasts in AND-2A are divided into seven distinct petrofacies. A comparison of these with potential source rocks from the Transantarctic Mountains and the coastal Southern Victoria Land suggests that the majority of the sedimentary clasts were derived from formations within the Devonian-Triassic Beacon Supergroup. The siliciclastic-carbonate petrofacies are similar to the fossiliferous erratics found in the Quaternary Moraine in the southern McMurdo Sound and were probably sourced from Eocene strata that are currently hidden beneath the Ross Ice Shelf. Intraformational clasts were almost certainly reworked from diamictite and mudstone sequences that were originally deposited proximal to the drill site. The distribution of sedimentary gravel clasts in AND-2A suggests that sedimentary sequences in the drill core were deposited under two main glacial scenarios: 1) a highly dynamic ice sheet that did not extend beyond the coastal margin and produced abundant debris-rich icebergs from outlet glaciers in the central Transantarctic Mountains and South Victoria Land; and 2) an ice sheet that extended well beyond the coastal margin and periodically advanced across the Ross Embayment. Glacial scenario 1 dominated the early to mid-Miocene (between ca. 1000 and 225 mbsf in AND-2A) and scenario 2 the early Miocene (between ca. 1138 and 1000 mbsf) and the late Neogene to Holocene (above ca. 225 mbsf). This study augments previous research on the clast provenance and highlights the added value that sedimentary clasts offer in terms of reconstructing past glacial conditions from Antarctic drill core records.

  4. Benchmarks for measurement of duplicate detection methods in nucleotide databases.

    Science.gov (United States)

    Chen, Qingyu; Zobel, Justin; Verspoor, Karin

    2017-01-08

    Duplication of information in databases is a major data quality challenge. The presence of duplicates, implying either redundancy or inconsistency, can have a range of impacts on the quality of analyses that use the data. To provide a sound basis for research on this issue in databases of nucleotide sequences, we have developed new, large-scale validated collections of duplicates, which can be used to test the effectiveness of duplicate detection methods. Previous collections were either designed primarily to test efficiency, or contained only a limited number of duplicates of limited kinds. To date, duplicate detection methods have been evaluated on separate, inconsistent benchmarks, leading to results that cannot be compared and, due to limitations of the benchmarks, of questionable generality. In this study, we present three nucleotide sequence database benchmarks, based on information drawn from a range of resources, including information derived from mapping to two data sections within the UniProt Knowledgebase (UniProtKB), UniProtKB/Swiss-Prot and UniProtKB/TrEMBL. Each benchmark has distinct characteristics. We quantify these characteristics and argue for their complementary value in evaluation. The benchmarks collectively contain a vast number of validated biological duplicates; the largest has nearly half a billion duplicate pairs (although this is probably only a tiny fraction of the total that is present). They are also the first benchmarks targeting the primary nucleotide databases. The records include the 21 most heavily studied organisms in molecular biology research. Our quantitative analysis shows that duplicates in the different benchmarks, and in different organisms, have different characteristics. It is thus unreliable to evaluate duplicate detection methods against any single benchmark. For example, the benchmark derived from UniProtKB/Swiss-Prot mappings identifies more diverse types of duplicates, showing the importance of expert curation, but

  5. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-12-01

    Full Text Available This paper reviews the role of human resource management (HRM), which today plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much-needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  6. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  7. [Benchmarking projects examining patient care in Germany: methods of analysis, survey results, and best practice].

    Science.gov (United States)

    Blumenstock, Gunnar; Fischer, Imma; de Cruppé, Werner; Geraedts, Max; Selbmann, Hans-Konrad

    2011-01-01

    A survey among 232 German health care organisations addressed benchmarking projects in patient care. 53 projects were reported and analysed using a benchmarking development scheme and a list of criteria. None of the projects satisfied all the criteria. Rather, examples of best practice for single aspects have been identified.

  8. Assessing reactor physics codes capabilities to simulate fast reactors on the example of the BN-600 benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, Vladimir [Scientific and Engineering Centre for Nuclear and Radiation Safety (SES NRS), Moscow (Russian Federation); Bousquet, Jeremy [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH, Garching (Germany)

    2016-11-15

    This work aims to assess the capabilities of reactor physics codes (initially validated for thermal reactors) to simulate sodium-cooled fast reactors. The BFS-62-3A critical experiment from the BN-600 Hybrid Core Benchmark Analyses was chosen for the investigation. Monte-Carlo codes (KENO from SCALE and SERPENT 2.1.23) and the deterministic diffusion code DYN3D-MG were applied to calculate the neutronic parameters. It was found that the multiplication factor and reactivity effects calculated by KENO and SERPENT using the ENDF/B-VII.0 continuous-energy library are in good agreement with each other and with the measured benchmark values. Few-group macroscopic cross sections, required for DYN3D-MG, were prepared by applying different methods implemented in SCALE and SERPENT. The DYN3D-MG results for a simplified benchmark show reasonable agreement with the results from Monte-Carlo calculations and the measured values. These results justify the use of DYN3D-MG in coupled deterministic analyses of sodium-cooled fast reactors.

  9. Randomized benchmarking of multiqubit gates.

    Science.gov (United States)

    Gaebler, J P; Meier, A M; Tan, T R; Bowler, R; Lin, Y; Hanneke, D; Jost, J D; Home, J P; Knill, E; Leibfried, D; Wineland, D J

    2012-06-29

    We describe an extension of single-qubit gate randomized benchmarking that measures the error of multiqubit gates in a quantum information processor. This platform-independent protocol evaluates the performance of Clifford unitaries, which form a basis of fault-tolerant quantum computing. We implemented the benchmarking protocol with trapped ions and found an error per random two-qubit Clifford unitary of 0.162±0.008, thus setting the first benchmark for such unitaries. By implementing a second set of sequences with an extra two-qubit phase gate inserted after each step, we extracted an error per phase gate of 0.069±0.017. We conducted these experiments with transported, sympathetically cooled ions in a multizone Paul trap, a system that can in principle be scaled to larger numbers of ions.
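
    A sketch of the standard randomized benchmarking analysis behind such numbers (standard formulas, not the paper's own fitting code): the average sequence fidelity is fit to a decay A·p^m + B, and the error per Clifford is r = (1 - p)(d - 1)/d, with d = 4 for two qubits:

        import numpy as np
        from scipy.optimize import curve_fit

        def rb_decay(m, A, B, p):
            # average sequence fidelity vs. number of random Clifford steps m
            return A * p**m + B

        def error_per_clifford(lengths, fidelities, d=4):
            # fit the decay, then convert the depolarizing parameter p to r
            (A, B, p), _ = curve_fit(rb_decay, np.asarray(lengths),
                                     np.asarray(fidelities),
                                     p0=(0.5, 0.25, 0.9), maxfev=10000)
            return (1.0 - p) * (d - 1) / d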

  10. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  11. Perceptual hashing algorithms benchmark suite

    Institute of Scientific and Technical Information of China (English)

    Zhang Hui; Schmucker Martin; Niu Xiamu

    2007-01-01

    Numerous perceptual hashing algorithms have been developed for identification and verification of multimedia objects in recent years. Many application schemes have been adopted for various commercial objects. Developers and users are looking for a benchmark tool to compare and evaluate their current algorithms or technologies. In this paper, a novel benchmark platform is presented. PHABS provides an open framework and lets its users define their own test strategy, perform tests, collect and analyze test data. With PHABS, various performance parameters of algorithms can be tested, and different algorithms or algorithms with different parameters can be evaluated and compared easily.

  12. Closed-loop neuromorphic benchmarks

    CSIR Research Space (South Africa)

    Stewart

    2015-11-01

    Full Text Available [Only submission-form fragments of this record survive. Recoverable information: the title "Closed-loop Neuromorphic Benchmarks"; authors Terrence C. Stewart, Travis DeWolf and Chris Eliasmith (University of Waterloo, Canada) and Ashley Kleinhans (Council for Scientific and Industrial Research, South Africa); submitted to Frontiers in Neuroscience; the submission form states that the study involved no human or animal subjects.]

  13. The contextual benchmark method: benchmarking e-government services

    NARCIS (Netherlands)

    Jansen, Jurjen; Vries, de Sjoerd; Schaik, van Paul

    2010-01-01

    This paper offers a new method for benchmarking e-Government services. Government organizations no longer doubt the need to deliver their services on line. Instead, the question that is more relevant is how well the electronic services offered by a particular organization perform in comparison with

  14. Characterizing Information Flux Within the Distributed Pediatric Expressive Language Network: A Core Region Mapped Through fMRI-Constrained MEG Effective Connectivity Analyses.

    Science.gov (United States)

    Kadis, Darren S; Dimitrijevic, Andrew; Toro-Serey, Claudio A; Smith, Mary Lou; Holland, Scott K

    2016-02-01

    Using noninvasive neuroimaging, researchers have shown that young children have bilateral and diffuse language networks, which become increasingly left lateralized and focal with development. Connectivity within the distributed pediatric language network has been minimally studied, and conventional neuroimaging approaches do not distinguish task-related signal changes from those that are task essential. In this study, we propose a novel multimodal method to map core language sites from patterns of information flux. We retrospectively analyze neuroimaging data collected in two groups of children, ages 5-18 years, performing verb generation in functional magnetic resonance imaging (fMRI) (n = 343) and magnetoencephalography (MEG) (n = 21). The fMRI data were conventionally analyzed and the group activation map parcellated to define node locations. Neuronal activity at each node was estimated from MEG data using a linearly constrained minimum variance beamformer, and effective connectivity within canonical frequency bands was computed using the phase slope index metric. We observed significant (p ≤ 0.05) effective connections in all subjects. The number of suprathreshold connections was significantly and linearly correlated with participant's age (r = 0.50, n = 21, p ≤ 0.05), suggesting that core language sites emerge as part of the normal developmental trajectory. Across frequencies, we observed significant effective connectivity among proximal left frontal nodes. Within the low frequency bands, information flux was rostrally directed within a focal, left frontal region, approximating Broca's area. At higher frequencies, we observed increased connectivity involving bilateral perisylvian nodes. Frequency-specific differences in patterns of information flux were resolved through fast (i.e., MEG) neuroimaging.
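
    A minimal two-channel sketch of the phase slope index, the directionality metric named above; the segmenting, windowing and band limits are illustrative choices, and the study applied the metric to beamformer-reconstructed node time series rather than raw channels:

        import numpy as np

        def phase_slope_index(x, y, fs, nperseg=256, fmin=8.0, fmax=13.0):
            # epoch the signals and estimate spectra with a Hann window
            n = (len(x) // nperseg) * nperseg
            w = np.hanning(nperseg)
            X = np.fft.rfft(x[:n].reshape(-1, nperseg) * w)
            Y = np.fft.rfft(y[:n].reshape(-1, nperseg) * w)
            Sxy = (np.conj(X) * Y).mean(axis=0)        # cross-spectrum
            Sxx = (np.abs(X) ** 2).mean(axis=0)
            Syy = (np.abs(Y) ** 2).mean(axis=0)
            C = Sxy / np.sqrt(Sxx * Syy)               # complex coherency
            freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
            band = np.where((freqs >= fmin) & (freqs <= fmax))[0]
            # PSI: imaginary part of the sum over the band of conj(C(f))*C(f+df);
            # the sign indicates the direction of information flux
            return np.imag(np.sum(np.conj(C[band[:-1]]) * C[band[1:]]))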

  15. Benchmarking Internet of Things devices

    CSIR Research Space (South Africa)

    Kruger, CP

    2014-07-01

    Full Text Available [Only header fragments of this record survive: "Benchmarking Internet of Things devices", C.P. Kruger and G.P. Hancke, Advanced Sensor Networks Research Group, Council for Scientific and Industrial Research, South Africa; presented at the International Conference on Industrial Informatics (INDIN), 27-30 July 2014.]

  16. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes a comparison of these websites against a list of criteria and presents a list of services that are most commonly deployed by the selected websites. In addition, the investigators proposed a list of services that could be provided via the KAUST library website.

  17. Engine Benchmarking - Final CRADA Report

    Energy Technology Data Exchange (ETDEWEB)

    Wallner, Thomas [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-01

    Detailed benchmarking of the powertrains of three light-duty vehicles was performed. Results were presented and provided to CRADA partners. The vehicles included a MY2011 Audi A4, a MY2012 Mini Cooper and a MY2014 Nissan Versa.

  18. Benchmark Lisp And Ada Programs

    Science.gov (United States)

    Davis, Gloria; Galant, David; Lim, Raymond; Stutz, John; Gibson, J.; Raghavan, B.; Cheesema, P.; Taylor, W.

    1992-01-01

    Suite of nonparallel benchmark programs, ELAPSE, designed for three tests: comparing the efficiency of computer processing via Lisp vs. Ada; comparing the efficiencies of several computers processing via Lisp; or comparing several computers processing via Ada. Tests the efficiency with which a computer executes routines in each language. Available for any computer equipped with a validated Ada compiler and/or Common Lisp system.

  19. Assessment of the uncertainties of MULTICELL calculations by the OECD NEA UAM PWR pin cell burnup benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Kereszturi, Andras [Hungarian Academy of Sciences, Budapest (Hungary). Centre for Energy Research; Panka, Istvan

    2015-09-15

    Defining precisely the burnup of the nuclear fuel is important from the point of view of core design calculations, safety analyses, criticality calculations (e.g. burnup credit calculations), etc. This paper deals with the uncertainties of MULTICELL calculations obtained by the solution of the OECD NEA UAM PWR pin cell burnup benchmark. In this assessment Monte-Carlo type statistical analyses are applied and the energy dependent covariance matrices of the cross-sections are taken into account. Additionally, the impact of the uncertainties of the fission yields is also considered. The target quantities are the burnup dependent uncertainties of the infinite multiplication factor, the two-group cross-sections, the reaction rates and the number densities of some isotopes up to the burnup of 60 MWd/kgU. In the paper the burnup dependent tendencies of the corresponding uncertainties and their sources are analyzed.
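
    A sketch of the Monte-Carlo propagation idea (the lattice model below is a hypothetical stand-in; a real study would rerun the lattice code for each sample): correlated cross-section perturbations are drawn from the covariance matrix, and the spread of the burnup-dependent response is reported:

        import numpy as np

        def propagate(mu, cov, model, n_samples=500, seed=0):
            # draw correlated perturbations and push each one through the model
            rng = np.random.default_rng(seed)
            samples = rng.multivariate_normal(mu, cov, size=n_samples)
            results = np.array([model(s) for s in samples])
            return results.mean(axis=0), results.std(axis=0, ddof=1)

        burnup = np.linspace(0.0, 60.0, 13)      # MWd/kgU
        # toy burnup-dependent k-inf depending on two perturbed group constants
        toy_model = lambda s: 1.30 - 0.006 * burnup + 0.05 * s[0] - 0.02 * s[1]
        cov = np.array([[1.0e-4, 2.0e-5],
                        [2.0e-5, 4.0e-4]])
        mean_k, std_k = propagate(np.zeros(2), cov, toy_model)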

  20. Who watches the watchmen? An appraisal of benchmarks for multiple sequence alignment.

    Science.gov (United States)

    Iantorno, Stefano; Gori, Kevin; Goldman, Nick; Gil, Manuel; Dessimoz, Christophe

    2014-01-01

    Multiple sequence alignment (MSA) is a fundamental and ubiquitous technique in bioinformatics used to infer related residues among biological sequences. Thus alignment accuracy is crucial to a vast range of analyses, often in ways difficult to assess in those analyses. To compare the performance of different aligners and help detect systematic errors in alignments, a number of benchmarking strategies have been pursued. Here we present an overview of the main strategies (based on simulation, consistency, protein structure, and phylogeny) and discuss their different advantages and associated risks. We outline a set of desirable characteristics for effective benchmarking, and evaluate each strategy in light of them. We conclude that there is currently no universally applicable means of benchmarking MSA, and that developers and users of alignment tools should base their choice of benchmark on the context of application, with a keen awareness of the assumptions underlying each benchmarking strategy.

  1. Geologic field notes and geochemical analyses of outcrop and drill core from Mesoproterozoic rocks and iron-oxide deposits and prospects of southeast Missouri

    Science.gov (United States)

    Day, Warren C.; Granitto, Matthew

    2014-01-01

    The U.S. Geological Survey, in cooperation with the Missouri Department of Natural Resources/Missouri Geological Survey, undertook a study from 1988 to 1994 on the iron-oxide deposits and their host Mesoproterozoic igneous rocks in southeastern Missouri. The project resulted in an improvement of our understanding of the geologic setting, mode of formation, and composition of many of the known deposits and prospects and the associated rocks of the St. Francois terrane in Missouri. The goal of this earlier work was to allow comparison of Missouri iron-oxide deposits with other iron oxide-copper ± uranium (IOCG) types of mineral deposits observed globally. The raw geochemical analyses were released originally through the USGS National Geochemical Database (NGDB, http://mrdata.usgs.gov). The data presented herein offer all of the field notes, locations, rock descriptions, and geochemical analyses in a coherent package to facilitate new research efforts on IOCG deposit types. The data are provided in both Microsoft Excel (Version Office 2010) spreadsheet format (*.xlsx) and MS-DOS text format (*.txt) for ease of use by numerous computer programs.

  2. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Science.gov (United States)

    2010-10-01

    42 CFR § 440.385 (2010-10-01 edition), under Benchmark Benefit and Benchmark-Equivalent Coverage, General Provisions: Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  3. Benchmarking: More Aspects of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Ravindrudu, Rahul [Iowa State Univ., Ames, IA (United States)

    2004-01-01

    The original HPL algorithm makes the assumption that all data can fit entirely in main memory. This assumption obviously gives good performance due to the absence of disk I/O. However, not all applications can fit their entire data in memory. Applications which require a fair amount of I/O to move data between main memory and secondary storage are more indicative of the usage of a Massively Parallel Processor (MPP) system. Given this scenario, a well designed I/O architecture will play a significant part in the performance of the MPP system on regular jobs, and this is not represented in the current benchmark. The modified HPL algorithm is hoped to be a step toward filling this void. The most important factor in the performance of out-of-core algorithms is the actual I/O operations performed and their efficiency in transferring data between main memory and disk. Various methods for performing I/O operations were introduced in the report. The I/O method to use depends on the design of the out-of-core algorithm; conversely, the performance of the out-of-core algorithm is affected by the choice of I/O operations. This implies that good performance is achieved when I/O efficiency is closely tied to the out-of-core algorithm, so out-of-core algorithms must be designed for this from the start. It is easily observed in the timings for the various plots that I/O plays a significant part in the overall execution time. This leads to an important conclusion: retrofitting an existing code may not be the best choice. The right-looking algorithm selected for the LU factorization is a recursive algorithm and performs well when the entire dataset is in memory. At each stage of the loop the entire trailing submatrix is read into memory panel by panel. This gives a polynomial number of I/O reads and writes. If the left-looking algorithm were selected for the main loop, the number of I/O operations involved would be linear in the number of columns. This is due to the data access
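
    A minimal sketch of a right-looking, panel-based out-of-core LU step of the kind described (no pivoting, and a hypothetical on-disk layout: the n-by-n float64 matrix is assumed to already exist at the given path). Each step factors one panel in memory and then streams every trailing panel through memory once, which is where the polynomial I/O count comes from:

        import numpy as np

        def out_of_core_lu(path, n, nb):
            A = np.memmap(path, dtype=np.float64, mode='r+', shape=(n, n))
            for k in range(0, n, nb):
                e = min(k + nb, n)
                panel = np.array(A[k:n, k:e])      # read current panel
                for j in range(e - k):             # factor panel in memory
                    panel[j+1:, j] /= panel[j, j]
                    panel[j+1:, j+1:] -= np.outer(panel[j+1:, j],
                                                  panel[j, j+1:])
                A[k:n, k:e] = panel                # write factored panel back
                L11 = np.tril(panel[:e-k, :], -1) + np.eye(e - k)
                for j in range(e, n, nb):          # stream trailing panels
                    je = min(j + nb, n)
                    trail = np.array(A[k:n, j:je])
                    trail[:e-k, :] = np.linalg.solve(L11, trail[:e-k, :])
                    trail[e-k:, :] -= panel[e-k:, :] @ trail[:e-k, :]
                    A[k:n, j:je] = trail           # write updated panel back
            A.flush()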

  4. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used across services in the National Health Service (NHS) through various benchmarking programs. Clinical photography services do not have a program in place and have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  5. Benchmarking: Achieving the best in class

    Energy Technology Data Exchange (ETDEWEB)

    Kaemmerer, L

    1996-05-01

    Oftentimes, people find the process of organizational benchmarking an onerous task, or, because they do not fully understand the nature of the process, end up with results that are less than stellar. This paper presents the challenges of benchmarking and reasons why benchmarking can benefit an organization in today's economy.

  6. The LDBC Social Network Benchmark: Interactive Workload

    NARCIS (Netherlands)

    Erling, O.; Averbuch, A.; Larriba-Pey, J.; Chafi, H.; Gubichev, A.; Prat, A.; Pham, M.D.; Boncz, P.A.

    2015-01-01

    The Linked Data Benchmark Council (LDBC) is now two years underway and has gathered strong industrial participation for its mission to establish benchmarks, and benchmarking practices for evaluating graph data management systems. The LDBC introduced a new choke-point driven methodology for developin

  7. How Benchmarking and Higher Education Came Together

    Science.gov (United States)

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  8. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump (GHP) programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  9. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question we, in a total of 1728 benchmarking experiments, rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection.
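
    A hedged sketch of one benchmarked combination, forward selection wrapped around a random forest, using scikit-learn (>= 1.1 for a fractional n_features_to_select) on a synthetic stand-in for a QSAR dataset; this is not the paper's own benchmarking harness:

        from sklearn.datasets import make_regression
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.feature_selection import SequentialFeatureSelector
        from sklearn.model_selection import cross_val_score

        # synthetic stand-in: 200 "compounds" with 50 "descriptors"
        X, y = make_regression(n_samples=200, n_features=50,
                               n_informative=10, noise=0.5, random_state=0)
        rf = RandomForestRegressor(n_estimators=200, random_state=0)

        # greedy forward selection keeping 40% of the descriptors
        sel = SequentialFeatureSelector(rf, n_features_to_select=0.4,
                                        direction='forward', cv=5)
        sel.fit(X, y)
        score_full = cross_val_score(rf, X, y, cv=5).mean()
        score_sel = cross_val_score(rf, X[:, sel.get_support()], y, cv=5).mean()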

  10. A Benchmark for Management Effectiveness

    OpenAIRE

    Zimmermann, Bill; Chanaron, Jean-Jacques; Klieb, Leslie

    2007-01-01

    This study presents a tool to gauge managerial effectiveness in the form of a questionnaire that is easy to administer and score. The instrument covers eight distinct areas of organisational climate and culture of management inside a company or department. Benchmark scores were determined by administering sample-surveys to a wide cross-section of individuals from numerous firms in Southeast Louisiana, USA. Scores remained relatively constant over a seven-year timeframe...

  11. Restaurant Energy Use Benchmarking Guideline

    Energy Technology Data Exchange (ETDEWEB)

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
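
    As a concrete illustration of the kind of metric the report describes, the sketch below computes a simple energy use intensity (EUI) benchmark from a chain's own utility data. The field names, figures, and the 20% flagging threshold are illustrative assumptions, not values from the guideline.

        import statistics

        # Illustrative annual utility data for three stores in one portfolio.
        stores = [
            {"id": "A", "kwh": 420_000, "sqft": 3_000},
            {"id": "B", "kwh": 510_000, "sqft": 3_100},
            {"id": "C", "kwh": 780_000, "sqft": 3_050},
        ]

        # Energy use intensity: kWh per square foot per year.
        for s in stores:
            s["eui"] = s["kwh"] / s["sqft"]

        median_eui = statistics.median(s["eui"] for s in stores)
        for s in stores:
            # Flag stores using 20% more energy per square foot than the
            # portfolio median -- an arbitrary example threshold.
            if s["eui"] > 1.2 * median_eui:
                print(f"store {s['id']}: EUI {s['eui']:.0f} kWh/sqft exceeds benchmark")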

  12. Reactor based plutonium disposition - physics and fuel behaviour benchmark studies of an OECD/NEA experts group

    Energy Technology Data Exchange (ETDEWEB)

    D' Hondt, P. [SCK.CEN, Mol (Belgium); Gehin, J. [ORNL, Oak Ridge, TN (United States); Na, B.C.; Sartori, E. [Organisation for Economic Co-Operation and Development, Nuclear Energy Agency, 92 - Issy les Moulineaux (France); Wiesenack, W. [Organisation for Economic Co-Operation and Development/HRP, Halden (Norway)

    2001-07-01

    One of the options envisaged for disposing of weapons grade plutonium, declared surplus for national defence in the Russian Federation and Usa, is to burn it in nuclear power reactors. The scientific/technical know-how accumulated in the use of MOX as a fuel for electricity generation is of great relevance for the plutonium disposition programmes. An Expert Group of the OECD/Nea is carrying out a series of benchmarks with the aim of facilitating the use of this know-how for meeting this objective. This paper describes the background that led to establishing the Expert Group, and the present status of results from these benchmarks. The benchmark studies cover a theoretical reactor physics benchmark on a VVER-1000 core loaded with MOX, two experimental benchmarks on MOX lattices and a benchmark concerned with MOX fuel behaviour for both solid and hollow pellets. First conclusions are outlined as well as future work. (author)

  13. Matrix metalloproteinase-10/TIMP-2 structure and analyses define conserved core interactions and diverse exosite interactions in MMP/TIMP complexes.

    Science.gov (United States)

    Batra, Jyotica; Soares, Alexei S; Mehner, Christine; Radisky, Evette S

    2013-01-01

    Matrix metalloproteinases (MMPs) play central roles in vertebrate tissue development, remodeling, and repair. The endogenous tissue inhibitors of metalloproteinases (TIMPs) regulate proteolytic activity by binding tightly to the MMP active site. While each of the four TIMPs can inhibit most MMPs, binding data reveal tremendous heterogeneity in affinities of different TIMP/MMP pairs, and the structural features that differentiate stronger from weaker complexes are poorly understood. Here we report the crystal structure of the comparatively weakly bound human MMP-10/TIMP-2 complex at 2.1 Å resolution. Comparison with previously reported structures of MMP-3/TIMP-1, MT1-MMP/TIMP-2, MMP-13/TIMP-2, and MMP-10/TIMP-1 complexes offers insights into the structural basis of binding selectivity. Our analyses identify a group of highly conserved contacts at the heart of MMP/TIMP complexes that define the conserved mechanism of inhibition, as well as a second category of diverse adventitious contacts at the periphery of the interfaces. The AB loop of the TIMP N-terminal domain and the contact loops of the TIMP C-terminal domain form highly variable peripheral contacts that can be considered as separate exosite interactions. In some complexes these exosite contacts are extensive, while in other complexes the AB loop or C-terminal domain contacts are greatly reduced and appear to contribute little to complex stability. Our data suggest that exosite interactions can enhance MMP/TIMP binding, although in the relatively weakly bound MMP-10/TIMP-2 complex they are not well optimized to do so. Formation of highly variable exosite interactions may provide a general mechanism by which TIMPs are fine-tuned for distinct regulatory roles in biology.

  14. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  15. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and dose estimates from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  16. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement of the original benchmark book which was first published in February, 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December, 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement. Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February, 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below followed by the contributors to the earlier editions of the benchmark book.

  17. Benchmarks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The National Flood Hazard Layer (NFHL) data incorporates all Digital Flood Insurance Rate Map(DFIRM) databases published by FEMA, and any Letters Of Map Revision...

  18. Benchmarking strategies for measuring the quality of healthcare: problems and prospects.

    Science.gov (United States)

    Lovaglio, Pietro Giorgio

    2012-01-01

    Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. After introducing readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, the paper focuses on the methodological problems related to performing consistent benchmarking analyses. In particular, statistical methods suitable for controlling for case-mix and for analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed.
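
    One common case-mix control method in this literature is indirect standardization: each hospital's observed outcome count is compared with the count expected from a patient-level risk model. The sketch below, which assumes scikit-learn, NumPy arrays, and a binary outcome, is a generic illustration of the technique rather than the paper's own procedure.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def observed_expected(X, y, hospital_ids):
            """Return the observed/expected (O/E) ratio per hospital.

            X: patient-level risk factors; y: binary outcome (e.g., mortality);
            hospital_ids: hospital label per patient.
            """
            risk = LogisticRegression(max_iter=1000).fit(X, y)
            p = risk.predict_proba(X)[:, 1]      # expected event probability per patient
            ratios = {}
            for h in np.unique(hospital_ids):
                mask = hospital_ids == h
                observed = y[mask].sum()
                expected = p[mask].sum()
                ratios[h] = observed / expected  # >1: worse than case-mix predicts
            return ratios

    The "risk of risk adjustment" the abstract mentions is visible here: if the risk model omits a factor, or hospitals record (underreport) it differently, the O/E ratios compare noncomparable institutions.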

  19. Benchmarking Analysis of Institutional University Autonomy in Denmark, Lithuania, Romania, Scotland, and Sweden

    DEFF Research Database (Denmark)

    This book presents a benchmark, comparative analysis of institutional university autonomy in Denmark, Lithuania, Romania, Scotland and Sweden. These countries are partners in an EU TEMPUS funded project 'Enhancing University Autonomy in Moldova' (EUniAM). This benchmark analysis was conducted by the EUniAM Lead Task Force team, which collected and analysed secondary and primary data in each of these countries and produced four benchmark reports that are part of this book. For each dimension and interface of institutional university autonomy, the members of the Lead Task Force team identified respective evaluation criteria and searched for similarities and differences in approaches to higher education sectors and respective autonomy regimes in these countries. The consolidated report that precedes the benchmark reports summarises the process and key findings from the four benchmark reports…

  20. PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms

    CERN Document Server

    Dreher, Patrick; Hill, Chris; Gadepally, Vijay; Kuszmaul, Bradley; Kepner, Jeremy

    2016-01-01

    The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many real-world big data processing systems. The PageRank pipeline benchmark builds on prior scalable benchmarks (Graph500, Sort, and PageRank) to create a holistic benchmark with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. The linear algebraic nature of PageRank makes it well suited to being implemented using the GraphBLAS standard. The computations are simple enough that performance prediction…
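
    The PageRank kernel at the core of the pipeline is mathematically simple, which is part of the benchmark's appeal. The sketch below shows the dense power-iteration form in NumPy; the benchmark itself targets large sparse graphs and implementations such as GraphBLAS, so this is only the underlying mathematics.

        import numpy as np

        def pagerank(A, d=0.85, tol=1e-9):
            """Power iteration on adjacency matrix A (A[i, j] = 1 for edge j -> i)."""
            n = A.shape[0]
            out = A.sum(axis=0)
            # Column-normalize; dangling nodes (zero out-degree) link everywhere.
            M = np.where(out > 0, A / np.where(out == 0, 1.0, out), 1.0 / n)
            r = np.full(n, 1.0 / n)
            while True:
                r_next = d * M @ r + (1 - d) / n
                if np.abs(r_next - r).sum() < tol:
                    return r_next
                r = r_next

    For example, pagerank(np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]], dtype=float)) returns the uniform vector for a directed 3-cycle, as symmetry demands.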

  1. Melcor benchmarking against integral severe fuel damage tests

    Energy Technology Data Exchange (ETDEWEB)

    Madni, I.K. [Brookhaven National Lab., Upton, NY (United States)

    1995-09-01

    MELCOR is a fully integrated computer code that models all phases of the progression of severe accidents in light water reactor nuclear power plants, and is being developed for the U.S. Nuclear Regulatory Commission (NRC) by Sandia National Laboratories (SNL). Brookhaven National Laboratory (BNL) has a program with the NRC to provide independent assessment of MELCOR, and a very important part of this program is to benchmark MELCOR against experimental data from integral severe fuel damage tests and against predictions of that data from more mechanistic codes such as SCDAP or SCDAP/RELAP5. Benchmarking analyses with MELCOR have been carried out at BNL for five integral severe fuel damage tests, including PBF SFD 1-1, SFD 1-4, and NRU FLHT-2; this paper summarizes those analyses and their role in identifying areas of modeling strengths and weaknesses in MELCOR.

  2. Histograms showing variations in oil yield, water yield, and specific gravity of oil from Fischer assay analyses of oil-shale drill cores and cuttings from the Piceance Basin, northwestern Colorado

    Science.gov (United States)

    Dietrich, John D.; Brownfield, Michael E.; Johnson, Ronald C.; Mercier, Tracey J.

    2014-01-01

    Recent studies indicate that the Piceance Basin in northwestern Colorado contains over 1.5 trillion barrels of oil in place, making the basin the largest known oil-shale deposit in the world. Previously published histograms display oil-yield variations with depth and widely correlate rich and lean oil-shale beds and zones throughout the basin. Histograms in this report display oil-yield data plotted alongside either water-yield or oil specific-gravity data. Fischer assay analyses of core and cutting samples collected from exploration drill holes penetrating the Eocene Green River Formation in the Piceance Basin can aid in determining the origins of those deposits, as well as estimating the amount of organic matter, halite, nahcolite, and water-bearing minerals. This report focuses only on the oil yield plotted against water yield and oil specific gravity.

  3. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information. We saw potential solutions to some of our "top 10" issues and obtained an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: several organizations sent us examples of their templates and processes, and many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, and instructors. We also received feedback from some of our contractors/partners: desires to participate in our training and to provide feedback on procedures, and a welcomed opportunity to give feedback on working with NASA.

  4. Benchmarking of human resources management

    OpenAIRE

    David M. Akinnusi

    2008-01-01

    This paper reviews the role of human resource management (HRM) which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HR...

  5. Design Principles for Synthesizable Processor Cores

    DEFF Research Database (Denmark)

    Schleuniger, Pascal; McKee, Sally A.; Karlsson, Sven

    2012-01-01

    As FPGAs get more competitive, synthesizable processor cores become an attractive choice for embedded computing. Currently popular commercial processor cores do not fully exploit current FPGA architectures. In this paper, we propose general design principles to increase instruction throughput… We demonstrate through the use of micro-benchmarks that our principles guide the design of a processor core that improves performance by an average of 38% over a similar Xilinx MicroBlaze configuration.

  6. Implementing an Internal Development Process Benchmark Using PDM-Data

    OpenAIRE

    Roelofsen, J.; Fuchs, S. D.; Fuchs, D. K.; Lindemann, U.

    2009-01-01

    This paper introduces the concept for an internal development process benchmark using PDM-data. The analysis of the PDM-data at a company is used to compare development work at three different locations across Europe. The concept of a tool implemented at the company is shown, as well as exemplary analyses carried out with this tool. The interpretation portfolio provided to support the interpretation of the generated charts is explained, and different types of reports derived from the ...

  7. Rock Properties and Internal Structure of the San Andreas Fault near ~ 3 km Depth in the SAFOD Borehole Based on Meso- to Micro-scale Analyses of Phase III whole rock core

    Science.gov (United States)

    Bradbury, K.; Evans, J. P.

    2010-12-01

    We examine the relationships between rock properties and structure within ~ 41 m of Phase III whole-rock core collected from ~ 3 km depth along the SAF in the San Andreas Fault Observatory at Depth (SAFOD) borehole, near Parkfield, CA. Direct mesoscale observations of the core are integrated with detailed petrography and microstructural analyses coupled with X-Ray Diffraction and X-Ray Fluorescence techniques to document variations in composition, alteration, and structures that may be related to deformation and/or fluid-rock interactions. Across the low velocity zone (LVZ) defined by borehole geophysical data, lithologies comprise a heterogeneous sequence of fine-grained sandstones, siltstones, mudstones, and shales with block-in-matrix textures and pervasively foliated fabrics. More competent clasts within the block-in-matrix materials exhibit pinch-and-swell shaped structures with crosscutting veins that do not extend into the surrounding phyllosilicate-rich matrix. Narrow fault strands at 3192 and 3302 m bound the LVZ and correspond to sites of active casing deformation (aseismic creep). Here, the rock consists of ~ 2 m thick serpentinite-bearing phyllosilicate gouge with a pervasive penetrative scaly clay fabric and phacoidal-shaped clasts. Bounding these two active slip surfaces are highly sheared and comminuted ultrafine-grained black fault rocks with abundant calcite veins parallel and oblique to the foliation trend. Localized shear surfaces bound multi-layered zones of medium to ultra-fine grained cataclasite in the near-fault environment and record multiple generations of brittle deformation processes. Deformation at high-strain rates is suggested by the presence of crack-seal veins in clasts within the block-in-matrix materials, the presence of porphyroclasts, and the development of S-C fabrics in the phyllosilicate-rich gouge. Across the fault(s) and related damage zones, foliated fabrics alternating with discrete fractures suggest a mixed…

  8. Benchmarking and accounting for the (private) cloud

    CERN Document Server

    Belleman, J

    2015-01-01

    During the past two years large parts of the CERN batch farm have been moved to virtual machines running on the CERN internal cloud. During this process a large fraction of the resources, which had previously been used as physical batch worker nodes, was converted into hypervisors. Due to the large spread of the per-core performance in the farm, caused by its heterogeneous nature, it is necessary to have a good knowledge of the performance of the virtual machines. This information is used both for scheduling in the batch system and for accounting. While in the previous setup worker nodes were classified and benchmarked based on the purchase order number, for virtual batch worker nodes this is no longer possible: the information is now either hidden or hard to retrieve. Therefore we developed a new scheme to classify worker nodes according to their performance. The new scheme is flexible enough to be usable both for virtual and physical machines in the batch farm. With the new classification it is possible to ...

  9. Discrepancies in Communication Versus Documentation of Weight-Management Benchmarks

    Science.gov (United States)

    Turer, Christy B.; Barlow, Sarah E.; Montaño, Sergio; Flores, Glenn

    2017-01-01

    To examine gaps in communication versus documentation of weight-management clinical practices, communication was recorded during primary care visits with 6- to 12-year-old overweight/obese Latino children. Communication/documentation content was coded by 3 reviewers using communication transcripts and health-record documentation. Discrepancies in communication/documentation content codes were resolved through consensus. Bivariate/multivariable analyses examined factors associated with discrepancies in benchmark communication/documentation. Benchmarks were neither communicated nor documented in up to 42% of visits, and communicated but not documented or documented but not communicated in up to 20% of visits. Lowest benchmark performance rates were for laboratory studies (35%) and nutrition/weight-management referrals (42%). In multivariable analysis, overweight (vs obesity) was associated with 1.6 more discrepancies in communication versus documentation (P = .03). Many weight-management benchmarks are not met, not documented, or performed without being communicated. Enhanced communication with families and documentation in health records may promote lifestyle changes in overweight children and higher quality care for overweight children in primary care.

  10. An Effective Approach for Benchmarking Implementation

    OpenAIRE

    B. M. Deros; Tan, J.; M.N.A. Rahman; N. A.Q.M. Daud

    2011-01-01

    Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assists companies achieve better performance in terms of quality, cost, delivery, supply chain and eventually increase their competitiveness in the market. The study begins with literature review on benchmarking definition, barriers and advantages from the implementation and the study of benchmarking framework. Approach: Thirty res...

  11. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  12. Benchmarking i eksternt regnskab og revision

    DEFF Research Database (Denmark)

    Thinggaard, Frank; Kiertzner, Lars

    2001-01-01

    …continuously in a benchmarking process. This chapter broadly examines where the concept of benchmarking can, with some justification, be linked to external financial reporting and auditing. Section 7.1 deals with the external annual accounts, while Section 7.2 takes up the area of auditing. The final section of the chapter summarises the considerations on benchmarking in connection with both areas…

  13. Developing Benchmarks for Solar Radio Bursts

    Science.gov (United States)

    Biesecker, D. A.; White, S. M.; Gopalswamy, N.; Black, C.; Domm, P.; Love, J. J.; Pierson, J.

    2016-12-01

    Solar radio bursts can interfere with radar, communication, and tracking signals. In severe cases, radio bursts can inhibit the successful use of radio communications and disrupt a wide range of systems that are reliant on Position, Navigation, and Timing services on timescales ranging from minutes to hours across wide areas on the dayside of Earth. The White House's Space Weather Action Plan has asked for solar radio burst intensity benchmarks for an event occurrence frequency of 1 in 100 years and also a theoretical maximum intensity benchmark. The solar radio benchmark team was also asked to define the wavelength/frequency bands of interest. The benchmark team developed preliminary (phase 1) benchmarks for the VHF (30-300 MHz), UHF (300-3000 MHz), GPS (1176-1602 MHz), F10.7 (2800 MHz), and microwave (4000-20,000 MHz) bands. The preliminary benchmarks were derived based on previously published work. Limitations in the published work will be addressed in phase 2 of the benchmark process. In addition, deriving theoretical maxima, where it is even possible, requires additional work in order to meet the Action Plan objectives. In this presentation, we will present the phase 1 benchmarks and the basis used to derive them. We will also present the work that needs to be done in order to complete the final, or phase 2, benchmarks.

  14. Benchmarking for controllere: Metoder, teknikker og muligheder

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels; Dietrichson, Lars

    2008-01-01

    The article takes a close look at the concept of benchmarking by presenting and discussing its different facets. Four different applications of benchmarking are described in order to show the breadth of the concept and the importance of clarifying the purpose of a benchmarking project before getting started. The difference between results benchmarking and process benchmarking is treated, after which the use of internal and external benchmarking, respectively, is discussed. Finally, the use of benchmarking in budgeting and budget follow-up is introduced.

  15. Establishing benchmarks and metrics for utilization management.

    Science.gov (United States)

    Melanson, Stacy E F

    2014-01-01

    The changing environment of healthcare reimbursement is rapidly leading to a renewed appreciation of the importance of utilization management in the clinical laboratory. The process of benchmarking of laboratory operations is well established for comparing organizational performance to other hospitals (peers) and for trending data over time through internal benchmarks. However, there are relatively few resources available to assist organizations in benchmarking for laboratory utilization management. This article will review the topic of laboratory benchmarking with a focus on the available literature and services to assist in managing physician requests for laboratory testing.

  16. Benchmarking Implementations of Functional Languages with ``Pseudoknot'', a Float-Intensive Benchmark

    NARCIS (Netherlands)

    Hartel, P.H.; Feeley, M.; Alt, M.; Augustsson, L.

    1996-01-01

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important

  17. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    Science.gov (United States)

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  18. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    Science.gov (United States)

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  19. Benchmarking Implementations of Functional Languages with "Pseudoknot", a float-intensive benchmark

    NARCIS (Netherlands)

    Hartel, Pieter H.; Feeley, M.; Alt, M.; Augustsson, L.

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important

  20. Performance Benchmarking of Fast Multipole Methods

    KAUST Repository

    Al-Harthi, Noha A.

    2013-06-01

    The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity – vector length, core count, and MPI process. Intel's Xeon Phi coprocessor, NVIDIA's Kepler GPU, and IBM's BlueGene/Q all have a byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing such as FFT are continuously evolving to keep up with this trend in hardware. In the meantime it is also necessary to invest in novel algorithms that are more suitable for computer architectures of the future. The fast multipole method (FMM) was originally developed as a fast algorithm for approximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex based fluid dynamics simulations. The FMM possesses a unique combination of being an efficient O(N) algorithm while having an operational intensity that is higher than a matrix-matrix multiplication. In fact, the FMM can reduce the requirement of byte/flop to around 0.01, which means that it will remain compute bound until 2020 even if the current trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and BlueGene/Q. This study aims to provide a comprehensive benchmark of a state of the art FMM code "exaFMM" on the latest architectures, in hopes of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning about certain problem-size domains where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than the asymptotics themselves, and therefore are strongly implementation and hardware dependent.
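
    The byte/flop argument above can be made concrete with the standard roofline condition; this is a reader's annotation using only the numbers quoted in the abstract, not material from the study itself:

    \[
      \underbrace{\frac{\text{bytes moved}}{\text{flops performed}}}_{\text{algorithm}}
      \;\le\;
      \underbrace{\frac{\text{memory bandwidth}}{\text{peak flop/s}}}_{\text{machine}}
      \;\approx\; 0.2\ \text{byte/flop}.
    \]

    With the FMM at roughly 0.01 byte/flop, the inequality holds with a factor-of-20 margin, which is why the abstract argues the method stays compute bound even as machine balances continue to shrink.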

  1. Sensitivity Analysis of OECD Benchmark Tests in BISON

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
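
    A minimal sketch of the correlation part of such a study is shown below: Pearson and Spearman coefficients between sampled inputs and a response. The synthetic data and the 0.3 reporting threshold are illustrative assumptions (the actual study used Dakota with 300 samples, 17 BISON inputs, and 24 responses); the sketch assumes NumPy and SciPy.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n_samples, n_inputs = 300, 17                  # matches the study's sample plan
        X = rng.uniform(size=(n_samples, n_inputs))    # stand-in input samples
        # Synthetic response depending strongly on inputs 0 and 4 only.
        y = 3.0 * X[:, 0] + X[:, 4] ** 2 + 0.1 * rng.normal(size=n_samples)

        for j in range(n_inputs):
            pearson, _ = stats.pearsonr(X[:, j], y)
            spearman, _ = stats.spearmanr(X[:, j], y)
            if abs(pearson) > 0.3 or abs(spearman) > 0.3:   # arbitrary threshold
                print(f"input {j}: Pearson {pearson:+.2f}, Spearman {spearman:+.2f}")

    Spearman coefficients rank-transform the data first, so they pick up monotone but nonlinear input-response relationships that Pearson coefficients understate; Sobol' indices go further and apportion response variance among inputs.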

  2. Benchmarking: A tool to enhance performance

    Energy Technology Data Exchange (ETDEWEB)

    Munro, J.F. [Oak Ridge National Lab., TN (United States); Kristal, J. [USDOE Assistant Secretary for Environmental Management, Washington, DC (United States); Thompson, G.; Johnson, T. [Los Alamos National Lab., NM (United States)

    1996-12-31

    The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered, with Headquarters providing facilitatory and integrative functions on an "as needed" basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors develop the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff, the ones closest to the work, must take ownership of the studies. This avoids the "check the box" mentality associated with some third-party studies. This workshop will provide participants with a basic level of understanding of why the EM benchmarking team was developed and the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team is using to conduct studies. The EM benchmarking team will also encourage discussion on ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This "introduction to benchmarking" is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.

  3. Benchmarking ICRF simulations for ITER

    Energy Technology Data Exchange (ETDEWEB)

    R. V. Budny, L. Berry, R. Bilato, P. Bonoli, M. Brambilla, R.J. Dumont, A. Fukuyama, R. Harvey, E.F. Jaeger, E. Lerche, C.K. Phillips, V. Vdovin, J. Wright, and members of the ITPA-IOS

    2010-09-28

    Benchmarking of full-wave solvers for ICRF simulations is performed using plasma profiles and equilibria obtained from integrated self-consistent modeling predictions of four ITER plasmas. One is for a high performance baseline (5.3 T, 15 MA) DT H-mode plasma. The others are for half-field, half-current plasmas of interest for the pre-activation phase with bulk plasma ion species being either hydrogen or He4. The predicted profiles are used by seven groups to predict the ICRF electromagnetic fields and heating profiles. Approximate agreement is achieved for the predicted heating power partitions for the DT and He4 cases. Profiles of the heating powers and electromagnetic fields are compared.

  4. Benchmarking Asteroid-Deflection Experiment

    Science.gov (United States)

    Remington, Tane; Bruck Syal, Megan; Owen, John Michael; Miller, Paul L.

    2016-10-01

    An asteroid impacting Earth could have devastating consequences. In preparation to deflect or disrupt one before it reaches Earth, it is imperative to have modeling capabilities that adequately simulate the deflection actions. Code validation is key to ensuring full confidence in simulation results used in an asteroid-mitigation plan. We are benchmarking well-known impact experiments using Spheral, an adaptive smoothed-particle hydrodynamics code, to validate our modeling of asteroid deflection. We describe our simulation results, compare them with experimental data, and discuss what we have learned from our work. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-695540

  5. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: Improve the management and technical development of software intensive systems; Have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and Identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R&D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths

  6. COG validation: SINBAD Benchmark Problems

    Energy Technology Data Exchange (ETDEWEB)

    Lent, E M; Sale, K E; Buck, R M; Descalle, M

    2004-02-23

    We validated COG, a 3D Monte Carlo radiation transport code, against experimental data and MCNP4C simulations from the Shielding Integral Benchmark Archive Database (SINBAD) compiled by RSICC. We modeled three experiments: the Osaka nickel and aluminum sphere experiments conducted at the OKTAVIAN facility, and the liquid oxygen experiment conducted at the FNS facility. COG results are in good agreement with experimental data and generally within a few % of MCNP results. There are several possible sources of discrepancy between MCNP and COG results: (1) the cross-section database versions are different, MCNP uses ENDFB VI 1.1 while COG uses ENDFB VIR7; (2) the code implementations are different; and (3) the models may differ slightly. We also limited the use of variance reduction methods when running the COG version of the problems.

  7. General benchmarks for quantum repeaters

    CERN Document Server

    Pirandola, Stefano

    2015-01-01

    Using a technique based on quantum teleportation, we simplify the most general adaptive protocols for key distribution, entanglement distillation and quantum communication over a wide class of quantum channels in arbitrary dimension. Thanks to this method, we bound the ultimate rates for secret key generation and quantum communication through single-mode Gaussian channels and several discrete-variable channels. In particular, we derive exact formulas for the two-way assisted capacities of the bosonic quantum-limited amplifier and the dephasing channel in arbitrary dimension, as well as the secret key capacity of the qubit erasure channel. Our results establish the limits of quantum communication with arbitrary systems and set the most general and precise benchmarks for testing quantum repeaters in both discrete- and continuous-variable settings.
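
    For reference, the closed-form benchmarks the abstract points to take a simple form. The expressions below are quoted, for the single-mode amplifier and the qubit dephasing and erasure channels, as they are commonly stated in the published version of this line of work; treat them as a reader's annotation rather than part of the record:

    \[
      K_{\text{amp}}(g) = -\log_2\!\left(1 - \frac{1}{g}\right), \qquad
      K_{\text{deph}}(p) = 1 - H_2(p), \qquad
      K_{\text{erase}}(p) = 1 - p,
    \]

    where \(g > 1\) is the amplifier gain, \(H_2\) is the binary entropy function, and \(p\) is the dephasing or erasure probability. A quantum repeater is only worth deploying over a link whose direct, repeaterless rate falls below such a benchmark.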

  8. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  9. 42 CFR 440.330 - Benchmark health benefits coverage.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark health benefits coverage. 440.330 Section... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS SERVICES: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  10. An Effective Approach for Benchmarking Implementation

    Directory of Open Access Journals (Sweden)

    B. M. Deros

    2011-01-01

    Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually to increase their competitiveness in the market. The study begins with a literature review on benchmarking definitions, barriers and advantages from implementation, and on benchmarking frameworks. Approach: Thirty respondents were involved in the case study. They comprised industrial practitioners who assessed the usability and practicability of the guideline, conceptual framework and computerized mini program. Results: A guideline and template were proposed to simplify the adoption of benchmarking techniques. A conceptual framework was proposed by integrating the Deming PDCA and Six Sigma DMAIC theory. It provided a step-by-step method to simplify the implementation and to optimize the benchmarking results. A computerized mini program was suggested to assist users in adopting the technique as part of an improvement project. As a result of the assessment test, the respondents found that the implementation method provided an idea for a company to initiate benchmarking implementation and guided them to achieve the desired goal as set in a benchmarking project. Conclusion: The results obtained and discussed in this study can be applied in implementing benchmarking in a more systematic way and ensuring its success.

  11. Synergetic effect of benchmarking competitive advantages

    Directory of Open Access Journals (Sweden)

    N.P. Tkachova

    2011-12-01

    The essence of synergistic competitive benchmarking is analyzed. A classification of types of synergies is developed. The sources of synergies in conducting benchmarking of competitive advantages are determined. A methodological framework for defining synergy in the formation of competitive advantage is proposed.

  12. Synergetic effect of benchmarking competitive advantages

    OpenAIRE

    N.P. Tkachova; P.G. Pererva

    2011-01-01

    The essence of synergistic competitive benchmarking is analyzed. A classification of types of synergies is developed. The sources of synergies in conducting benchmarking of competitive advantages are determined. A methodological framework for defining synergy in the formation of competitive advantage is proposed.

  13. Benchmarking set for domestic smart grid management

    NARCIS (Netherlands)

    Bosman, M.G.C.; Bakker, Vincent; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    In this paper we propose a benchmark for domestic smart grid management. It consists of an in-depth description of a domestic smart grid, in which local energy consumers, producers and buffers can be controlled. First, from this description a general benchmark framework is derived, which can be used

  14. Machines are benchmarked by code, not algorithms

    NARCIS (Netherlands)

    Poss, R.

    2013-01-01

    This article highlights how small modifications to either the source code of a benchmark program or the compilation options may impact its behavior on a specific machine. It argues that, for evaluating machines, benchmark providers and users should be careful to ensure reproducibility of results based on the…

  15. Benchmark analysis of railway networks and undertakings

    NARCIS (Netherlands)

    Hansen, I.A.; Wiggenraad, P.B.L.; Wolff, J.W.

    2013-01-01

    Benchmark analysis of railway networks and companies has been stimulated by the European policy of deregulation of transport markets, the opening of national railway networks and markets to new entrants, and the separation of infrastructure and train operation. Recent international railway benchmarking studies…

  16. Benchmark Assessment for Improved Learning. AACC Report

    Science.gov (United States)

    Herman, Joan L.; Osmundson, Ellen; Dietel, Ronald

    2010-01-01

    This report describes the purposes of benchmark assessments and provides recommendations for selecting and using benchmark assessments--addressing validity, alignment, reliability, fairness and bias and accessibility, instructional sensitivity, utility, and reporting issues. We also present recommendations on building capacity to support schools'…

  17. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own-price elasticity.

  18. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    2007-01-01

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own-price elasticity.

  19. Benchmarking Learning and Teaching: Developing a Method

    Science.gov (United States)

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  20. A Seafloor Benchmark for 3-dimensional Geodesy

    Science.gov (United States)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  1. Udsættelser af lejere – Udvikling og benchmarking

    DEFF Research Database (Denmark)

    Christensen, Gunvor; Jeppesen, Anders Gade; Kjær, Agnete Aslaug;

    This report examines the development in both bailiff (eviction) cases and enforced evictions from 2007 to 2013. The report also contains a benchmarking analysis that estimates whether each individual municipality has more or fewer enforced evictions than would be expected, taking into account, among other things… The municipalities included in the benchmarking analysis also have more enforced evictions in social housing than one would expect when taking into account the municipalities' population base, the local housing market, and municipal characteristics such as the size of the municipality. The report was financed by the Ministry…

  2. Performance benchmarks for a next generation numerical dynamo model

    Science.gov (United States)

    Matsui, Hiroaki; Heien, Eric; Aubert, Julien; Aurnou, Jonathan M.; Avery, Margaret; Brown, Ben; Buffett, Bruce A.; Busse, Friedrich; Christensen, Ulrich R.; Davies, Christopher J.; Featherstone, Nicholas; Gastine, Thomas; Glatzmaier, Gary A.; Gubbins, David; Guermond, Jean-Luc; Hayashi, Yoshi-Yuki; Hollerbach, Rainer; Hwang, Lorraine J.; Jackson, Andrew; Jones, Chris A.; Jiang, Weiyuan; Kellogg, Louise H.; Kuang, Weijia; Landeau, Maylis; Marti, Philippe; Olson, Peter; Ribeiro, Adolfo; Sasaki, Youhei; Schaeffer, Nathanaël.; Simitev, Radostin D.; Sheyko, Andrey; Silva, Luis; Stanley, Sabine; Takahashi, Futoshi; Takehiro, Shin-ichi; Wicht, Johannes; Willis, Ashley P.

    2016-05-01

    Numerical simulations of the geodynamo have successfully represented many observable characteristics of the geomagnetic field, yielding insight into the fundamental processes that generate magnetic fields in the Earth's core. Because of limited spatial resolution, however, the diffusivities in numerical dynamo models are much larger than those in the Earth's core, and consequently, questions remain about how realistic these models are. The typical strategy used to address this issue has been to continue to increase the resolution of these quasi-laminar models with increasing computational resources, thus pushing them toward more realistic parameter regimes. We assess which methods are most promising for the next generation of supercomputers, which will offer access to O(10^6) processor cores for large problems. Here we report performance and accuracy benchmarks from 15 dynamo codes that employ a range of numerical and parallelization methods. Computational performance is assessed on the basis of weak and strong scaling behavior up to 16,384 processor cores. Extrapolations of our weak-scaling results indicate that dynamo codes that employ two-dimensional or three-dimensional domain decompositions can perform efficiently on up to ~10^6 processor cores, paving the way for more realistic simulations in the next model generation.
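
    The scaling efficiencies such a benchmark reports are computed directly from measured wall-clock times. The sketch below shows the standard definitions; the timing numbers are invented for illustration and are not results from the paper.

        # Strong scaling: fixed total problem size, more cores.
        # Weak scaling: fixed work per core, so ideal runtime stays constant.
        cores    = [1024, 2048, 4096, 8192, 16384]
        t_strong = [100.0, 52.0, 28.0, 16.0, 11.0]      # seconds, made-up data
        t_weak   = [100.0, 103.0, 108.0, 118.0, 135.0]  # seconds, made-up data

        for p, ts, tw in zip(cores, t_strong, t_weak):
            strong_eff = (t_strong[0] * cores[0]) / (ts * p)  # ideal: halving time per doubling
            weak_eff = t_weak[0] / tw                          # ideal: constant runtime
            print(f"{p:6d} cores: strong {strong_eff:6.1%}, weak {weak_eff:6.1%}")

    Extrapolating the weak-scaling curve, as the abstract describes, amounts to asking at what core count this efficiency drops below a useful threshold.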

  3. Nonlinear Resonance Benchmarking Experiment at the CERN Proton Synchrotron

    CERN Document Server

    Hofmann, I; Giovannozzi, Massimo; Martini, M; Métral, Elias

    2003-01-01

    As a first step of a space charge - nonlinear resonance benchmarking experiment over a large number of turns, beam loss and emittance evolution were measured over 1 s on a 1.4 GeV kinetic energy flat-bottom in the presence of a single octupole. By lowering the working point towards the resonance a gradual transition from a loss-free core emittance blow-up to a regime dominated by continuous loss was found. Our 3D simulations with analytical space charge show that trapping on the resonance due to synchrotron oscillation causes the observed core emittance growth as well as halo formation, where the latter is explained as the source of the observed loss.

  4. Proteomics Core

    Data.gov (United States)

    Federal Laboratory Consortium — Proteomics Core is the central resource for mass spectrometry based proteomics within the NHLBI. The Core staff help collaborators design proteomics experiments in a...

  5. Proteomics Core

    Data.gov (United States)

    Federal Laboratory Consortium — Proteomics Core is the central resource for mass spectrometry based proteomics within the NHLBI. The Core staff help collaborators design proteomics experiments in...

  6. OECD/NEA benchmark for time-dependent neutron transport calculations without spatial homogenization

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Jason, E-mail: jason.hou@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Ivanov, Kostadin N. [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Boyarinov, Victor F.; Fomichenko, Peter A. [National Research Centre “Kurchatov Institute”, Kurchatov Sq. 1, Moscow (Russian Federation)

    2017-06-15

    Highlights: A time-dependent homogenization-free neutron transport benchmark was created; the first phase, known as the kinetics phase, is described in this work; preliminary results for selected 2-D transient exercises are presented. Abstract: A Nuclear Energy Agency (NEA), Organization for Economic Co-operation and Development (OECD) benchmark for the time-dependent neutron transport calculations without spatial homogenization has been established in order to facilitate the development and assessment of numerical methods for solving the space-time neutron kinetics equations. The benchmark has been named the OECD/NEA C5G7-TD benchmark, and later extended with three consecutive phases each corresponding to one modelling stage of the multi-physics transient analysis of the nuclear reactor core. This paper provides a detailed introduction of the benchmark specification of Phase I, known as the "kinetics phase", including the geometry description, supporting neutron transport data, transient scenarios in both two-dimensional (2-D) and three-dimensional (3-D) configurations, as well as the expected output parameters from the participants. Also presented are the preliminary results for the initial state 2-D core and selected transient exercises that have been obtained using the Monte Carlo method and the Surface Harmonic Method (SHM), respectively.

  7. Radiologic-histopathologic correlation of microcalcifications from 11G vacuum biopsy: analysis of 3196 core biopsies; Radiologisch-histopathologische Korrelation von mikrokalzifikationen in 11-G-Vakuumbiopsaten - eine Analyse von insgesamt 3196 Proben

    Energy Technology Data Exchange (ETDEWEB)

    Fischmann, A.; Siegmann, K.; Wersebe, A.; Claussen, C.D. [Abt. Radiologische Diagnostik, Universitaetsklinikum Tuebingen (Germany); Pietsch-Breitfeld, B. [Inst. fuer Medizinische Informationsverarbeitung, Univ. Tuebingen (Germany); Mueller-Schimpfle, M. [Radiologisches Zentralinstitut, Staedtische Kliniken Frankfurt (Germany); Rothenberger-Janzen, K.; Janzen, J. [Inst. fuer Pathologie, Universitaetsklinikum Tuebingen (Germany)

    2004-04-01

    Purpose: To perform a statistical evaluation of microcalcifications (MC) from suspicious breast lesions detected by radiography and histopathology. Materials and Methods: Histological and radiological detection of calcifications were compared for 116 biopsies from 96 women. Lesions with identical description of calcifications in histopathology and radiography were considered concordant, cases with obvious discrepancies discordant. If histological and radiological groups of calcifications were identical in number but different in location, the case was considered pseudoconcordant. Results: Histopathology classified 24 of 116 lesions as malignant and 92 as benign. A total of 3196 core biopsies were examined, 851 of which contained groups of calcifications or single calcifications. Both modalities detected 579 calcifications, with 169 exclusively detected by radiography and 103 exclusively by histopathology. In 35 cases (30%) radiologic and pathologic results were concordant, in 6 cases pseudoconcordant (4%) and in 75 cases (65%) discordant. The case-based Kappa coefficient was -0.09 (-0.24 to 0.07). The 122 calcifications not detected by histopathology were few or single calcifications at the edge of the core that were probably lost during processing; 18 were possible artefacts. Six cores contained calcium oxalate, 3 contained milk of calcium. In 6 cases malignant disease was found after the first examination, hence the cores were not searched thoroughly for the missing calcifications. In the remaining 14 cases, no calcifications were found despite complete processing of the tissue. In 49 of 103 cases of radiologically undetected microcalcifications, the retrospective analysis showed dense tissue areas that probably contained the calcification. The remaining 54 cases contained calcifications which were too small to be detected radiologically. Summary: Discordant results from pathological and radiological examinations of biopsies can mainly be explained by

  8. Evaluation of the applicability of the Benchmark approach to existing toxicological data. Framework: Chemical compounds in the working place

    NARCIS (Netherlands)

    Appel MJ; Bouman HGM; Pieters MN; Slob W; CSR

    2001-01-01

    Five substances in the working environment for which risk evaluations were available were selected for analysis with the benchmark approach. The critical studies were analysed for each of these substances. The toxicological parameters examined comprised both continuous and ordinal data.

  9. GPUs benchmarking in subpixel image registration algorithm

    Science.gov (United States)

    Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferrerira, Carlos; Granero, Luis; Garcia, Javier

    2015-05-01

    Image registration techniques are used in many scientific fields, such as medical imaging and optical metrology. The most straightforward way to calculate the shift between two images is cross correlation, taking the position of the highest value in the correlation image. The resolution of the shift estimate is then a whole pixel, which may not be sufficient for certain applications. Better results can be achieved by interpolating both images up to the desired resolution and applying the same technique, but the memory needed by the system is then significantly higher. To avoid this memory consumption we implement a subpixel shifting method based on the FFT. Working with the original images, a subpixel shift can be applied by multiplying the discrete Fourier transform by a linear phase with different slopes. This approach is time consuming, because every candidate shift requires a new calculation. The algorithm, however, is highly parallelizable and thus well suited to high performance computing systems. GPU (Graphics Processing Unit) accelerated computing became popular more than ten years ago because GPUs offer hundreds of computational cores on a reasonably cheap card. In our case, we register the shift between two images, making a first approach by FFT-based correlation and then refining at the subpixel level using the technique described before; we consider this a 'brute force' method. We present a benchmark of the algorithm consisting of a first approach at pixel resolution, followed by subpixel refinement that decreases the shifting step in every loop, achieving high resolution in few steps. The program is executed on three different computers. Finally, we present the results of the computation with different kinds of CPUs and GPUs, checking the accuracy of the method and the time consumed on each computer, and discussing the advantages and disadvantages of the use of GPUs.
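
    The phase-ramp trick described in the abstract is compact enough to sketch: shifting an image by a fractional amount corresponds to multiplying its DFT by a linear phase, so candidate subpixel shifts can be scored without interpolating the images. The NumPy sketch below is a plain CPU rendering of the idea, not the authors' GPU implementation.

```python
import numpy as np

def fractional_shift(img, dy, dx):
    """Shift an image by (dy, dx) pixels via a linear phase ramp in the DFT."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    phase = np.exp(-2j * np.pi * (fy * dy + fx * dx))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * phase))

def score(a, b):
    """Correlation score between two equally sized images."""
    return float(np.vdot(a - a.mean(), b - b.mean()).real)

def refine(a, b, y0=0.0, x0=0.0, step=0.5, iters=6):
    """Brute-force subpixel search around (y0, x0), halving the step each loop."""
    best = (score(fractional_shift(a, y0, x0), b), y0, x0)
    for _ in range(iters):
        for dy in (-step, 0.0, step):
            for dx in (-step, 0.0, step):
                y, x = best[1] + dy, best[2] + dx
                s = score(fractional_shift(a, y, x), b)
                if s > best[0]:
                    best = (s, y, x)
        step /= 2
    return best[1], best[2]
```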

  10. Benchmarking of methods for genomic taxonomy.

    Science.gov (United States)

    Larsen, Mette V; Cosentino, Salvatore; Lukjancenko, Oksana; Saputra, Dhany; Rasmussen, Simon; Hasman, Henrik; Sicheritz-Pontén, Thomas; Aarestrup, Frank M; Ussery, David W; Lund, Ole

    2014-05-01

    One of the first issues that emerges when a prokaryotic organism of interest is encountered is the question of what it is--that is, which species it is. The 16S rRNA gene formed the basis of the first method for sequence-based taxonomy and has had a tremendous impact on the field of microbiology. Nevertheless, the method has been found to have a number of shortcomings. In the current study, we trained and benchmarked five methods for whole-genome sequence-based prokaryotic species identification on a common data set of complete genomes: (i) SpeciesFinder, which is based on the complete 16S rRNA gene; (ii) Reads2Type, which searches for species-specific 50-mers in either the 16S rRNA gene or the gyrB gene (for the Enterobacteriaceae family); (iii) the ribosomal multilocus sequence typing (rMLST) method, which samples up to 53 ribosomal genes; (iv) TaxonomyFinder, which is based on species-specific functional protein domain profiles; and finally (v) KmerFinder, which examines the number of co-occurring k-mers (substrings of k nucleotides in DNA sequence data). The performances of the methods were subsequently evaluated on three data sets of short sequence reads or draft genomes from public databases. In total, the evaluation sets constituted sequence data from more than 11,000 isolates covering 159 genera and 243 species. Our results indicate that methods sampling only chromosomal core genes have difficulties distinguishing closely related species that only recently diverged. The KmerFinder method had the overall highest accuracy and correctly identified from 93% to 97% of the isolates in the evaluation sets.
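
    The k-mer idea behind the best-performing method is easy to sketch. The snippet below scores a query genome against reference genomes by counting shared k-mers; it is a toy rendering of the approach, not the KmerFinder implementation, and the choice of k = 16 is an assumption.

```python
from collections import Counter

def kmer_profile(seq, k=16):
    """Count every k-mer (substring of length k) in a DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def shared_kmers(query, reference, k=16):
    """Size of the multiset intersection of the two k-mer profiles."""
    return sum((kmer_profile(query, k) & kmer_profile(reference, k)).values())

def classify(query, references, k=16):
    """references: dict mapping species name -> genome sequence (assumed input)."""
    return max(references, key=lambda sp: shared_kmers(query, references[sp], k))
```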

  11. ICSBEP Benchmarks For Nuclear Data Applications

    Science.gov (United States)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  12. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  13. Plans to update benchmarking tool.

    Science.gov (United States)

    Stokoe, Mark

    2013-02-01

    The use of the current AssetMark system by hospital health facilities managers and engineers (in Australia) has decreased to a point of no activity occurring. A number of reasons have been cited, including cost, the time required, the slow process, and the level of information required. Based on current levels of activity, it would not be of any value to IHEA, or to its members, to continue with this form of AssetMark. For AssetMark to remain viable, it needs to be developed as a tool seen to be of value to healthcare facilities managers, and not just healthcare facility engineers. Benchmarking is still a very important requirement in the industry, and AssetMark can fulfil this need provided that it remains abreast of customer needs. The proposed future direction is to develop an online version of AssetMark with its current capabilities regarding capturing of data (12 Key Performance Indicators), reporting, and user interaction. The system would also provide end-users with access to live reporting features via a user-friendly web interface linked through the IHEA web page.

  14. Academic Benchmarks for Otolaryngology Leaders.

    Science.gov (United States)

    Eloy, Jean Anderson; Blake, Danielle M; D'Aguillo, Christine; Svider, Peter F; Folbe, Adam J; Baredes, Soly

    2015-08-01

    This study aimed to characterize current benchmarks for academic otolaryngologists serving in positions of leadership and identify factors potentially associated with promotion to these positions. Information regarding chairs (or division chiefs), vice chairs, and residency program directors was obtained from faculty listings and organized by degree(s) obtained, academic rank, fellowship training status, sex, and experience. Research productivity was characterized by (a) successful procurement of active grants from the National Institutes of Health and prior grants from the American Academy of Otolaryngology-Head and Neck Surgery Foundation Centralized Otolaryngology Research Efforts program and (b) scholarly impact, as measured by the h-index. Chairs had the greatest amount of experience (32.4 years) and were the least likely to have multiple degrees, with 75.8% having an MD degree only. Program directors were the most likely to be fellowship trained (84.8%). Women represented 16% of program directors, 3% of chairs, and no vice chairs. Chairs had the highest scholarly impact (as measured by the h-index) and the greatest external grant funding. This analysis characterizes the current picture of leadership in academic otolaryngology. Chairs, when compared to their vice chair and program director counterparts, had more experience and greater research impact. Women were poorly represented among all academic leadership positions. © The Author(s) 2015.

  15. Benchmarking Measures of Network Influence

    Science.gov (United States)

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-01-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures. PMID:27670635
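
    The knockout logic lends itself to a compact sketch. Below, a simple SI-style spread (a simplified stand-in for the paper's SIR/SIS dynamics) is run over a time-ordered list of interactions, once with all agents present and once per removed agent; the drop in average final outbreak size is that agent's knockout score. Parameters and data layout are assumptions.

```python
import random

def si_final_size(events, seed, p=0.5, removed=None, rng=None):
    """events: time-sorted (t, u, v) interactions; returns final infected count."""
    rng = rng or random.Random(0)
    infected = {seed}
    for t, u, v in events:
        if removed in (u, v):
            continue
        if u in infected and v not in infected and rng.random() < p:
            infected.add(v)
        elif v in infected and u not in infected and rng.random() < p:
            infected.add(u)
    return len(infected)

def knockout_scores(events, seed, nodes, trials=200):
    def avg(removed):
        return sum(si_final_size(events, seed, removed=removed,
                                 rng=random.Random(i))
                   for i in range(trials)) / trials
    base = avg(None)
    return {n: base - avg(n) for n in nodes if n != seed}
```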

  16. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  17. The analysis of the OECD/NEA/NSC PBMR-400 benchmark problem using PARCS-DIREKT

    Energy Technology Data Exchange (ETDEWEB)

    Seker, V.; Downar, T. J. [Purdue Univ., 400 Central Drive, West Lafayette, IN 47907 (United States)

    2006-07-01

    The OECD/NEA/NSC PBMR-400 benchmark problem was developed to support the validation and verification efforts for the PBMR design. This paper describes the analysis of this problem using the PARCS-DIREKT coupled code system. The benchmark problem involved the use of two different cross-section libraries, one of which was generated from a VSOP equilibrium core calculation and has no dependence on core conditions. The second library provides for dependence on five state parameters and was designed for transient analysis. This paper reports the steady-state cases using the VSOP set of cross-sections. The results are shown to be in good agreement with those of VSOP. Also reported are the results of the steady-state thermal-hydraulic DIREKT solution with a given power profile obtained from the VSOP equilibrium core calculation. This analysis provides some insight into the most important parameters in the design of the PBMR-400. (authors)

  18. The benchmark analysis of gastric, colorectal and rectal cancer pathways: toward establishing standardized clinical pathway in the cancer care.

    Science.gov (United States)

    Ryu, Munemasa; Hamano, Masaaki; Nakagawara, Akira; Shinoda, Masayuki; Shimizu, Hideaki; Miura, Takeshi; Yoshida, Isao; Nemoto, Atsushi; Yoshikawa, Aki

    2011-01-01

    Most clinical pathways for treating cancers in Japan are based on individual physicians' personal experience rather than on empirical analysis of clinical data, such as benchmark comparison with other hospitals. These pathways are therefore far from standardized. By comparing detailed clinical data from five cancer centers, we have observed various differences among hospitals. By conducting benchmark analyses, providing detailed feedback to the participating hospitals and repeating the benchmark a year later, we strive to develop more standardized clinical pathways for the treatment of cancers. The Cancer Quality Initiative was launched in 2007 by five cancer centers. Using diagnosis procedure combination data, the member hospitals benchmarked their pre-operative and post-operative lengths of stay, the duration of antibiotic administration and the post-operative fasting duration for gastric, colon and rectal cancers. The benchmark was conducted by disclosing hospital identities and was performed using 2007 and 2008 data. In the 2007 benchmark, substantial differences were shown among the five hospitals in the treatment of gastric, colon and rectal cancers. After providing the 2007 results to the participating hospitals and organizing several brainstorming discussions, significant improvements were observed in the 2008 data study. The benchmark analysis of clinical data is extremely useful in promoting more standardized care and thus in improving the quality of cancer treatment in Japan. By repeating the benchmark analyses, we can offer truly evidence-based, higher-quality standardized cancer treatment to our patients.

  19. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    ...these issues, and describes how effects are closely connected to the perception of benchmarking, the intended users of the system and the application of the benchmarking results. The fundamental basis of this paper is taken from the development of benchmarking in the Danish construction sector. Two distinct perceptions of benchmarking will be presented: public benchmarking and best practice benchmarking. These two types of benchmarking are used to characterize and discuss the Danish benchmarking system and to examine which effects, possibilities and challenges follow in the wake of using this kind of benchmarking. In conclusion it is argued that clients and the Danish government are the intended users of the benchmarking system. The benchmarking results are primarily used by the government for monitoring and regulation of the construction sector and by clients for contractor selection. The dominating use...

  20. Benchmarks for dynamic multi-objective optimisation

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    Full Text Available When algorithms solve dynamic multi-objective optimisation problems (DMOOPs), benchmark functions should be used to determine whether the algorithm can overcome specific difficulties that can occur in real-world problems. However, for dynamic multi...

  1. Medicare Contracting - Redacted Benchmark Metric Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services has compiled aggregate national benchmark cost and workload metrics using data submitted to CMS by the AB MACs and the...

  2. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  5. Benchmarking of PR Function in Serbian Companies

    National Research Council Canada - National Science Library

    Nikolić, Milan; Sajfert, Zvonko; Vukonjanski, Jelena

    2009-01-01

    The purpose of this paper is to present methodologies for carrying out benchmarking of the PR function in Serbian companies and to test the practical application of the research results and proposed...

  6. Benchmarking Attosecond Physics with Atomic Hydrogen

    Science.gov (United States)

    2015-05-25

    Final report covering 12 Mar 2012 to 11 Mar 2015. Title: "Benchmarking attosecond physics with atomic hydrogen"; contract number FA2386-12-1-4025. May 25, 2015. PI information: David Kielpinski, dave.kielpinski@gmail.com, Griffith University Centre

  7. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge;

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted by comparing different rotor geometries and solutions, keeping the comparison as fair as possible. The objective of the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize ... NACA airfoil family. (C) 2015 Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license...

  8. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes ... attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes...

  9. Implementation of NAS Parallel Benchmarks in Java

    Science.gov (United States)

    Frumkin, Michael; Schultz, Matthew; Jin, Hao-Qiang; Yan, Jerry

    2000-01-01

    A number of features make Java an attractive but debatable choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would move Java closer to Fortran in the competition for CFD applications.

  10. The MCNP6 Analytic Criticality Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Codes Group

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
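
    The bias and bias-uncertainty computation mentioned above reduces to simple statistics over calculated-versus-benchmark differences. The numbers in the sketch below are invented; real validations would also fold in benchmark uncertainties and additional margins.

```python
import statistics

calc      = [0.9991, 1.0004, 0.9987, 1.0012, 0.9995]   # assumed calculated keff
benchmark = [1.0000, 1.0000, 1.0000, 1.0010, 1.0000]   # assumed benchmark keff

diffs = [c - b for c, b in zip(calc, benchmark)]
bias = statistics.mean(diffs)
bias_unc = statistics.stdev(diffs)          # sample standard deviation
print(f"bias = {bias * 1e5:+.0f} pcm, uncertainty = {bias_unc * 1e5:.0f} pcm")
```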

  11. Simple Benchmark Specifications for Space Radiation Protection

    Science.gov (United States)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. The specifications start with simple, monoenergetic, mono-directional particles on slabs and progress to human models in spacecraft. The report specifies the models and sources needed, and what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  12. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  13. SPOC Benchmark Case: SNRE Model

    Energy Technology Data Exchange (ETDEWEB)

    Vishal Patel; Michael Eades; Claude Russel Joyner II

    2016-02-01

    The Small Nuclear Rocket Engine (SNRE) was modeled in the Center for Space Nuclear Research’s (CSNR) Space Propulsion Optimization Code (SPOC). SPOC aims to create nuclear thermal propulsion (NTP) geometries quickly to perform parametric studies on design spaces of historic and new NTP designs. The SNRE geometry was modeled in SPOC and a critical core with a reasonable amount of criticality margin was found. The fuel, tie-tubes, reflector, and control drum masses were predicted rather well. These are all very important for neutronics calculations so the active reactor geometries created with SPOC can continue to be trusted. Thermal calculations of the average and hot fuel channels agreed very well. The specific impulse calculations used historically and in SPOC disagree so mass flow rates and impulses differed. Modeling peripheral and power balance components that do not affect nuclear characteristics of the core is not a feature of SPOC and as such, these components should continue to be designed using other tools. A full paper detailing the available SNRE data and comparisons with SPOC outputs will be submitted as a follow-up to this abstract.

  14. Kvalitative analyser

    DEFF Research Database (Denmark)

    Boolsen, Merete Watt

    The book explains the fundamental steps of the research process and applies them to selected qualitative analyses: content analysis, Grounded Theory, argumentation analysis and discourse analysis.

  15. Power distributions in fresh and depleted LEU and HEU cores of the MITR reactor.

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, E.H.; Horelik, N.E.; Dunn, F.E.; Newton, T.H., Jr.; Hu, L.; Stevens, J.G. (Nuclear Engineering Division); (MIT Nuclear Reactor Laboratory and Nuclear Science and Engineering Department)

    2012-04-04

    The Massachusetts Institute of Technology Reactor (MITR-II) is a research reactor in Cambridge, Massachusetts designed primarily for experiments using neutron beam and in-core irradiation facilities. It delivers a neutron flux comparable to current LWR power reactors in a compact 6 MW core using Highly Enriched Uranium (HEU) fuel. In the framework of its non-proliferation policies, the international community presently aims to minimize the amount of nuclear material available that could be used for nuclear weapons. In this geopolitical context, most research and test reactors both domestic and international have started a program of conversion to the use of Low Enriched Uranium (LEU) fuel. A new type of LEU fuel based on an alloy of uranium and molybdenum (UMo) is expected to allow the conversion of U.S. domestic high performance reactors like the MITR-II reactor. Toward this goal, core geometry and power distributions are presented. Distributions of power are calculated for LEU cores depleted with MCODE using an MCNP5 Monte Carlo model. The MCNP5 HEU and LEU MITR models were previously compared to experimental benchmark data for the MITR-II. This same model was used with a finer spatial depletion in order to generate power distributions for the LEU cores. The objective of this work is to generate and characterize a series of fresh and depleted core peak power distributions, and provide a thermal hydraulic evaluation of the geometry which should be considered for subsequent thermal hydraulic safety analyses.

  16. Clean Energy Manufacturing Analysis Center Benchmark Report: Framework and Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-05-23

    This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.

  17. Ice cores

    DEFF Research Database (Denmark)

    Svensson, Anders

    2014-01-01

    Ice cores from Antarctica, from Greenland, and from a number of smaller glaciers around the world yield a wealth of information on past climates and environments. Ice cores offer unique records on past temperatures, atmospheric composition (including greenhouse gases), volcanism, solar activity, dustiness, and biomass burning, among others. In Antarctica, ice cores extend back more than 800,000 years before present (Jouzel et al. 2007), whereas Greenland ice cores cover the last 130,000 years...

  19. Criteria of benchmark selection for efficient flexible multibody system formalisms

    Directory of Open Access Journals (Sweden)

    Valášek M.

    2007-10-01

    Full Text Available The paper deals with the selection process of benchmarks for testing and comparing efficient flexible multibody formalisms. The existing benchmarks are briefly summarized. The purposes of benchmark selection are investigated. The result of this analysis is the formulation of criteria of benchmark selection for flexible multibody formalisms. Based on these criteria, an initial set of suitable benchmarks is described. In addition, the evaluation measures are revised and extended.

  20. Test Nationally, Benchmark Locally: Using Local DIBELS Benchmarks to Predict Performance on the Pssa

    Science.gov (United States)

    Ferchalk, Matthew R.

    2013-01-01

    The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) benchmarks are frequently used to make important decisions regarding student performance. More information, however, is needed to understand whether the nationally-derived benchmarks created by the DIBELS system provide the most accurate criterion for evaluating reading proficiency. The…

  1. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    Science.gov (United States)

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for data from the Gulf Cooperation Council (GCC) states is challenging: the chosen benchmark should have similar data collection and presentation methods, and differences in surveillance environments, including regulations, should be taken into consideration. The GCC center for infection control has taken some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons.
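
    Indirect standardization, one of the risk-adjustment methods listed, amounts to comparing observed infections with the number expected when benchmark rates are applied to local exposure. The rates and device-day counts in the sketch below are placeholders, not data from any registry.

```python
# Standardized infection ratio (SIR) via indirect standardization.
benchmark_rates = {"ICU": 2.1, "ward": 0.8}       # infections per 1000 device-days
local_device_days = {"ICU": 4200, "ward": 9100}   # assumed local exposure
observed = 13                                     # assumed observed infections

expected = sum(benchmark_rates[u] * local_device_days[u] / 1000.0
               for u in benchmark_rates)
sir = observed / expected
print(f"expected = {expected:.1f}, SIR = {sir:.2f}")  # SIR > 1: worse than benchmark
```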

  2. The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example

    Science.gov (United States)

    Steyn, H. J.

    2015-01-01

    Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…

  3. Atmospheric circulation of tidally-locked exoplanets: a suite of benchmark tests for dynamical solvers

    CERN Document Server

    Heng, Kevin; Phillipps, Peter J

    2010-01-01

    The complexity of atmospheric modelling and its inherent non-linearity, together with the limited amount of data of exoplanets available, motivate model intercomparisons and benchmark tests. In the geophysical community, the Held-Suarez test is a standard benchmark for comparing dynamical core simulations of the Earth's atmosphere with different solvers, based on statistically-averaged flow quantities. In the present study, we perform analogues of the Held-Suarez test for tidally-locked exoplanets with the GFDL-Princeton Flexible Modeling System (FMS) by subjecting both the spectral and finite difference dynamical cores to a suite of tests, including the standard benchmark for Earth, a hypothetical tidally-locked Earth, a "shallow" hot Jupiter model and a "deep" model of HD 209458b. We find qualitative and quantitative agreement between the solvers for the Earth, tidally-locked Earth and shallow hot Jupiter benchmarks, but the agreement is less than satisfactory for the deep model of HD 209458b. Further inves...

  4. Features and technology of enterprise internal benchmarking

    Directory of Open Access Journals (Sweden)

    A.V. Dubodelova

    2013-06-01

    Full Text Available The aim of the article. The aim of the article is to generalize the characteristics, objectives and advantages of internal benchmarking, and to formulate the stage sequence of internal benchmarking technology, focused on continuous improvement of enterprise processes by implementing existing best practices. The results of the analysis. Business activity of domestic enterprises in a crisis business environment has to focus on the best success factors of their structural units, using standardized research assessment of their performance and their innovative experience in practice. A modern method of satisfying those needs is internal benchmarking. According to Bain & Co, internal benchmarking is one of the three most common methods of business management. The features and benefits of internal benchmarking are defined in the article. The sequence and methodology of implementation of the individual stages of benchmarking technology projects are formulated. The authors define benchmarking as a strategic orientation toward the best achievement by comparing performance and working methods with a standard. It covers the processes of research, the organization of production and distribution, and management and marketing methods of reference objects, in order to identify innovative practices and implement them in a particular business. The development of benchmarking at domestic enterprises requires analysis of theoretical bases and practical experience. Choosing the best experience helps to develop recommendations for its application in practice. It is also essential to classify the types of benchmarking, identify their characteristics, and study appropriate areas of use and the methodology of implementation. The structure of internal benchmarking objectives includes: promoting research and establishing minimum acceptable levels of efficiency for processes and activities available at the enterprise; identifying current problems and areas that need improvement without involvement of foreign experience

  5. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal contact is not considered in this report.
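
    The first-tier screening comparison described here is a simple quotient test: a media concentration divided by the corresponding benchmark gives a hazard quotient, and HQ > 1 carries the contaminant into the baseline assessment. The benchmark and site values below are placeholders, not values from the report.

```python
benchmarks_mg_per_kg = {"cadmium": 0.5, "zinc": 120.0, "mercury": 0.1}   # assumed
site_soil_mg_per_kg  = {"cadmium": 1.3, "zinc": 40.0,  "mercury": 0.02}  # assumed

for chem, conc in site_soil_mg_per_kg.items():
    hq = conc / benchmarks_mg_per_kg[chem]
    action = "retain for baseline assessment" if hq > 1 else "screen out"
    print(f"{chem}: HQ = {hq:.2f} -> {action}")
```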

  6. Verification of ARES transport code system with TAKEDA benchmarks

    Science.gov (United States)

    Zhang, Liang; Zhang, Bin; Zhang, Penghe; Chen, Mengteng; Zhao, Jingchang; Zhang, Shun; Chen, Yixue

    2015-10-01

    Neutron transport modeling and simulation are central to many areas of nuclear technology, including reactor core analysis, radiation shielding and radiation detection. In this paper the series of TAKEDA benchmarks are modeled to verify the criticality calculation capability of ARES, a discrete ordinates neutral particle transport code system. The SALOME platform is coupled with ARES to provide geometry modeling and mesh generation functions. The Koch-Baker-Alcouffe parallel sweep algorithm is applied to accelerate the traditional transport calculation process. The results show that the eigenvalues calculated by ARES are in excellent agreement with the reference values presented in NEACRP-L-330, with a difference less than 30 pcm except for the first case of model 3. Additionally, ARES provides accurate flux distributions compared to reference values, with a deviation less than 2% for region-averaged fluxes in all cases. All of this confirms the feasibility of the ARES-SALOME coupling and demonstrates that ARES performs well in criticality calculations.

  7. Reviewing and Benchmarking Adventure Therapy Outcomes: Applications of Meta-Analysis.

    Science.gov (United States)

    Neill, James T.

    2003-01-01

    Findings from meta-analyses of outdoor education, psychotherapy, and educational innovations are presented to help determine the relative efficacy of adventure therapy programs. While adventure therapy effects are stronger than those of outdoor education, they are not nearly as strong as those of individual psychotherapy. Benchmarks are derived…

  8. Designing a Supply Chain Management Academic Curriculum Using QFD and Benchmarking

    Science.gov (United States)

    Gonzalez, Marvin E.; Quesada, Gioconda; Gourdin, Kent; Hartley, Mark

    2008-01-01

    Purpose: The purpose of this paper is to utilize quality function deployment (QFD), Benchmarking analyses and other innovative quality tools to develop a new customer-centered undergraduate curriculum in supply chain management (SCM). Design/methodology/approach: The researchers used potential employers as the source for data collection. Then,…

  10. Coral benchmarks in the center of biodiversity.

    Science.gov (United States)

    Licuanan, W Y; Robles, R; Dygico, M; Songco, A; van Woesik, R

    2017-01-30

    There is an urgent need to quantify coral reef benchmarks that assess changes and recovery rates through time and serve as goals for management. Yet, few studies have identified benchmarks for hard coral cover and diversity in the center of marine diversity. In this study, we estimated coral cover and generic diversity benchmarks on the Tubbataha reefs, the largest and best-enforced no-take marine protected area in the Philippines. The shallow (2-6 m) reef slopes of Tubbataha were monitored annually, from 2012 to 2015, using hierarchical sampling. Mean coral cover was 34% (σ ± 1.7) and generic diversity was 18 (σ ± 0.9) per 75 m by 25 m station. The southeastern leeward slopes supported on average 56% coral cover, whereas the northeastern windward slopes supported 30%, and the western slopes supported 18% coral cover. Generic diversity was more spatially homogeneous than coral cover. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. The national hydrologic bench-mark network

    Science.gov (United States)

    Cobb, Ernest D.; Biesecker, J.E.

    1971-01-01

    The United States is undergoing a dramatic growth of population and demands on its natural resources. The effects are widespread and often produce significant alterations of the environment. The hydrologic bench-mark network was established to provide data on stream basins which are little affected by these changes. The network is made up of selected stream basins which are not expected to be significantly altered by man. Data obtained from these basins can be used to document natural changes in hydrologic characteristics with time, to provide a better understanding of the hydrologic structure of natural basins, and to provide a comparative base for studying the effects of man on the hydrologic environment. There are 57 bench-mark basins in 37 States. These basins are in areas having a wide variety of climate and topography. The bench-mark basins and the types of data collected in the basins are described.

  12. DWEB: A Data Warehouse Engineering Benchmark

    CERN Document Server

    Darmont, Jérôme; Boussaïd, Omar

    2005-01-01

    Data warehouse architectural choices and optimization techniques are critical to decision support query performance. To facilitate these choices, the performance of the designed data warehouse must be assessed. This is usually done with the help of benchmarks, which can either help system users compare the performances of different systems, or help system engineers test the effect of various design choices. While the TPC standard decision support benchmarks address the first point, they are not tuneable enough to address the second one and fail to model different data warehouse schemas. By contrast, our Data Warehouse Engineering Benchmark (DWEB) allows various ad-hoc synthetic data warehouses and workloads to be generated. DWEB is fully parameterized to fulfill data warehouse design needs. However, two levels of parameterization keep it relatively easy to tune. Finally, DWEB is implemented as free Java software that can be interfaced with most existing relational database management systems. A sample usag...

  13. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    The purpose of this article is to benchmark different optimization solvers when applied to various finite element based structural topology optimization problems. An extensive and representative library of minimum compliance, minimum volume, and mechanism design problem instances for different sizes is developed for this benchmarking. The problems are based on a material interpolation scheme combined with a density filter. Different optimization solvers including Optimality Criteria (OC), the Method of Moving Asymptotes (MMA) and its globally convergent version GCMMA, the interior point ... profiles conclude that general solvers are as efficient and reliable as classical structural topology optimization solvers. Moreover, the use of the exact Hessians in SAND formulations generally produces designs with better objective function values. However, with the benchmarked implementations solving...
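
    Solver comparisons of this kind are commonly summarized with performance profiles: for each solver, the fraction of problems it solves within a factor tau of the best solver's cost. The sketch below computes such profiles from a made-up cost table; it is a generic Dolan-More profile, not the article's data.

```python
import numpy as np

times = np.array([[1.0,  2.0, 8.0],     # rows: problem instances
                  [3.0,  2.5, 2.6],     # columns: solvers
                  [10.0, 4.0, 5.0]])    # assumed solve costs

ratios = times / times.min(axis=1, keepdims=True)   # ratio to best per problem
taus = np.linspace(1.0, 10.0, 100)
profiles = [(ratios[:, s][None, :] <= taus[:, None]).mean(axis=1)
            for s in range(times.shape[1])]          # one curve per solver
```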

  14. Energy benchmarking of South Australian WWTPs.

    Science.gov (United States)

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment.
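
    Energy benchmarking of this kind typically normalizes consumption to kWh per population equivalent (PE) per year and compares it against size-class guide values. The guide values in the sketch are assumed for illustration; they are not the German benchmarks referred to above.

```python
guide_kwh_per_pe_year = {                 # assumed guide values, not real ones
    "small (<10k PE)": 55,
    "medium (10-100k PE)": 40,
    "large (>100k PE)": 30,
}

def assess(annual_kwh, population_equivalents, size_class):
    specific = annual_kwh / population_equivalents   # kWh per PE per year
    target = guide_kwh_per_pe_year[size_class]
    return specific, specific / target               # ratio > 1: above guide value

print(assess(2_600_000, 52_000, "medium (10-100k PE)"))
```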

  15. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt;

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stays confidential; the banks as clients learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much debt. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping...
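
    The benchmarking model itself, stripped of the multiparty-computation layer that is the paper's actual contribution, is a standard efficiency-scoring linear program. The sketch below solves a DEA-style input-oriented efficiency LP with SciPy on made-up data.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 5.0, 4.0]])    # inputs, one row per input factor
Y = np.array([[1.0, 2.0, 3.0, 1.5]])    # outputs, one row per output factor
n = X.shape[1]                          # number of units being benchmarked

def efficiency(j):
    """Input-oriented efficiency of unit j: min theta s.t. X@lam <= theta*x_j,
    Y@lam >= y_j, lam >= 0. Decision variables are [theta, lam_1..lam_n]."""
    c = np.concatenate(([1.0], np.zeros(n)))
    A_ub = np.vstack([
        np.hstack([-X[:, [j]], X]),                   # X@lam - theta*x_j <= 0
        np.hstack([np.zeros((Y.shape[0], 1)), -Y]),   # -Y@lam <= -y_j
    ])
    b_ub = np.concatenate([np.zeros(X.shape[0]), -Y[:, j]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

print([round(efficiency(j), 3) for j in range(n)])   # 1.0 marks efficient units
```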

  16. FGK Benchmark Stars A new metallicity scale

    CERN Document Server

    Jofre, Paula; Soubiran, C; Blanco-Cuaresma, S; Pancino, E; Bergemann, M; Cantat-Gaudin, T; Hernandez, J I Gonzalez; Hill, V; Lardo, C; de Laverny, P; Lind, K; Magrini, L; Masseron, T; Montes, D; Mucciarelli, A; Nordlander, T; Recio-Blanco, A; Sobeck, J; Sordo, R; Sousa, S G; Tabernero, H; Vallenari, A; Van Eck, S; Worley, C C

    2013-01-01

    In the era of large spectroscopic surveys of stars of the Milky Way, atmospheric parameter pipelines require reference stars to evaluate and homogenize their values. We provide a new metallicity scale for the FGK benchmark stars based on their corresponding fundamental effective temperature and surface gravity. This was done by homogeneously analyzing a spectral library of benchmark stars with up to seven different methods. Although our direct aim was to provide a reference metallicity to be used by the Gaia-ESO Survey, the fundamental effective temperatures and surface gravities of benchmark stars of Heiter et al. 2013 (in prep) and their metallicities obtained in this work can also be used as reference parameters for other ongoing surveys, such as Gaia, HERMES, RAVE, APOGEE and LAMOST.

  17. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Professionals are often expected to be reluctant with regard to bureaucratic controls because of assumed conflicting values and goals of the organization vis-à-vis the profession. We suggest however, that the provision of bureaucratic benchmarking information is positively associated with profess... for 191 orthopaedics departments of German hospitals matched with survey data on bureaucratic benchmarking information provision to the chief physician of the respective department. Professional performance is publicly disclosed due to regulatory requirements. At the same time, chief physicians typically...

  18. Shielding Integral Benchmark Archive and Database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, Bernadette Lugue [ORNL; Grove, Robert E [ORNL; Kodeli, I. [International Atomic Energy Agency (IAEA); Sartori, Enrico [ORNL; Gulliford, J. [OECD Nuclear Energy Agency

    2011-01-01

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  19. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and including "external" gardening demands is investigated. This includes the impacts (in isolation and combination) of the following: occupancy rates (1 to 4), roof size (12.5 m2 to 100 m2), garden size (25 m2 to 100 m2) and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are presented throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn on the robustness of the proposed system.
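
    A band-rating benchmark of the kind described can be reduced to a simple lookup from per-capita consumption to a band; the thresholds below are invented for illustration and are not the values proposed in the paper.

      # Hypothetical band rating for domestic water use (litres/person/day).
      BANDS = [(80, "A"), (100, "B"), (120, "C"), (140, "D"),
               (160, "E"), (180, "F")]  # assumed thresholds, not the paper's

      def water_band(litres_per_person_day: float) -> str:
          for limit, band in BANDS:
              if litres_per_person_day <= limit:
                  return band
          return "G"

      # A household of 3 using 390 L/day averages 130 L/person/day -> "D".
      print(water_band(390 / 3))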

  20. Criticality Safety Code Validation with LWBR’s SB Cores

    Energy Technology Data Exchange (ETDEWEB)

    Putman, Valerie Lee

    2003-01-01

    The first set of critical experiments from the Shippingport Light Water Breeder Reactor Program included eight, simple geometry critical cores built with 233UO2-ZrO2, 235UO2-ZrO2, ThO2, and ThO2-233UO2 nuclear materials. These cores are evaluated, described, and modeled to provide benchmarks and validation information for INEEL criticality safety calculation methodology. In addition to consistency with INEEL methodology, benchmark development and nuclear data are consistent with International Criticality Safety Benchmark Evaluation Project methodology. Section 1 of this report introduces the experiments and the reason they are useful for validating some INEEL criticality safety calculations. Section 2 provides detailed experiment descriptions based on currently available experiment reports. Section 3 identifies criticality safety validation requirement sources and summarizes requirements that most affect this report. Section 4 identifies relevant hand calculation and computer code calculation methodologies used in the experiment evaluation, benchmark development, and validation calculations. Section 5 provides a detailed experiment evaluation. This section identifies resolutions for currently unavailable and discrepant information. Section 5 also reports calculated experiment uncertainty effects. Section 6 describes the developed benchmarks. Section 6 includes calculated sensitivities to various benchmark features and parameters. Section 7 summarizes validation results. Appendices describe various assumptions and their bases, list experimenter calculations results for items that were independently calculated for this validation work, report other information gathered and developed by SCIENTEC personnel while evaluating these same experiments, and list benchmark sample input and miscellaneous supplementary data.

  1. CFD Simulation of Thermal-Hydraulic Benchmark V1000CT-2 Using ANSYS CFX

    OpenAIRE

    2009-01-01

    Plant measured data from VVER-1000 coolant mixing experiments were used within the OECD/NEA and AER coupled code benchmarks for light water reactors to test and validate computational fluid dynamic (CFD) codes. The task is to compare the various calculations with measured data, using specified boundary conditions and core power distributions. The experiments, which are provided for CFD validation, include single loop cooling down or heating-up by disturbing the heat transfer in the steam gene...

  2. Reducing maternal mortality: better monitoring, indicators and benchmarks needed to improve emergency obstetric care. Research summary for policymakers.

    Science.gov (United States)

    Collender, Guy; Gabrysch, Sabine; Campbell, Oona M R

    2012-06-01

    Several limitations of emergency obstetric care (EmOC) indicators and benchmarks are analysed in this short paper, which synthesises recent research on this topic. A comparison between Sri Lanka and Zambia is used to highlight the inconsistencies and shortcomings in current methods of monitoring EmOC. Recommendations are made to improve the usefulness and accuracy of EmOC indicators and benchmarks in the future.

  3. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes. This makes it difficult to compare the resources used, since some programmes by their nature require more classroom time and equipment than others. It is also far from straightforward to compare college effects with respect to grades, since the various programmes apply very different forms of assessment...

  4. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks......' and the consultancy house's data stays confidential, the banks as clients learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype help Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much...

  5. Benchmarking af kommunernes førtidspensionspraksis

    DEFF Research Database (Denmark)

    Gregersen, Ole

    Each year the National Social Appeals Board (Den Sociale Ankestyrelse) publishes statistics on decisions in disability pension cases. In connection with the annual statistics, results are published from a benchmarking model in which the number of awards in each municipality is compared with the expected number of awards had the municipality followed the same decision practice as the "average municipality", after correcting for the social structure of the municipality. The benchmarking model used to date is documented in Ole Gregersen (1994): Kommunernes Pensionspraksis, Servicerapport, Socialforskningsinstituttet. This note documents a...

  6. Benchmarking of Heavy Ion Transport Codes

    Energy Technology Data Exchange (ETDEWEB)

    Remec, Igor [ORNL; Ronningen, Reginald M. [Michigan State University, East Lansing; Heilbronn, Lawrence [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in the design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  7. Toxicological benchmarks for wildlife: 1996 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  8. Gaia benchmark stars and their twins in the Gaia-ESO Survey

    Science.gov (United States)

    Jofré, P.

    2016-09-01

    The Gaia benchmark stars are stars with very precise stellar parameters that cover a wide range in the HR diagram at various metallicities. They are meant to be good representatives of typical FGK stars in the Milky Way. Currently, they are used by several spectroscopic surveys to validate and calibrate the methods that analyse the data. I review our recent activities for these stars. Additionally, by applying our new method to find stellar twins in the Gaia-ESO Survey, I discuss how representative of Milky Way stars the benchmark stars are and how they are distributed in space.

  9. Efficiency gains in Danish district heating. Is there anything to learn from benchmarking?

    DEFF Research Database (Denmark)

    Munksgaard, Jesper; Pade, Lise-Lotte; Fristrup, P.

    2005-01-01

    Facing a market structure of independent heating systems and cost-of-service regulation, the regulator considers ways to create incentives for increasing efficiency in heat production. One way is to implement benchmark regulation. The aim of this paper is twofold: (1) to investigate the potential for increasing productivity in Danish district heating production and (2) to examine whether benchmarking has a role to play. Using data envelopment analysis, our analyses show that, by assuming variable returns to scale, a potential exists to reduce production costs by 5–27% depending on the portfolio of inputs...
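
    For readers unfamiliar with data envelopment analysis, the efficiency score of one plant under variable returns to scale is the optimum of a small linear programme; a minimal sketch with made-up plant data follows (input-oriented BCC model).

      # Input-oriented DEA with variable returns to scale (BCC model),
      # solved as a linear programme. Plant data are invented.
      import numpy as np
      from scipy.optimize import linprog

      def dea_efficiency(inputs, outputs, unit):
          """Efficiency score in (0, 1] for `unit` against all peers."""
          n, m = inputs.shape           # n units, m inputs
          s = outputs.shape[1]          # s outputs
          c = np.zeros(1 + n)           # variables: theta, lam_1..lam_n
          c[0] = 1.0                    # minimise theta
          A_ub, b_ub = [], []
          for i in range(m):            # sum_j lam_j * x_ij <= theta * x_i0
              A_ub.append(np.concatenate(([-inputs[unit, i]], inputs[:, i])))
              b_ub.append(0.0)
          for r in range(s):            # sum_j lam_j * y_rj >= y_r0
              A_ub.append(np.concatenate(([0.0], -outputs[:, r])))
              b_ub.append(-outputs[unit, r])
          A_eq = [np.concatenate(([0.0], np.ones(n)))]  # VRS: sum lam = 1
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                        bounds=[(0.0, None)] * (1 + n))
          return float(res.x[0])

      x = np.array([[10.0, 5.0], [8.0, 4.0], [12.0, 7.0], [9.0, 6.0]])  # costs
      y = np.array([[100.0], [90.0], [105.0], [80.0]])                  # heat
      for j in range(len(x)):
          print(f"plant {j}: efficiency = {dea_efficiency(x, y, j):.3f}")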

  10. Gaia Benchmark stars and their twins in the Gaia-ESO Survey

    CERN Document Server

    Jofre, Paula

    2015-01-01

    The Gaia benchmark stars are stars with very precise stellar parameters that cover a wide range in the HR diagram at various metallicities. They are meant to be good representatives of typical FGK stars in the Milky Way. Currently, they are used by several spectroscopic surveys to validate and calibrate the methods that analyse the data. I review our recent activities for these stars. Additionally, by applying our new method to find stellar twins in the Gaia-ESO Survey, I discuss how representative of Milky Way stars the benchmark stars are and how they are distributed in space.

  11. Schumpeter's core works revisited

    DEFF Research Database (Denmark)

    Andersen, Esben Sloth

    2012-01-01

    This paper organises Schumpeter's core books in three groups: the programmatic duology, the evolutionary-economic duology, and the socioeconomic synthesis. By analysing these groups and their interconnections from the viewpoint of modern evolutionary economics, the paper summarises resolved problems...

  12. Jendl-3.1 iron validation on the PCA-REPLICA (H{sub 2}O/Fe) shielding benchmark experiment

    Energy Technology Data Exchange (ETDEWEB)

    Pescarini, M.; Borgia, M. G. [ENEA, Centro Ricerche ``Ezio Clementel``, Bologna (Italy). Dipt. Energia

    1997-03-01

    The PCA-REPLICA (H{sub 2}O/Fe) neutron shielding benchmark experiment is analysed using the SN 2-D DOT 3.5-E code and the 3-D-equivalent flux synthesis method. This engineering benchmark reproduces the ex-core radial geometry of a PWR, including a mild steel reactor pressure vessel (RPV) simulator, and is designed to test the accuracy of the calculation of the in-vessel neutron exposure parameters. This accuracy is strongly dependent on the quality of the iron neutron cross sections used to describe the nuclear reactions within the RPV simulator. In particular, in this report, the cross sections based on the JENDL-3.1 iron data files are tested through a comparison of the calculated integral and spectral results with the corresponding experimental data. In addition, the present results are compared, on the same benchmark experiment, with those of a preceding ENEA-Bologna validation of the ENDF/B-VI iron cross sections. The integral result comparison indicates that, for all the threshold detectors considered (Rh-103 (n, n') Rh-103m, In-115 (n, n') In-115m and S-32 (n, p) P-32), the JENDL-3.1 natural iron data produce satisfactory results similar to those obtained with the ENDF/B-VI iron data. On the contrary, when the JENDL-3.1 Fe-56 data file is used, strongly underestimated results are obtained for the lower energy threshold detectors, Rh-103 and In-115. This effect becomes more evident with increasing neutron penetration depth in the RPV simulator.

  13. Algorithm and Architecture Independent Benchmarking with SEAK

    Energy Technology Data Exchange (ETDEWEB)

    Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.; Kang, Seung-Hwa; Kerbyson, Darren J.; Hoisie, Adolfy; Cross, Joseph

    2016-05-23

    Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation for their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.

  14. A human benchmark for language recognition

    NARCIS (Netherlands)

    Orr, R.; Leeuwen, D.A. van

    2009-01-01

    In this study, we explore a human benchmark in language recognition, for the purpose of comparing human performance to machine performance in the context of the NIST LRE 2007. Humans are categorised in terms of language proficiency, and performance is presented per proficiency. The main challenge in...

  15. Benchmarking Year Five Students' Reading Abilities

    Science.gov (United States)

    Lim, Chang Kuan; Eng, Lin Siew; Mohamed, Abdul Rashid

    2014-01-01

    Reading and understanding a written text is one of the most important skills in English learning. This study attempts to benchmark Year Five students' reading abilities in fifteen rural schools in a district in Malaysia. The objectives of this study are to develop a set of standardised written reading comprehension and a set of indicators to inform…

  16. Benchmark Generation and Simulation at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Lagadapati, Mahesh [North Carolina State University (NCSU), Raleigh; Mueller, Frank [North Carolina State University (NCSU), Raleigh; Engelmann, Christian [ORNL

    2016-01-01

    The path to extreme scale high-performance computing (HPC) poses several challenges related to power, performance, resilience, productivity, programmability, data movement, and data management. Investigating the performance of parallel applications at scale on future architectures and the performance impact of different architectural choices is an important component of HPC hardware/software co-design. Simulations using models of future HPC systems and communication traces from applications running on existing HPC systems can offer an insight into the performance of future architectures. This work targets technology developed for scalable application tracing of communication events. It focuses on extreme-scale simulation of HPC applications and their communication behavior via lightweight parallel discrete event simulation for performance estimation and evaluation. Instead of simply replaying a trace within a simulator, this work promotes the generation of a benchmark from traces. This benchmark is subsequently exposed to simulation using models to reflect the performance characteristics of future-generation HPC systems. This technique provides a number of benefits, such as eliminating the data intensive trace replay and enabling simulations at different scales. The presented work features novel software co-design aspects, combining the ScalaTrace tool to generate scalable trace files, the ScalaBenchGen tool to generate the benchmark, and the xSim tool to assess the benchmark characteristics within a simulator.

  17. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias

    2016-09-16

    In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV "in the field", as well as to generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx). © Springer International Publishing AG 2016.
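
    Benchmarks of this kind typically rank trackers by bounding-box overlap with the ground truth; the sketch below computes the standard intersection-over-union (IoU) success rate, with boxes given as (x, y, w, h). The frames and threshold are illustrative assumptions.

      # IoU-based success rate, the usual accuracy measure on annotated
      # tracking benchmarks; boxes are (x, y, w, h). Data are invented.
      def iou(a, b):
          ax, ay, aw, ah = a
          bx, by, bw, bh = b
          ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
          iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
          inter = ix * iy
          union = aw * ah + bw * bh - inter
          return inter / union if union > 0 else 0.0

      def success_rate(pred, truth, threshold=0.5):
          """Fraction of frames where overlap exceeds the threshold."""
          return sum(iou(p, t) > threshold for p, t in zip(pred, truth)) / len(pred)

      # Two hypothetical frames: near-perfect overlap, then a lost target.
      print(success_rate([(10, 10, 50, 50), (80, 80, 50, 50)],
                         [(12, 11, 50, 50), (10, 10, 50, 50)]))  # -> 0.5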

  18. Thermodynamic benchmark study using Biacore technology

    NARCIS (Netherlands)

    Navratilova, I.; Papalia, G.A.; Rich, R.L.; Bedinger, D.; Brophy, S.; Condon, B.; Deng, T.; Emerick, A.W.; Guan, H.W.; Hayden, T.; Heutmekers, T.; Hoorelbeke, B.; McCroskey, M.C.; Murphy, M.M.; Nakagawa, T.; Parmeggiani, F.; Xiaochun, Q.; Rebe, S.; Nenad, T.; Tsang, T.; Waddell, M.B.; Zhang, F.F.; Leavitt, S.; Myszka, D.G.

    2007-01-01

    A total of 22 individuals participated in this benchmark study to characterize the thermodynamics of small-molecule inhibitor-enzyme interactions using Biacore instruments. Participants were provided with reagents (the enzyme carbonic anhydrase II, which was immobilized onto the sensor surface, and

  19. Benchmarking European Gas Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter; Trinkner, Urs

    This is the final report for the pan-European efficiency benchmarking of gas transmission system operations commissioned by the Netherlands Authority for Consumers and Markets (ACM), Den Haag, on behalf of the Council of European Energy Regulators (CEER) under the supervision of the authors....

  20. Alberta K-12 ESL Proficiency Benchmarks

    Science.gov (United States)

    Salmon, Kathy; Ettrich, Mike

    2012-01-01

    The Alberta K-12 ESL Proficiency Benchmarks are organized by division: kindergarten, grades 1-3, grades 4-6, grades 7-9, and grades 10-12. They are descriptors of language proficiency in listening, speaking, reading, and writing. The descriptors are arranged in a continuum of seven language competences across five proficiency levels. Several…

  1. Seven Benchmarks for Information Technology Investment.

    Science.gov (United States)

    Smallen, David; Leach, Karen

    2002-01-01

    Offers benchmarks to help campuses evaluate their efforts in supplying information technology (IT) services. The first three help understand the IT budget, the next three provide insight into staffing levels and emphases, and the seventh relates to the pervasiveness of institutional infrastructure. (EV)

  2. Benchmarking Peer Production Mechanisms, Processes & Practices

    Science.gov (United States)

    Fischer, Thomas; Kretschmer, Thomas

    2008-01-01

    This deliverable identifies key approaches for quality management in peer production by benchmarking peer production practices and processes in other areas. (Contains 29 footnotes, 13 figures and 2 tables.)[This report has been authored with contributions of: Kaisa Honkonen-Ratinen, Matti Auvinen, David Riley, Jose Pinzon, Thomas Fischer, Thomas…

  3. Operational benchmarking of Japanese and Danish hospitals

    DEFF Research Database (Denmark)

    Traberg, Andreas; Itoh, Kenji; Jacobsen, Peter

    2010-01-01

    This benchmarking model is designed as an integration of three organizational dimensions suited for the healthcare sector. The model incorporates posterior operational indicators, and evaluates performance upon aggregation. The model is tested on seven cases from Japan and Denmark. Japanese...

  4. Simple benchmark for complex dose finding studies.

    Science.gov (United States)

    Cheung, Ying Kuen

    2014-06-01

    While a general goal of early phase clinical studies is to identify an acceptable dose for further investigation, modern dose finding studies and designs are highly specific to individual clinical settings. In addition, as outcome-adaptive dose finding methods often involve complex algorithms, it is crucial to have diagnostic tools to evaluate the plausibility of a method's simulated performance and the adequacy of the algorithm. In this article, we propose a simple technique that provides an upper limit, or a benchmark, of accuracy for dose finding methods for a given design objective. The proposed benchmark is nonparametric optimal in the sense of O'Quigley et al. (2002, Biostatistics 3, 51-56), and is demonstrated by examples to be a practical accuracy upper bound for model-based dose finding methods. We illustrate the implementation of the technique in the context of phase I trials that consider multiple toxicities and phase I/II trials where dosing decisions are based on both toxicity and efficacy, and apply the benchmark to several clinical examples considered in the literature. By comparing the operating characteristics of a dose finding method to that of the benchmark, we can form quick initial assessments of whether the method is adequately calibrated and evaluate its sensitivity to the dose-outcome relationships.
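
    The benchmark's idea can be made concrete in a few lines: each simulated patient carries a latent tolerance that is "observed" at every dose, so the dose-toxicity curve is estimated from complete information, which no real design can beat. The toxicity probabilities, target and sample size below are assumptions for illustration.

      # Nonparametric benchmark for dose finding (complete-information
      # upper bound): patient i has a toxicity at dose d iff u_i <= p_d.
      import numpy as np

      rng = np.random.default_rng(1)
      true_tox = np.array([0.05, 0.12, 0.25, 0.40, 0.55])  # assumed curve
      target, n_patients, n_sims = 0.25, 30, 2000
      correct = int(np.argmin(np.abs(true_tox - target)))  # dose index 2

      hits = 0
      for _ in range(n_sims):
          u = rng.uniform(size=n_patients)                 # latent tolerances
          tox_hat = (u[:, None] <= true_tox[None, :]).mean(axis=0)
          hits += int(np.argmin(np.abs(tox_hat - target)) == correct)
      print(f"benchmark accuracy (upper bound): {hits / n_sims:.2f}")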

  5. Benchmarking 2010: Trends in Education Philanthropy

    Science.gov (United States)

    Bearman, Jessica

    2010-01-01

    "Benchmarking 2010" offers insights into the current priorities, practices and concerns of education grantmakers. The report is divided into five sections: (1) Mapping the Education Grantmaking Landscape; (2) 2010 Funding Priorities; (3) Strategies for Leveraging Greater Impact; (4) Identifying Significant Trends in Education Funding; and (5)…

  6. Benchmark Experiment for Beryllium Slab Samples

    Institute of Scientific and Technical Information of China (English)

    NIE; Yang-bo; BAO; Jie; HAN; Rui; RUAN; Xi-chao; REN; Jie; HUANG; Han-xiong; ZHOU; Zu-ying

    2015-01-01

    In order to validate the evaluated nuclear data on beryllium, a benchmark experiment has been performed at the China Institute of Atomic Energy (CIAE). Neutron leakage spectra from pure beryllium slab samples (10 cm×10 cm×11 cm) were measured at 61° and 121° using time-of-flight...

  7. Benchmarking 2011: Trends in Education Philanthropy

    Science.gov (United States)

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  8. Cleanroom Energy Efficiency: Metrics and Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    International SEMATECH Manufacturing Initiative; Mathew, Paul A.; Tschudi, William; Sartor, Dale; Beasley, James

    2010-07-07

    Cleanrooms are among the most energy-intensive types of facilities. This is primarily due to the cleanliness requirements that result in high airflow rates and system static pressures, as well as process requirements that result in high cooling loads. Various studies have shown that there is a wide range of cleanroom energy efficiencies and that facility managers may not be aware of how energy efficient their cleanroom facility can be relative to other cleanroom facilities with the same cleanliness requirements. Metrics and benchmarks are an effective way to compare one facility to another and to track the performance of a given facility over time. This article presents the key metrics and benchmarks that facility managers can use to assess, track, and manage their cleanroom energy efficiency or to set energy efficiency targets for new construction. These include system-level metrics such as air change rates, air handling W/cfm, and filter pressure drops. Operational data are presented from over 20 different cleanrooms that were benchmarked with these metrics and that are part of the cleanroom benchmark dataset maintained by Lawrence Berkeley National Laboratory (LBNL). Overall production efficiency metrics for cleanrooms in 28 semiconductor manufacturing facilities in the United States and recorded in the Fabs21 database are also presented.
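
    Two of the system-level metrics named above follow directly from measured quantities; a small sketch with assumed numbers:

      # Air change rate (per hour) and fan power intensity (W/cfm), two of
      # the cleanroom metrics discussed above. Input values are invented.
      def air_changes_per_hour(supply_cfm: float, room_volume_ft3: float) -> float:
          return supply_cfm * 60.0 / room_volume_ft3

      def watts_per_cfm(fan_power_w: float, supply_cfm: float) -> float:
          return fan_power_w / supply_cfm

      supply, volume, fan_power = 40_000.0, 60_000.0, 30_000.0
      print(f"ACH = {air_changes_per_hour(supply, volume):.0f}")   # 40
      print(f"W/cfm = {watts_per_cfm(fan_power, supply):.2f}")     # 0.75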

  9. Issues in Benchmarking and Assessing Institutional Engagement

    Science.gov (United States)

    Furco, Andrew; Miller, William

    2009-01-01

    The process of assessing and benchmarking community engagement can take many forms. To date, more than two dozen assessment tools for measuring community engagement institutionalization have been published. These tools vary substantially in purpose, level of complexity, scope, process, structure, and focus. While some instruments are designed to…

  10. Benchmarking European Gas Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter; Trinkner, Urs

    This is the final report for the pan-European efficiency benchmarking of gas transmission system operations commissioned by the Netherlands Authority for Consumers and Markets (ACM), Den Haag, on behalf of the Council of European Energy Regulators (CEER) under the supervision of the authors....

  11. Transformer core

    NARCIS (Netherlands)

    Mehendale, A.; Hagedoorn, Wouter; Lötters, Joost Conrad

    2010-01-01

    A transformer core includes a stack of a plurality of planar core plates of a magnetically permeable material, which plates each consist of a first and a second sub-part that together enclose at least one opening. The sub-parts can be fitted together via contact faces that are located on either side

  12. Transformer core

    NARCIS (Netherlands)

    Mehendale, A.; Hagedoorn, Wouter; Lötters, Joost Conrad

    2008-01-01

    A transformer core includes a stack of a plurality of planar core plates of a magnetically permeable material, which plates each consist of a first and a second sub-part that together enclose at least one opening. The sub-parts can be fitted together via contact faces that are located on either side

  13. Depollution benchmarks for capacitors, batteries and printed wiring boards from waste electrical and electronic equipment (WEEE)

    Energy Technology Data Exchange (ETDEWEB)

    Savi, Daniel, E-mail: d.savi@umweltchemie.ch [Dipl. Environmental Sci. ETH, büro für umweltchemie, Zurich (Switzerland); Kasser, Ueli [Lic. Phil. Nat. (Chemist), büro für umweltchemie, Zurich (Switzerland); Ott, Thomas [Dipl. Phys. ETH, Institute of Applied Simulation, Zurich University of Applied Sciences, Wädenswil (Switzerland)

    2013-12-15

    Highlights: • We analysed data on the dismantling of electronic and electrical appliances. • Ten years of mass balance data from more than 30 recycling companies have been considered. • Percentages of dismantled batteries, capacitors and PWBs have been studied. • Threshold values and benchmarks for batteries and capacitors have been identified. • No benchmark for the dismantling of printed wiring boards should be set. - Abstract: The article compiles and analyses sample data for toxic components removed from waste electronic and electrical equipment (WEEE) from more than 30 recycling companies in Switzerland over the past ten years. According to European and Swiss legislation, toxic components like batteries, capacitors and printed wiring boards have to be removed from WEEE. The control bodies of the Swiss take-back schemes have been monitoring the activities of WEEE recyclers in Switzerland for about 15 years. All recyclers have to provide annual mass balance data for every year of operation. From this data, percentage shares of removed batteries and capacitors are calculated in relation to the amount of each respective WEEE category treated. A rationale is developed for why such an indicator should not be calculated for printed wiring boards. The distributions of these de-pollution indicators are analysed and their suitability for defining lower threshold values and benchmarks for the depollution of WEEE is discussed. Recommendations for benchmarks and threshold values for the removal of capacitors and batteries are given.
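
    The de-pollution indicator described above is a simple ratio; the sketch below computes it and checks it against a lower threshold. All figures are invented, not the recommended Swiss values.

      # Percentage share of a removed component relative to the mass of the
      # WEEE category treated, compared against a lower threshold value.
      def depollution_share(component_kg: float, category_kg: float) -> float:
          return 100.0 * component_kg / category_kg

      threshold_pct = 1.5  # hypothetical lower threshold
      share = depollution_share(component_kg=820.0, category_kg=50_000.0)
      status = "ok" if share >= threshold_pct else "below threshold"
      print(f"batteries removed: {share:.2f}% ({status})")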

  14. Benchmarking health IT among OECD countries: better data for better policy.

    Science.gov (United States)

    Adler-Milstein, Julia; Ronchi, Elettra; Cohen, Genna R; Winn, Laura A Pannella; Jha, Ashish K

    2014-01-01

    To develop benchmark measures of health information and communication technology (ICT) use to facilitate cross-country comparisons and learning. The effort is led by the Organisation for Economic Co-operation and Development (OECD). Approaches to definition and measurement within four ICT domains were compared across seven OECD countries in order to identify functionalities in each domain. These informed a set of functionality-based benchmark measures, which were refined in collaboration with representatives from more than 20 OECD and non-OECD countries. We report on progress to date and remaining work to enable countries to begin to collect benchmark data. The four benchmarking domains include provider-centric electronic record, patient-centric electronic record, health information exchange, and tele-health. There was broad agreement on functionalities in the provider-centric electronic record domain (eg, entry of core patient data, decision support), and less agreement in the other three domains in which country representatives worked to select benchmark functionalities. Many countries are working to implement ICTs to improve healthcare system performance. Although many countries are looking to others as potential models, the lack of consistent terminology and approach has made cross-national comparisons and learning difficult. As countries develop and implement strategies to increase the use of ICTs to promote health goals, there is a historic opportunity to enable cross-country learning. To facilitate this learning and reduce the chances that individual countries flounder, a common understanding of health ICT adoption and use is needed. The OECD-led benchmarking process is a crucial step towards achieving this.

  15. Benchmarking transaction and analytical processing systems the creation of a mixed workload benchmark and its application

    CERN Document Server

    Bog, Anja

    2014-01-01

    This book introduces a new benchmark for hybrid database systems, gauging the effect of adding OLAP to an OLTP workload and analyzing the impact of commonly used optimizations in historically separate OLTP and OLAP domains in mixed-workload scenarios.

  16. Electricity consumption in school buildings - benchmark and web tools; Elforbrug i skoler - benchmark og webvaerktoej

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-07-01

    The aim of this project has been to produce benchmarks for electricity consumption in Danish schools in order to encourage electricity conservation. An internet programme has been developed with the aim of facilitating schools' access to benchmarks and to evaluate energy consumption. The overall purpose is to create increased attention to the electricity consumption of each separate school by publishing benchmarks which take the schools' age and number of pupils as well as after school activities into account. Benchmarks can be used to make green accounts and work as markers in e.g. energy conservation campaigns, energy management and for educational purposes. The internet tool can be found on www.energiguiden.dk. (BA)

  17. Piping benchmark problems for the ABB/CE System 80+ Standardized Plant

    Energy Technology Data Exchange (ETDEWEB)

    Bezler, P.; DeGrassi, G.; Braverman, J.; Wang, Y.K. [Brookhaven National Lab., Upton, NY (United States)

    1994-07-01

    To satisfy the need for verification of the computer programs and modeling techniques that will be used to perform the final piping analyses for the ABB/Combustion Engineering System 80+ Standardized Plant, three benchmark problems were developed. The problems are representative piping systems subjected to representative dynamic loads, with solutions developed using the methods being proposed for analysis of the System 80+ standard design. Combined license licensees will be required to demonstrate that their solutions to these problems are in agreement with the benchmark problem set. The first System 80+ piping benchmark is a uniform support motion response spectrum solution for one section of the feedwater piping subjected to safe shutdown seismic loads. The second System 80+ piping benchmark is a time history solution for the feedwater piping subjected to the transient loading induced by a water hammer. The third System 80+ piping benchmark is a time history solution of the pressurizer surge line subjected to the accelerations induced by a main steam line pipe break. The System 80+ reactor is an advanced PWR type.

  18. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction.

    Science.gov (United States)

    Puton, Tomasz; Kozlowski, Lukasz P; Rother, Kristian M; Bujnicki, Janusz M

    2013-04-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks.
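
    Benchmarks of secondary structure predictions are usually scored by comparing predicted base pairs with reference base pairs; the sketch below computes sensitivity and positive predictive value (PPV) for structures in dot-bracket notation (simple nested pairs assumed, structures invented).

      # Base-pair sensitivity and PPV of a predicted RNA secondary structure
      # against a reference, both in dot-bracket notation.
      def base_pairs(dot_bracket: str) -> set:
          stack, pairs = [], set()
          for i, ch in enumerate(dot_bracket):
              if ch == '(':
                  stack.append(i)
              elif ch == ')':
                  pairs.add((stack.pop(), i))
          return pairs

      reference = "((((....))))..((....))"
      predicted = "(((......)))..((....))"   # hypothetical prediction
      ref_bp, pred_bp = base_pairs(reference), base_pairs(predicted)
      tp = len(ref_bp & pred_bp)
      print(f"sensitivity = {tp / len(ref_bp):.2f}, PPV = {tp / len(pred_bp):.2f}")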

  19. Overview and Discussion of the OECD/NRC Benchmark Based on NUPEC PWR Subchannel and Bundle Tests

    Directory of Open Access Journals (Sweden)

    M. Avramova

    2013-01-01

    The Pennsylvania State University (PSU), under the sponsorship of the US Nuclear Regulatory Commission (NRC), has prepared, organized, conducted, and summarized the Organisation for Economic Co-operation and Development/US Nuclear Regulatory Commission (OECD/NRC) benchmark based on the Nuclear Power Engineering Corporation (NUPEC) pressurized water reactor (PWR) subchannel and bundle tests (PSBTs). The international benchmark activities have been conducted in cooperation with the Nuclear Energy Agency (NEA) of the OECD and the Japan Nuclear Energy Safety Organization (JNES), Japan. The OECD/NRC PSBT benchmark was organized to provide a test bed for assessing the capabilities of various thermal-hydraulic subchannel, system, and computational fluid dynamics (CFD) codes. The benchmark was designed to systematically assess and compare the participants' numerical models for prediction of detailed subchannel void distribution and departure from nucleate boiling (DNB), under steady-state and transient conditions, against full-scale experimental data. This paper provides an overview of the objectives of the benchmark along with a definition of the benchmark phases and exercises. The NUPEC PWR PSBT facility and the specific methods used in the void distribution measurements are discussed, followed by a summary of comparative analyses of submitted final results for the exercises of the two benchmark phases.

  20. A benchmark for comparison of dental radiography analysis algorithms.

    Science.gov (United States)

    Wang, Ching-Wei; Huang, Cheng-Ta; Lee, Jia-Hong; Li, Chung-Hsing; Chang, Sheng-Wei; Siao, Ming-Jhih; Lai, Tat-Ming; Ibragimov, Bulat; Vrtovec, Tomaž; Ronneberger, Olaf; Fischer, Philipp; Cootes, Tim F; Lindner, Claudia

    2016-07-01

    Dental radiography plays an important role in clinical diagnosis, treatment and surgery. In recent years, efforts have been made in developing computerized dental X-ray image analysis systems for clinical usages. A novel framework for objective evaluation of automatic dental radiography analysis algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2015 Bitewing Radiography Caries Detection Challenge and Cephalometric X-ray Image Analysis Challenge. In this article, we present the datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of the dental anatomy data repository of bitewing radiographs, the creation of the anatomical abnormality classification data repository of cephalometric radiographs, and the definition of objective quantitative evaluation for comparison and ranking of the algorithms. With this benchmark, seven automatic methods for analysing cephalometric X-ray images and two automatic methods for detecting bitewing radiography caries have been compared, and detailed quantitative evaluation results are presented in this paper. Based on the quantitative evaluation results, we believe automatic dental radiography analysis is still a challenging and unsolved problem. The datasets and the evaluation software will be made available to the research community, further encouraging future developments in this field. (http://www-o.ntust.edu.tw/~cweiwang/ISBI2015/).
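
    Landmark-detection entries in such challenges are commonly ranked by mean radial error and the success detection rate within a distance threshold; a sketch with invented coordinates and an assumed pixel spacing follows.

      # Mean radial error (mm) and success detection rate within 2 mm for
      # predicted vs. reference landmarks. Coordinates and spacing invented.
      import numpy as np

      def mre_and_sdr(pred, truth, pixel_mm=0.1, threshold_mm=2.0):
          """pred, truth: (n_landmarks, 2) arrays of pixel coordinates."""
          dist_mm = np.linalg.norm(pred - truth, axis=1) * pixel_mm
          return dist_mm.mean(), (dist_mm <= threshold_mm).mean()

      pred = np.array([[105.0, 202.0], [330.0, 415.0], [512.0, 640.0]])
      truth = np.array([[100.0, 200.0], [335.0, 410.0], [540.0, 660.0]])
      mre, sdr = mre_and_sdr(pred, truth)
      print(f"MRE = {mre:.2f} mm, SDR@2mm = {sdr:.0%}")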

  1. The ACRV Picking Benchmark (APB): A Robotic Shelf Picking Benchmark to Foster Reproducible Research

    OpenAIRE

    Leitner, Jürgen; Tow, Adam W.; Dean, Jake E.; Suenderhauf, Niko; Durham, Joseph W.; Cooper, Matthew; Eich, Markus; Lehnert, Christopher; Mangels, Ruben; McCool, Christopher; Kujala, Peter; Nicholson, Lachlan; Van Pham, Trung; Sergeant, James; Wu, Liao

    2016-01-01

    Robotic challenges like the Amazon Picking Challenge (APC) or the DARPA Challenges are an established and important way to drive scientific progress. They make research comparable on a well-defined benchmark with equal test conditions for all participants. However, such challenge events occur only occasionally, are limited to a small number of contestants, and the test conditions are very difficult to replicate after the main event. We present a new physical benchmark challenge for robotic pi...

  2. Benchmark 1 - Failure Prediction after Cup Drawing, Reverse Redrawing and Expansion Part A: Benchmark Description

    Science.gov (United States)

    Watson, Martin; Dick, Robert; Huang, Y. Helen; Lockley, Andrew; Cardoso, Rui; Santos, Abel

    2016-08-01

    This Benchmark is designed to predict the fracture of a food can after drawing, reverse redrawing and expansion. The aim is to assess different sheet metal forming difficulties such as plastic anisotropic earing and failure models (strain and stress based Forming Limit Diagrams) under complex nonlinear strain paths. To study these effects, two distinct materials, TH330 steel (unstoved) and AA5352 aluminum alloy are considered in this Benchmark. Problem description, material properties, and simulation reports with experimental data are summarized.

  3. Revaluering benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    Over the past decade, benchmarking has increasingly gained foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly that what may seem valuable is actually abstaining researchers and practitioners from studying and questioning the concept objectively. This paper addresses the underlying nature of benchmarking, and accounts for the importance of focusing attention on the sociological impacts benchmarking has in organizations. To understand these sociological impacts, benchmarking research needs to transcend ... This perspective develops more thorough knowledge about benchmarking and challenges the current dominating rationales. Hereby, it is argued that benchmarking is not a neutral practice. On the contrary, it is highly influenced by organizational ambitions and strategies, with the potential to transform organizational relations, behaviors and actions. In closing, it is briefly considered how to study the calculative practices of benchmarking.

  4. Effects of Exposure Imprecision on Estimation of the Benchmark Dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    Environmental epidemiology; exposure measurement error; effect of prenatal mercury exposure; exposure standards; benchmark dose.

  5. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    2005-01-01

    The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specifically, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking on a real software project and conclude with a glimpse at the challenges for the fu...

  6. Benchmarking of corporate social responsibility: Methodological problems and robustness

    OpenAIRE

    2004-01-01

    This paper investigates the possibilities and problems of benchmarking Corporate Social Responsibility (CSR). After a methodological analysis of the advantages and problems of benchmarking, we develop a benchmark method that includes economic, social and environmental aspects as well as national and international aspects of CSR. The overall benchmark is based on a weighted average of these aspects. The weights are based on the opinions of companies and NGOs. Using different me...

  7. AMS analyses at ANSTO

    Energy Technology Data Exchange (ETDEWEB)

    Lawson, E.M. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia). Physics Division

    1998-03-01

    The major use of ANTARES is Accelerator Mass Spectrometry (AMS) with {sup 14}C being the most commonly analysed radioisotope - presently about 35% of the available beam time on ANTARES is used for {sup 14}C measurements. The accelerator measurements are supported by, and dependent on, a strong sample preparation section. The ANTARES AMS facility supports a wide range of investigations into fields such as global climate change, ice cores, oceanography, dendrochronology, anthropology, and classical and Australian archaeology. Described here are some examples of the ways in which AMS has been applied to support research into the archaeology, prehistory and culture of this continent's indigenous Aboriginal peoples. (author)

  8. Code assessment and modelling for Design Basis Accident analysis of the European Sodium Fast Reactor design. Part II: Optimised core and representative transients analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lazaro, A., E-mail: aulach@iqn.upv.es [JRC-IET European Commission, Westerduinweg 3, PO BOX 2, 1755 ZG Petten (Netherlands); Schikorr, M. [KIT, Institute for Neutron Physics and Reactor Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Mikityuk, K. [PSI, Paul Scherrer Institut, 5232 Villigen (Switzerland); Ammirabile, L. [JRC-IET European Commission, Westerduinweg 3, PO BOX 2, 1755 ZG Petten (Netherlands); Bandini, G. [ENEA, Via Martiri di Monte Sole 4, 40129 Bologna (Italy); Darmet, G.; Schmitt, D. [EDF, 1 Avenue du Général de Gaulle, 92141 Clamart (France); Dufour, Ph.; Tosello, A. [CEA, St. Paul lez Durance, 13108 Cadarache (France); Gallego, E.; Jimenez, G. [UPM, José Gutiérrez Abascal, 2, 28006 Madrid (Spain); Bubelis, E.; Ponomarev, A.; Kruessmann, R.; Struwe, D. [KIT, Institute for Neutron Physics and Reactor Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Stempniewicz, M. [NRG, Utrechtseweg 310, P.O. Box-9034, 6800 ES Arnhem (Netherlands)

    2014-10-01

    Highlights: • Benchmarked models have been applied for the analysis of DBA transients of the ESFR design. • Two system codes are able to simulate the behavior of the system beyond sodium boiling. • The optimization of the core design and its influence in the transients’ evolution is described. • The analysis has identified peak values and grace times for the protection system design. - Abstract: The new reactor concepts proposed in the Generation IV International Forum require the development and validation of computational tools able to assess their safety performance. In the first part of this paper the models of the ESFR design developed by several organisations in the framework of the CP-ESFR project were presented and their reliability validated via a benchmarking exercise. This second part of the paper includes the application of those tools for the analysis of design basis accident (DBC) scenarios of the reference design. Further, this paper also introduces the main features of the core optimisation process carried out within the project with the objective to enhance the core safety performance through the reduction of the positive coolant density reactivity effect. The influence of this optimised core design on the reactor safety performance during the previously analysed transients is also discussed. The conclusion provides an overview of the work performed by the partners involved in the project towards the development and enhancement of computational tools specifically tailored to the evaluation of the safety performance of the Generation IV innovative nuclear reactor designs.

  9. Benchmarking a signpost to excellence in quality and productivity

    CERN Document Server

    Karlof, Bengt

    1993-01-01

    According to the authors, benchmarking exerts a powerful leverage effect on an organization and they consider some of the factors which justify their claim. Describes how to implement benchmarking and exactly what to benchmark. Explains benchlearning which integrates education, leadership development and organizational dynamics with the actual work being done and how to make it work more efficiently in terms of quality and productivity.

  10. Taking Stock of Corporate Benchmarking Practices: Panacea or Pandora's Box?

    Science.gov (United States)

    Fleisher, Craig S.; Burton, Sara

    1995-01-01

    Discusses why corporate communications/public relations (cc/pr) should be benchmarked (an approach used by cc/pr managers to demonstrate the value of their activities to skeptical organizational executives). Discusses myths about cc/pr benchmarking; types, targets, and focus of cc/pr benchmarking; a process model; and critical decisions about…

  11. 47 CFR 69.108 - Transport rate benchmark.

    Science.gov (United States)

    2010-10-01

    ... with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone company... benchmark ratio of 9.6 to 1 or higher. (c) If a telephone company's initial transport rates are based on...

  12. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    Science.gov (United States)

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  13. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  14. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  15. 29 CFR 1952.213 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  16. 29 CFR 1952.373 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  17. 29 CFR 1952.163 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  18. 29 CFR 1952.203 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  19. 29 CFR 1952.293 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  20. 29 CFR 1952.223 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  1. 29 CFR 1952.233 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  2. 29 CFR 1952.113 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  3. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.93 Section 1952.93... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were...

  4. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.353 Section 1952.353... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  5. Comparison and validation of HEU and LEU modeling results to HEU experimental benchmark data for the Massachusetts Institute of Technology MITR reactor.

    Energy Technology Data Exchange (ETDEWEB)

    Newton, T. H.; Wilson, E. H; Bergeron, A.; Horelik, N.; Stevens, J. (Nuclear Engineering Division); (MIT Nuclear Reactor Lab.)

    2011-03-02

    The Massachusetts Institute of Technology Reactor (MITR-II) is a research reactor in Cambridge, Massachusetts designed primarily for experiments using neutron beam and in-core irradiation facilities. It delivers a neutron flux comparable to current LWR power reactors in a compact 6 MW core using Highly Enriched Uranium (HEU) fuel. In the framework of its non-proliferation policies, the international community presently aims to minimize the amount of nuclear material available that could be used for nuclear weapons. In this geopolitical context, most research and test reactors both domestic and international have started a program of conversion to the use of Low Enriched Uranium (LEU) fuel. A new type of LEU fuel based on an alloy of uranium and molybdenum (UMo) is expected to allow the conversion of U.S. domestic high performance reactors like the MITR-II reactor. Towards this goal, comparisons of MCNP5 Monte Carlo neutronic modeling results for HEU and LEU cores have been performed. Validation of the model has been based upon comparison to HEU experimental benchmark data for the MITR-II. The objective of this work was to demonstrate a model which could represent the experimental HEU data, and therefore could provide a basis to demonstrate LEU core performance. This report presents an overview of MITR-II model geometry and material definitions which have been verified, and updated as required during the course of validation to represent the specifications of the MITR-II reactor. Results of calculations are presented for comparisons to historical HEU start-up data from 1975-1976, and to other experimental benchmark data available for the MITR-II Reactor through 2009. This report also presents results of steady state neutronic analysis of an all-fresh LEU fueled core. Where possible, HEU and LEU calculations were performed for conditions equivalent to HEU experiments, which serves as a starting point for safety analyses for conversion of MITR-II from the use of HEU

  6. Ice Cores

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Records of past temperature, precipitation, atmospheric trace gases, and other aspects of climate and environment derived from ice cores drilled on glaciers and ice...

  7. Core BPEL

    DEFF Research Database (Denmark)

    Hallwyl, Tim; Højsgaard, Espen

extensions. Combined with the fact that the language definition does not provide a formal semantics, it is an arduous task to work formally with the language (e.g. to give an implementation). In this paper we identify a core subset of the language, called Core BPEL, which has fewer and simpler constructs..., does not allow omissions, and does not contain ignorable elements. We do so by identifying syntactic sugar, including default values, and ignorable elements in WS-BPEL. The analysis results in a translation from the full language to the core subset. Thus, we reduce the effort needed for working... formally with WS-BPEL, as one, without loss of generality, need only consider the much simpler Core BPEL. This report may also be viewed as an addendum to the WS-BPEL standard specification, which clarifies the WS-BPEL syntax and presents the essential elements of the language in a more concise way...

  9. Core benefits

    National Research Council Canada - National Science Library

    Keith, Brian W

    2010-01-01

This SPEC Kit explores the core employment benefits of retirement, and life, health, and other insurance: benefits that are typically decided by the parent institution and often have significant governmental regulation...

  10. The Gaia FGK Benchmark Stars - High resolution spectral library

    CERN Document Server

    Blanco-Cuaresma, S; Jofré, P; Heiter, U

    2014-01-01

Context. An increasing number of high resolution stellar spectra are available today thanks to many past and ongoing spectroscopic surveys. Consequently, numerous methods have been developed to perform automatic spectral analysis on massive amounts of data. When reviewing published results, biases arise and need to be addressed and minimized. Aims. We provide a homogeneous library with a common set of calibration stars (known as the Gaia FGK Benchmark Stars) that will make it possible to assess stellar analysis methods and calibrate spectroscopic surveys. Methods. High resolution, high signal-to-noise spectra were compiled from different instruments. We developed an automatic process to homogenize the observed data and assess the quality of the resulting library. Results. We built a high quality library that will facilitate the assessment of spectral analyses and the calibration of present and future spectroscopic surveys. The automation of the process minimizes the human subjectivity and e...

  11. FDNS CFD Code Benchmark for RBCC Ejector Mode Operation

    Science.gov (United States)

    Holt, James B.; Ruf, Joe

    1999-01-01

    Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for both Diffusion and Afterburning (DAB) and Simultaneous Mixing and Combustion (SMC) test conditions. Results from both the 2D and the 3D models are presented.

  12. Benchmark values for forest soil carbon stocks in Europe

    DEFF Research Database (Denmark)

    De Vos, Bruno; Cools, Nathalie; Ilvesniemi, Hannu;

    2015-01-01

to the UN/ECE ICP Forests 16 × 16 km Level I network. Plots were sampled and analysed according to harmonized methods during the 2nd European Forest Soil Condition Survey. Using continuous carbon density depth functions, we estimated SOC stocks to 30-cm and 1-m depth, and stratified these stocks according to 22 WRB Reference Soil Groups (RSGs) and 8 humus forms to provide European scale benchmark values. Average SOC stocks amounted to 22.1 t C ha−1 in forest floors, 108 t C ha−1 in mineral soils and 578 t C ha−1 in peat soils, to 1 m depth. Relative to 1-m stocks, the vertical SOC distribution...
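
As a rough illustration of the depth-function approach described above, the sketch below integrates an assumed exponential carbon density profile to 30 cm and 1 m. The functional form and all parameter values are illustrative assumptions, not the fitted functions of the survey.

```python
# Minimal sketch: SOC stock as the integral of a continuous carbon
# density depth function rho(z) = rho0 * exp(-k*z). The exponential
# form and parameters are assumptions, not the study's fitted functions.
import math

def soc_stock(rho0: float, k: float, depth_m: float) -> float:
    """Stock (t C/ha) from integrating rho0*exp(-k*z) over [0, depth_m]."""
    return rho0 / k * (1.0 - math.exp(-k * depth_m))

rho0, k = 180.0, 2.0  # hypothetical surface density (t C/ha/m) and decay (1/m)
print(f"SOC to 30 cm: {soc_stock(rho0, k, 0.30):6.1f} t C/ha")
print(f"SOC to 1 m:   {soc_stock(rho0, k, 1.00):6.1f} t C/ha")
```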

  13. Characterization of addressability by simultaneous randomized benchmarking

    CERN Document Server

    Gambetta, Jay M; Merkel, S T; Johnson, B R; Smolin, John A; Chow, Jerry M; Ryan, Colm A; Rigetti, Chad; Poletto, S; Ohki, Thomas A; Ketchen, Mark B; Steffen, M

    2012-01-01

    The control and handling of errors arising from cross-talk and unwanted interactions in multi-qubit systems is an important issue in quantum information processing architectures. We introduce a benchmarking protocol that provides information about the amount of addressability present in the system and implement it on coupled superconducting qubits. The protocol consists of randomized benchmarking each qubit individually and then simultaneously, and the amount of addressability is related to the difference of the average gate fidelities of those experiments. We present the results on two similar samples with different amounts of cross-talk and unwanted interactions, which agree with predictions based on simple models for the amount of residual coupling.
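
A minimal sketch of the protocol's arithmetic on synthetic data: fit the standard RB decay F(m) = A·p^m + B to the individual and the simultaneous experiments, then compare the implied average gate fidelities. The decay model, parameters and data below are illustrative assumptions, not the paper's measurements.

```python
# Fit RB decays and estimate addressability as a fidelity difference.
import numpy as np
from scipy.optimize import curve_fit

def decay(m, A, p, B):
    return A * p**m + B

def avg_gate_fidelity(p, d=2):
    # average gate fidelity implied by the depolarizing parameter p
    return p + (1 - p) / d

m = np.arange(1, 100, 5)
rng = np.random.default_rng(0)
# synthetic data: slightly faster decay when both qubits are driven
f_ind = decay(m, 0.5, 0.995, 0.5) + rng.normal(0, 0.002, m.size)
f_sim = decay(m, 0.5, 0.990, 0.5) + rng.normal(0, 0.002, m.size)

p_ind = curve_fit(decay, m, f_ind, p0=(0.5, 0.99, 0.5))[0][1]
p_sim = curve_fit(decay, m, f_sim, p0=(0.5, 0.99, 0.5))[0][1]
delta = abs(avg_gate_fidelity(p_ind) - avg_gate_fidelity(p_sim))
print(f"addressability estimate ~ {delta:.4f}")
```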

  14. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

The CASL neutronics simulator MPACT is under development for neutronics and T-H coupled simulation of pressurized water reactors. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. It is a challenge to validate the depletion capability because of insufficient measured data. One indirect way to validate it is to perform a code-to-code comparison for benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.

  15. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.
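
For orientation, a minimal sketch of the STREAM "triad" kernel (a = b + s*c) behind the reported bandwidth figures. The array size, repetition count and NumPy implementation are assumptions; an interpreted version only approximates the compiled benchmark.

```python
# Time the STREAM triad kernel and report an approximate bandwidth.
import numpy as np, time

n, s, reps = 10_000_000, 3.0, 10
b, c = np.random.rand(n), np.random.rand(n)

best = float("inf")
for _ in range(reps):
    t0 = time.perf_counter()
    a = b + s * c                    # the triad kernel
    best = min(best, time.perf_counter() - t0)

bytes_moved = 3 * n * 8              # STREAM counts: read b, read c, write a
print(f"triad bandwidth ~ {bytes_moved / best / 1e9:.1f} GB/s")
```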

  16. The PROOF benchmark suite measuring PROOF performance

    Science.gov (United States)

    Ryu, S.; Ganis, G.

    2012-06-01

The PROOF benchmark suite is a new utility suite of PROOF to measure performance and scalability. The primary goal of the benchmark suite is to determine optimal configuration parameters for a set of machines to be used as a PROOF cluster. The suite measures the performance of the cluster for a set of standard tasks as a function of the number of effective processes. Cluster administrators can use the suite to measure the performance of the cluster and find optimal configuration parameters. PROOF developers can also utilize the suite to measure performance, identify problems and improve their software. In this paper, the new tool is explained in detail and use cases are presented to illustrate its application.

  17. Non-judgemental Dynamic Fuel Cycle Benchmarking

    CERN Document Server

    Scopatz, Anthony Michael

    2015-01-01

    This paper presents a new fuel cycle benchmarking analysis methodology by coupling Gaussian process regression, a popular technique in Machine Learning, to dynamic time warping, a mechanism widely used in speech recognition. Together they generate figures-of-merit that are applicable to any time series metric that a benchmark may study. The figures-of-merit account for uncertainty in the metric itself, utilize information across the whole time domain, and do not require that the simulators use a common time grid. Here, a distance measure is defined that can be used to compare the performance of each simulator for a given metric. Additionally, a contribution measure is derived from the distance measure that can be used to rank order the importance of fuel cycle metrics. Lastly, this paper warns against using standard signal processing techniques for error reduction. This is because it is found that error reduction is better handled by the Gaussian process regression itself.
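
A minimal sketch of the coupling described above: smooth each simulator's metric series with Gaussian process regression, then compare the smoothed curves with dynamic time warping so that no common time grid is required. The RBF kernel, toy data and plain DTW recursion are assumptions, not the paper's exact formulation.

```python
# GP-smooth two metric time series on different grids, then DTW-compare.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def dtw(x, y):
    """Classic O(len(x)*len(y)) dynamic time warping distance."""
    D = np.full((len(x) + 1, len(y) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

rng = np.random.default_rng(0)
t1 = np.linspace(0, 10, 40)[:, None]   # simulator 1 time grid
t2 = np.linspace(0, 10, 25)[:, None]   # simulator 2 time grid (different)
y1 = np.sin(t1).ravel() + 0.05 * rng.normal(size=40)
y2 = np.sin(t2).ravel() + 0.05 * rng.normal(size=25)

grid = np.linspace(0, 10, 100)[:, None]
gp1 = GaussianProcessRegressor(RBF(1.0)).fit(t1, y1)
gp2 = GaussianProcessRegressor(RBF(1.0)).fit(t2, y2)
print(f"DTW distance: {dtw(gp1.predict(grid), gp2.predict(grid)):.3f}")
```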

  18. Argonne Code Center: benchmark problem book

    Energy Technology Data Exchange (ETDEWEB)

    1977-06-01

This report is a supplement to the original report, published in 1968, as revised. The Benchmark Problem Book is intended to serve as a source book of solutions to mathematically well-defined problems for which either analytical or very accurate approximate solutions are known. This supplement contains problems in eight new areas: two-dimensional (R-z) reactor model; multidimensional (Hex-z) HTGR model; PWR thermal hydraulics - flow between two channels with different heat fluxes; multidimensional (x-y-z) LWR model; neutron transport in a cylindrical 'black' rod; neutron transport in a BWR rod bundle; multidimensional (x-y-z) BWR model; and neutronic depletion benchmark problems. This supplement contains only the additional pages and those requiring modification. (RWR)

  19. Assessing and benchmarking multiphoton microscopes for biologists.

    Science.gov (United States)

    Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F

    2014-01-01

Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to its superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found that few standard ways have been described in the literature to distinguish between microscopes or to benchmark existing microscopes by measuring their overall quality and efficiency. Here, we discuss some simple parameters and methods that can be used either within a multiphoton facility or by a prospective purchaser to benchmark performance. This can assist both in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs.

  20. Active vibration control of nonlinear benchmark buildings

    Institute of Scientific and Technical Information of China (English)

    ZHOU Xing-de; CHEN Dao-zheng

    2007-01-01

Existing nonlinear model reduction methods are unsuited to nonlinear benchmark buildings because their vibration equations form a non-affine system. Meanwhile, controllers designed directly by nonlinear control strategies have a high order and are difficult to apply in practice. Therefore, a new active vibration control approach that suits nonlinear buildings is proposed. The idea of the proposed approach is based on model identification and structural model linearization, exerting the control force on the built model according to the force action principle. The proposed approach is more practicable because the built model can be reduced by the balance reduction method based on the empirical Grammian matrix. A three-story benchmark structure is presented, and the simulation results illustrate that the proposed method is viable for civil engineering structures.

  1. Direct data access protocols benchmarking on DPM

    CERN Document Server

    Furano, Fabrizio; Keeble, Oliver; Mancinelli, Valentina

    2015-01-01

The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in the last years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring infor...

  2. Physics benchmarks of the VELO upgrade

    CERN Document Server

    Eklund, Lars

    2017-01-01

    The LHCb Experiment at the LHC is successfully performing precision measurements primarily in the area of flavour physics. The collaboration is preparing an upgrade that will start taking data in 2021 with a trigger-less readout at five times the current luminosity. The vertex locator has been crucial in the success of the experiment and will continue to be so for the upgrade. It will be replaced by a hybrid pixel detector and this paper discusses the performance benchmarks of the upgraded detector. Despite the challenging experimental environment, the vertex locator will maintain or improve upon its benchmark figures compared to the current detector. Finally the long term plans for LHCb, beyond those of the upgrade currently in preparation, are discussed.

  3. Experiences in Benchmarking of Autonomic Systems

    Science.gov (United States)

    Etchevers, Xavier; Coupaye, Thierry; Vachet, Guy

Autonomic computing promises improvements of systems quality of service in terms of availability, reliability, performance, security, etc. However, little research and few experimental results have so far demonstrated this assertion, or provided proof of the return on investment stemming from the efforts that introducing autonomic features requires. Existing works in the area of benchmarking of autonomic systems can be characterized by their qualitative and fragmented approaches. A crucial need remains to provide generic (i.e. independent from business, technology, architecture and implementation choices) autonomic computing benchmarking tools for evaluating and/or comparing autonomic systems from a technical and, ultimately, an economical point of view. This article introduces a methodology and a process for defining and evaluating factors, criteria and metrics in order to qualitatively and quantitatively assess autonomic features in computing systems. It also discusses associated experimental results on three different autonomic systems.

  4. Toxicological benchmarks for wildlife. Environmental Restoration Program

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

This report presents toxicological benchmarks for assessment of effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.
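
One common way such wildlife benchmarks are derived from laboratory test data is cross-species body-weight scaling; the sketch below shows that general idea. The 1/4-power exponent and every numeric value are illustrative assumptions, not values taken from this report.

```python
# Scale a test-species NOAEL to a wildlife species by body weight.
def wildlife_noael(test_noael: float, bw_test_kg: float,
                   bw_wildlife_kg: float, exponent: float = 0.25) -> float:
    """NOAEL in mg/kg/day scaled by (bw_test/bw_wildlife)**exponent."""
    return test_noael * (bw_test_kg / bw_wildlife_kg) ** exponent

# Hypothetical: a 10 mg/kg/day rat NOAEL (0.35 kg) scaled to a red fox (4.5 kg)
print(f"{wildlife_noael(10.0, 0.35, 4.5):.2f} mg/kg/day")
```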

  5. Benchmarking Nature Tourism between Zhangjiajie and Repovesi

    OpenAIRE

    Wu, Zhou

    2014-01-01

Since nature tourism became a booming business in modern society, more and more tourists choose nature-based destinations for their holidays. Finding ways to promote Repovesi national park is therefore significant, in a bid to reinforce the park's competitiveness. The topic of this thesis is both to identify good marketing strategies used by Zhangjiajie national park, via benchmarking, and to provide some suggestions to Repovesi national park. The Method used in t...

  6. Benchmarking Performance of Web Service Operations

    OpenAIRE

    Zhang, Shuai

    2011-01-01

Web services are often used for retrieving data from servers providing information of different kinds. A data providing web service operation returns collections of objects for a given set of arguments without any side effects. In this project a web service benchmark (WSBENCH) is developed to simulate the performance of web service calls. Web service operations are specified as SQL statements. The function generator of WSBENCH converts user-specified SQL queries into functions and automatical...

  7. Felix Stub Generator and Benchmarks Generator

    CERN Document Server

    Valenciano, Jose Jaime

    2014-01-01

This report discusses two projects I have been working on during my summer studentship in the context of the FELIX upgrade for ATLAS. The first project concerns the automated code generation needed to support and speed up the FELIX firmware and software development cycle. The second project required the execution and analysis of benchmarks of the FELIX data-decoding software as a function of data sizes, number of threads and number of data blocks.

  8. Benchmarking polish basic metal manufacturing companies

    Directory of Open Access Journals (Sweden)

    P. Pomykalski

    2014-01-01

Basic metal manufacturing companies are undergoing substantial strategic changes resulting from global changes in demand. During such periods managers should closely monitor and benchmark the financial results of companies operating in their sector. Proper and timely identification of the consequences of changes in these areas may be crucial as managers seek to exploit opportunities and avoid threats. The paper examines changes in financial ratios of basic metal manufacturing companies operating in Poland in the period 2006-2011.

  9. BENCHMARK AS INSTRUMENT OF CRISIS MANAGEMENT

    OpenAIRE

    Haievskyi, Vladyslav

    2017-01-01

The article determines the essence of the benchmark through a synthesis of the concepts "benchmark" and "crisis management". As an instrument of crisis management, the benchmark is a powerful tool with which an entity carries out comparative analysis of its processes and activities, allowing it to reduce production costs under resource constraints, raise profit, and succeed in optimizing its strategic activities.

  10. Self-interacting Dark Matter Benchmarks

    OpenAIRE

    Kaplinghat, M.; Tulin, S.; Yu, H-B

    2017-01-01

    Dark matter self-interactions have important implications for the distributions of dark matter in the Universe, from dwarf galaxies to galaxy clusters. We present benchmark models that illustrate characteristic features of dark matter that is self-interacting through a new light mediator. These models have self-interactions large enough to change dark matter densities in the centers of galaxies in accord with observations, while remaining compatible with large-scale structur...

  11. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    Science.gov (United States)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the

  12. A PWR Thorium Pin Cell Burnup Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.

    2000-05-01

    As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium fueled benchmark comparison was made in this study between state-of-the-art codes, MOCUP (MCNP4B + ORIGEN2), and CASMO-4 for burnup calculations. The MOCUP runs were done individually at MIT and INEEL, using the same model but with some differences in techniques and cross section libraries. Eigenvalue and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2% and the average absolute difference less than 1%. The isotope concentration comparisons are better than a set of MOX fuel benchmarks and comparable to a set of uranium fuel benchmarks reported in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code vs code differences are analyzed and discussed.
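
Code-to-code eigenvalue comparisons of this kind are often quoted as reactivity differences in pcm; the sketch below shows the conversion. The k-eff pairs are made-up placeholders, not values from the MOCUP/CASMO-4 study.

```python
# Express a k-eff difference between two codes as a reactivity difference.
def delta_rho_pcm(k_ref: float, k_cmp: float) -> float:
    """Reactivity difference (1/k_ref - 1/k_cmp) in pcm (1e-5)."""
    return (1.0 / k_ref - 1.0 / k_cmp) * 1e5

# hypothetical k-eff pairs at three burnup points (MWd/kg)
for burnup, k_a, k_b in [(0.0, 1.32105, 1.32034),
                         (30.0, 1.05210, 1.04987),
                         (60.0, 0.93412, 0.93267)]:
    print(f"{burnup:5.1f} MWd/kg: {delta_rho_pcm(k_a, k_b):+8.1f} pcm")
```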

  13. Perspective: Selected benchmarks from commercial CFD codes

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, C.J. [Southwest Research Inst., San Antonio, TX (United States). Computational Mechanics Section

    1995-06-01

This paper summarizes the results of a series of five benchmark simulations which were completed using commercial Computational Fluid Dynamics (CFD) codes. These simulations were performed by the vendors themselves, and then reported by them in ASME's CFD Triathlon Forum and CFD Biathlon Forum. The first group of benchmarks consisted of three laminar flow problems. These were the steady, two-dimensional flow over a backward-facing step, the low Reynolds number flow around a circular cylinder, and the unsteady three-dimensional flow in a shear-driven cubical cavity. The second group of benchmarks consisted of two turbulent flow problems. These were the two-dimensional flow around a square cylinder with periodic separated flow phenomena, and the steady, three-dimensional flow in a 180-degree square bend. All simulation results were evaluated against existing experimental data and thereby satisfied item 10 of the Journal's policy statement for numerical accuracy. The objective of this exercise was to provide the engineering and scientific community with a common reference point for the evaluation of commercial CFD codes.

  14. Benchmarks of support in internal medicine residency training programs.

    Science.gov (United States)

    Wolfsthal, Susan D; Beasley, Brent W; Kopelman, Richard; Stickley, William; Gabryel, Timothy; Kahn, Marc J

    2002-01-01

To identify benchmarks of financial and staff support in internal medicine residency training programs and their correlation with indicators of quality. A survey instrument to determine characteristics of support of residency training programs was mailed to each member program of the Association of Program Directors of Internal Medicine. Results were correlated with the three-year running average of the pass rates on the American Board of Internal Medicine certifying examination using bivariate and multivariate analyses. Of 394 surveys, 287 (73%) were completed: 74% of respondents were program directors and 20% were both chair and program director. The mean duration as program director was 7.5 years (median = 5), but it was significantly lower for women than for men (4.9 versus 8.1; p = .001). Respondents spent 62% of their time in educational and administrative duties, 30% in clinical activities, 5% in research, and 2% in other activities. Most chief residents were PGY4s, with 72% receiving compensation additional to base salary. On average, there was one associate program director for every 33 residents, one chief resident for every 27 residents, and one staff person for every 21 residents. Most programs provided trainees with incremental educational stipends, meals while on call, travel and meeting expenses, and parking. Support from pharmaceutical companies was used for meals, books, and meeting expenses. Almost all programs provided meals for applicants, with 15% providing travel allowances and 37% providing lodging. The programs' board pass rates significantly correlated with the numbers of faculty full-time equivalents (FTEs), the numbers of resident FTEs per office staff FTEs, and the numbers of categorical and preliminary applications received and ranked by the programs in 1998 and 1999. Regression analyses demonstrated three independent predictors of the programs' board pass rates: number of faculty (a positive predictor), percentage of clinical work

  15. Supply chain integration scales validation and benchmark values

    Directory of Open Access Journals (Sweden)

    Juan A. Marin-Garcia

    2013-06-01

Purpose: The clarification of the constructs of supply chain integration (clients, suppliers, external and internal), the creation of a measurement instrument based on a list of items taken from earlier papers, the validation of these scales and a preliminary benchmark to interpret the scales by percentiles based on a set of control variables (size of the plant, country, sector and degree of vertical integration). Design/methodology/approach: Our empirical analysis is based on the HPM project database (2005-2007 timeframe). The international sample is made up of 266 plants across ten countries: Austria, Canada, Finland, Germany, Italy, Japan, Korea, Spain, Sweden and the USA. In each country we analyzed the descriptive statistics, internal consistency testing to purify the items (inter-item correlations, Cronbach's alpha, squared multiple correlation, corrected item-total correlation), exploratory factor analysis and, finally, a confirmatory factor analysis to check the convergent and discriminant validity of the scales. The analyses were done with the SPSS and EQS programs using the maximum likelihood parameter estimation method. Findings: The four proposed scales show excellent psychometric properties. Research limitations/implications: with a clearer and more concise designation of the supply chain integration measurement scales, more reliable and accurate data could be taken to analyse the relations between these constructs and other variables of interest to the academic field. Practical implications: providing scales that are valid as a diagnostic tool for best practices, as well as providing a benchmark with which to compare the score for each individual plant against a collection of industrial companies from the machinery, electronics and transportation sectors. Originality/value: supply chain integration may be a major factor in explaining the performance of companies. The results are nevertheless inconclusive, the vast range
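
As a pointer to one of the internal-consistency steps named above, the sketch below computes Cronbach's alpha for a k-item scale; the synthetic response matrix is a placeholder, not HPM project data.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances)/variance of totals)
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of scale responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 1))                   # one underlying construct
responses = latent + rng.normal(0.0, 0.6, (100, 4))  # four noisy items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```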

  16. SPICE benchmark for global tomographic methods

    Science.gov (United States)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul; Lebedev, Sergei; Beucler, Eric

    2008-11-01

    The existing global tomographic methods result in different models due to different parametrization, scale resolution and theoretical approach. To test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, it is necessary to perform a global-scale benchmark to understand the resolving properties of each specific imaging algorithm. In the framework of the Seismic wave Propagation and Imaging in Complex media: a European network (SPICE) project, it was decided to perform a benchmark experiment of global inversion algorithms. First, a preliminary benchmark with a simple isotropic model is carried out to check the feasibility in terms of acquisition geometry and numerical accuracy. Then, to fully validate tomographic schemes with a challenging synthetic data set, we constructed one complex anisotropic global model, which is characterized by 21 elastic constants and includes 3-D heterogeneities in velocity, anisotropy (radial and azimuthal anisotropy), attenuation, density, as well as surface topography and bathymetry. The intermediate-period (>32 s), high fidelity anisotropic modelling was performed by using state-of-the-art anisotropic anelastic modelling code, that is, coupled spectral element method (CSEM), on modern massively parallel computing resources. The benchmark data set consists of 29 events and three-component seismograms are recorded by 256 stations. Because of the limitation of the available computing power, synthetic seismograms have a minimum period of 32 s and a length of 10 500 s. The inversion of the benchmark data set demonstrates several well-known problems of classical surface wave tomography, such as the importance of crustal correction to recover the shallow structures, the loss of resolution with depth, the smearing effect, both horizontal and vertical, the inaccuracy of amplitude of isotropic S-wave velocity variation, the difficulty of retrieving the magnitude of azimuthal

  17. State of the art: benchmarking microprocessors for embedded automotive applications

    Directory of Open Access Journals (Sweden)

    Adnan Shaout

    2016-09-01

Benchmarking microprocessors provides a way for consumers to evaluate the performance of the processors. This is done by using either synthetic or real world applications. A number of benchmarks exist today to assist consumers in evaluating the vast number of microprocessors that are available in the market. In this paper an investigation of the various benchmarks available for evaluating microprocessors for embedded automotive applications is performed. We provide an overview of the following benchmarks: Whetstone, Dhrystone, Linpack, Standard Performance Evaluation Corporation (SPEC) CPU2006, Embedded Microprocessor Benchmark Consortium (EEMBC) AutoBench and MiBench. A comparison of existing benchmarks is given based on relevant characteristics of automotive applications, which will give the proper recommendation when benchmarking processors for automotive applications.

  18. Energy saving in WWTP: Daily benchmarking under uncertainty and data availability limitations.

    Science.gov (United States)

    Torregrossa, D; Schutz, G; Cornelissen, A; Hernández-Sancho, F; Hansen, J

    2016-07-01

Efficient management of Waste Water Treatment Plants (WWTPs) can produce significant environmental and economic benefits. Energy benchmarking can be used to compare WWTPs, identify targets and use these to improve their performance. Different authors have performed benchmark analyses on a monthly or yearly basis, but their approaches suffer from a time lag between an event, its detection, interpretation and potential actions. The availability of on-line measurement data on many WWTPs should theoretically enable a decrease of the management response time by daily benchmarking. Unfortunately this approach is often impossible because of limited data availability. This paper proposes a methodology to perform a daily benchmark analysis under database limitations. The methodology has been applied to the Energy Online System (EOS) developed in the framework of the project "INNERS" (INNovative Energy Recovery Strategies in the urban water cycle). EOS calculates a set of Key Performance Indicators (KPIs) for the evaluation of energy and process performances. In EOS, the energy KPIs take the pollutant load into consideration in order to enable the comparison between different plants. For example, EOS does not analyse the energy consumption but the energy consumption per pollutant load. This approach enables the comparison of performances for plants with different loads or for a single plant under different load conditions. The energy consumption is measured by on-line sensors, while the pollutant load is measured in the laboratory approximately every 14 days. Consequently, the unavailability of the water quality parameters is the limiting factor in calculating energy KPIs. In this paper, in order to overcome this limitation, the authors have developed a methodology to estimate the required parameters and manage the uncertainty in the estimation. By coupling the parameter estimation with an interval based benchmark approach, the authors propose an effective, fast and reproducible
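
A minimal sketch of the limiting step described above: the daily energy KPI needs a daily pollutant load, but the load is only measured in the laboratory every ~14 days, so it is estimated in between and carried with an uncertainty interval. The linear interpolation and all numbers are assumptions, not the EOS methodology itself.

```python
# Estimate daily pollutant load between lab samples, then bound the KPI.
import numpy as np

lab_days = np.array([0, 14, 28])                 # days with lab COD analyses
lab_load = np.array([4200.0, 4600.0, 4400.0])    # measured load, kg COD/day
rel_unc = 0.15                                   # assumed estimation uncertainty

day = np.arange(29)
load = np.interp(day, lab_days, lab_load)        # daily load estimate
lo, hi = load * (1 - rel_unc), load * (1 + rel_unc)

energy = np.full(day.size, 3000.0)               # kWh/day from on-line meters
kpi_lo, kpi_hi = energy / hi, energy / lo        # kWh per kg COD, as interval
print(f"day 7 KPI in [{kpi_lo[7]:.2f}, {kpi_hi[7]:.2f}] kWh/kg COD")
```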

  19. Validation of VHTRC calculation benchmark of critical experiment using the MCB code

    Directory of Open Access Journals (Sweden)

    Stanisz Przemysław

    2016-01-01

The calculation benchmark problem of the Very High Temperature Reactor Critical (VHTRC) assembly, a pin-in-block type core critical assembly, has been investigated with the Monte Carlo Burnup (MCB) code in order to validate the latest versions of nuclear data libraries based on the ENDF format. The benchmark was executed on the basis of the VHTRC benchmark available from the International Handbook of Evaluated Reactor Physics Benchmark Experiments. This benchmark is useful for verifying the discrepancies in keff values between various libraries and experimental values, which makes it possible to improve the accuracy of the neutron transport calculations and may help in designing high performance commercial VHTRs. Almost all safety parameters depend on the accuracy of neutron transport calculation results, which in turn depend on the accuracy of nuclear data libraries. Thus, evaluation of the libraries' applicability to VHTR modelling is one of the important subjects. We compared the numerical experiment results with experimental measurements using two versions of available nuclear data (ENDF/B-VII.1 and JEFF-3.2) prepared for the required temperatures. Calculations have been performed with the MCB code, which provides a very precise representation of the complex VHTRC geometry, including the double heterogeneity of a fuel element. In this paper, together with the impact of nuclear data, we also discuss the impact of different lattice modelling inside the fuel pins. The keff discrepancies show good agreement with each other and with the experimental data within the 1 σ range of the experimental uncertainty. Because some propagated discrepancies were observed, we propose appropriate corrections to the experimental constants which can improve the reactivity coefficient dependency. The obtained results confirm the accuracy of the new nuclear data libraries.

  20. NEUTRON RADIOGRAPHY (NRAD) REACTOR 64-ELEMENT CORE UPGRADE

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2014-03-01

The neutron radiography (NRAD) reactor is a 250 kW TRIGA® (Training, Research, Isotopes, General Atomics) Mark II, tank-type research reactor currently located in the basement, below the main hot cell, of the Hot Fuel Examination Facility (HFEF) at the Idaho National Laboratory (INL). It is equipped with two beam tubes with separate radiography stations for the performance of neutron radiography irradiation on small test components. The interim critical configuration developed during the core upgrade, which contains only 62 fuel elements, has been evaluated as an acceptable benchmark experiment. The final 64-fuel-element operational core configuration of the NRAD LEU TRIGA reactor has also been evaluated as an acceptable benchmark experiment. Calculated eigenvalues differ significantly (approximately ±1%) from the benchmark eigenvalue and have demonstrated sensitivity to the thermal scattering treatment of hydrogen in the U-Er-Zr-H fuel.

  1. Pressure Core Characterization

    Science.gov (United States)

    Santamarina, J. C.

    2014-12-01

Natural gas hydrates form under high fluid pressure and low temperature, and are found in permafrost, deep lakes or ocean sediments. Hydrate dissociation by depressurization and/or heating is accompanied by a multifold hydrate volume expansion, and host sediments with low permeability experience massive destructuration. Proper characterization requires coring, recovery, manipulation and testing under P-T conditions within the stability field. Pressure core technology allows for the reliable characterization of hydrate bearing sediments within the stability field in order to address scientific and engineering needs, including the measurement of parameters used in hydro-thermo-mechanical analyses, and the monitoring of hydrate dissociation under controlled pressure, temperature, effective stress and chemical conditions. Inherent sampling effects remain and need to be addressed in test protocols and data interpretation. Pressure core technology has been deployed to study hydrate bearing sediments at several locations around the world. In addition to pressure core testing, a comprehensive characterization program should include sediment analysis, testing of reconstituted specimens (with and without synthetic hydrate), and in situ testing. Pressure core characterization technology can be used to study other gas-charged formations such as deep sea sediments, coal bed methane and gas shales.

  2. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom; Javier Ortensi; Sonat Sen; Hans Hammer

    2013-09-01

The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III

  3. Performance benchmarking and incentive regulation. Considerations of directing signals for electricity distribution companies

    Energy Technology Data Exchange (ETDEWEB)

    Honkapuro, S.

    2008-07-01

    After the restructuring process of the power supply industry, which for instance in Finland took place in the mid-1990s, free competition was introduced for the production and sale of electricity. Nevertheless, natural monopolies are found to be the most efficient form of production in the transmission and distribution of electricity, and therefore such companies remained franchised monopolies. To prevent the misuse of the monopoly position and to guarantee the rights of the customers, regulation of these monopoly companies is required. One of the main objectives of the restructuring process has been to increase the cost efficiency of the industry. Simultaneously, demands for the service quality are increasing. Therefore, many regulatory frameworks are being, or have been, reshaped so that companies are provided with stronger incentives for efficiency and quality improvements. Performance benchmarking has in many cases a central role in the practical implementation of such incentive schemes. Economic regulation with performance benchmarking attached to it provides companies with directing signals that tend to affect their investment and maintenance strategies. Since the asset lifetimes in the electricity distribution are typically many decades, investment decisions have far-reaching technical and economic effects. This doctoral thesis addresses the directing signals of incentive regulation and performance benchmarking in the field of electricity distribution. The theory of efficiency measurement and the most common regulation models are presented. The chief contributions of this work are (1) a new kind of analysis of the regulatory framework, so that the actual directing signals of the regulation and benchmarking for the electricity distribution companies are evaluated, (2) developing the methodology and a software tool for analysing the directing signals of the regulation and benchmarking in the electricity distribution sector, and (3) analysing the real

  4. Towards Systematic Benchmarking of Climate Model Performance

    Science.gov (United States)

    Gleckler, P. J.

    2014-12-01

The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. Making the results from routine

  5. A Uranium Bioremediation Reactive Transport Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Yabusaki, Steven B.; Sengor, Sevinc; Fang, Yilin

    2015-06-01

A reactive transport benchmark problem set has been developed based on in situ uranium bio-immobilization experiments that have been performed at a former uranium mill tailings site in Rifle, Colorado, USA. Acetate-amended groundwater stimulates indigenous microorganisms to catalyze the reduction of U(VI) to a sparingly soluble U(IV) mineral. The interplay between the flow, acetate loading periods and rates, microbially-mediated and geochemical reactions leads to dynamic behavior in metal- and sulfate-reducing bacteria, pH, alkalinity, and reactive mineral surfaces. The benchmark is based on an 8.5 m long one-dimensional model domain with constant saturated flow and uniform porosity. The 159-day simulation introduces acetate and bromide through the upgradient boundary in 14-day and 85-day pulses separated by a 10 day interruption. Acetate loading is tripled during the second pulse, which is followed by a 50 day recovery period. Terminal electron accepting processes for goethite, phyllosilicate Fe(III), U(VI), and sulfate are modeled using Monod-type rate laws. Major ion geochemistry modeled includes mineral reactions, as well as aqueous and surface complexation reactions for UO2²⁺, Fe²⁺, and H⁺. In addition to the dynamics imparted by the transport of the acetate pulses, U(VI) behavior involves the interplay between bioreduction, which is dependent on acetate availability, and speciation-controlled surface complexation, which is dependent on pH, alkalinity and available surface complexation sites. The general difficulty of this benchmark is the large number of reactions (74), multiple rate law formulations, a multisite uranium surface complexation model, and the strong interdependency and sensitivity of the reaction processes. Results are presented for three simulators: HYDROGEOCHEM, PHT3D, and PHREEQC.
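
For orientation, a minimal sketch of a dual-Monod rate law of the kind used for the terminal electron accepting processes above; the functional form is standard, but every parameter value is an illustrative placeholder, not a calibrated benchmark rate.

```python
# Dual-Monod rate: k_max * X * S_d/(K_d + S_d) * S_a/(K_a + S_a)
def monod_rate(k_max: float, biomass: float,
               donor: float, K_donor: float,
               acceptor: float, K_acceptor: float) -> float:
    return (k_max * biomass
            * donor / (K_donor + donor)
            * acceptor / (K_acceptor + acceptor))

# hypothetical U(VI) bioreduction with acetate as the electron donor
rate = monod_rate(k_max=5e-5, biomass=1e-4,
                  donor=3e-3, K_donor=1e-4,        # acetate, mol/L
                  acceptor=1e-6, K_acceptor=5e-7)  # U(VI), mol/L
print(f"U(VI) reduction rate ~ {rate:.3e} mol/L/s")
```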

  6. Flexible Tagged Architecture for Trustworthy Multi-core Platforms

    Science.gov (United States)

    2015-06-01

and pre-processing: the core-fabric interface reduces the workload of the reconfigurable fabric by filtering out unnecessary types of instructions... embedded, automotive, and industrial applications. For our experiments, we used a FIFO of 16 entries connecting the main and monitoring cores...

  7. Benchmarks in Tacit Knowledge Skills Instruction

    DEFF Research Database (Denmark)

    Tackney, Charles T.; Strömgren, Ole; Sato, Toyoko

    2006-01-01

While the knowledge management literature has addressed the explicit and tacit skills needed for successful performance in the modern enterprise, little attention has been paid to date in this particular literature as to how these wide-ranging skills may be suitably acquired during the course... experience more empowering of essential tacit knowledge skills than that found in educational institutions in other national settings. We specify the program forms and procedures for consensus-based governance and group work (as benchmarks) that demonstrably instruct undergraduates in the tacit skill...

  8. An OpenMP Compiler Benchmark

    Directory of Open Access Journals (Sweden)

    Matthias S. Müller

    2003-01-01

The purpose of this benchmark is to propose several optimization techniques and to test their existence in current OpenMP compilers. Examples are the removal of redundant synchronization constructs, effective constructs for alternative code and orphaned directives. The effectiveness of the compiler-generated code is measured by comparing different OpenMP constructs and compilers. If possible, we also compare with the hand-coded "equivalent" solution. Six out of seven proposed optimization techniques are already implemented in different compilers. However, most compilers implement only one or two of them.

  9. Benchmarks of Global Clean Energy Manufacturing

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-01-01

    The Clean Energy Manufacturing Analysis Center (CEMAC), sponsored by the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE), provides objective analysis and up-to-date data on global supply chains and manufacturing of clean energy technologies. Benchmarks of Global Clean Energy Manufacturing sheds light on several fundamental questions about the global clean technology manufacturing enterprise: How does clean energy technology manufacturing impact national economies? What are the economic opportunities across the manufacturing supply chain? What are the global dynamics of clean energy technology manufacturing?

  10. International Benchmarking of Electricity Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2014-01-01

Electricity transmission system operators (TSO) in Europe are increasingly subject to high-powered performance-based regulation, such as revenue-cap regimes. The determination of the parameters in such regimes is challenging for national regulatory authorities (NRA), since there is normally a single TSO operating in each jurisdiction. The solution for European regulators has been found in international regulatory benchmarking, organized in collaboration with the Council of European Energy Regulators (CEER) in 2008 and 2012 for 22 and 23 TSOs, respectively. The frontier study provides static cost... weight restrictions and a correction method for opening balances...

  11. Benchmarking of methods for genomic taxonomy

    DEFF Research Database (Denmark)

    Larsen, Mette Voldby; Cosentino, Salvatore; Lukjancenko, Oksana

    2014-01-01

    Nevertheless, the method has been found to have a number of shortcomings. In the current study, we trained and benchmarked five methods for whole-genome sequence-based prokaryotic species identification on a common data set of complete genomes: (i) SpeciesFinder, which is based on the complete 16S rRNA gene......-specific functional protein domain profiles; and finally (v) KmerFinder, which examines the number of cooccurring k-mers (substrings of k nucleotides in DNA sequence data). The performances of the methods were subsequently evaluated on three data sets of short sequence reads or draft genomes from public databases...

  12. Robust randomized benchmarking of quantum processes

    CERN Document Server

    Magesan, Easwar; Emerson, Joseph

    2010-01-01

    We describe a simple randomized benchmarking protocol for quantum information processors and obtain a sequence of models for the observable fidelity decay as a function of a perturbative expansion of the errors. We are able to prove that the protocol provides an efficient and reliable estimate of an average error rate for a set of operations (gates) under a general noise model that allows for both time- and gate-dependent errors. We determine the conditions under which this estimate remains valid and illustrate the protocol through numerical examples.
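
    For orientation, the fidelity-decay model underlying such protocols is commonly written as follows (standard randomized-benchmarking notation, stated here for the reader rather than quoted from the paper):

    ```latex
    % Zeroth-order randomized-benchmarking decay: the average fidelity
    % after a sequence of m random gates decays exponentially in m,
    F(m) = A\,p^{m} + B,
    % where A and B absorb state-preparation and measurement errors.
    % The decay parameter p yields the average error rate per gate,
    r = \frac{(d-1)(1-p)}{d},
    % with d = 2^n the dimension of the n-qubit Hilbert space.
    ```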

  13. Benchmarking result diversification in social image retrieval

    DEFF Research Database (Denmark)

    Ionescu, Bogdan; Popescu, Adrian; Müller, Henning

    2014-01-01

    This article addresses the issue of retrieval result diversification in the context of social image retrieval and discusses the results achieved during the MediaEval 2013 benchmarking. 38 runs and their results are described and analyzed in this text. A comparison of the use of expert vs....... crowdsourcing annotations shows that crowdsourcing results are slightly different and have higher inter-observer differences, but results are comparable at lower cost. Multimodal approaches have the best results in terms of cluster recall. Manual approaches can lead to high precision but often lower diversity....... With this detailed results analysis we give future insights on this matter....

  14. Benchmarks for multicomponent diffusion and electrochemical migration

    DEFF Research Database (Denmark)

    Rasouli, Pejman; Steefel, Carl I.; Mayer, K. Ulrich

    2015-01-01

    In multicomponent electrolyte solutions, the tendency of ions to diffuse at different rates results in a charge imbalance that is counteracted by the electrostatic coupling between charged species leading to a process called “electrochemical migration” or “electromigration.” Although not commonly...... not been published to date. This contribution provides a set of three benchmark problems that demonstrate the effect of electric coupling during multicomponent diffusion and electrochemical migration and at the same time facilitate the intercomparison of solutions from existing reactive transport codes...
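
    As background, the electromigration coupling described here is usually expressed through the Nernst-Planck flux (a standard formulation given for orientation; the notation is not taken from the benchmark paper):

    ```latex
    % Nernst-Planck flux of aqueous species i: Fickian diffusion plus
    % electromigration in the electrostatic potential \psi,
    \mathbf{J}_i = -D_i \nabla c_i - \frac{z_i F}{RT}\, D_i c_i \nabla \psi ,
    % where D_i is the diffusion coefficient, c_i the concentration,
    % z_i the ionic charge, F the Faraday constant, R the gas constant,
    % and T the temperature. In the absence of an external field, \psi
    % adjusts so that the net current \sum_i z_i \mathbf{J}_i vanishes,
    % which couples the diffusing species to one another.
    ```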

  15. ABM11 parton distributions and benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Alekhin, Sergey [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institut Fiziki Vysokikh Ehnergij, Protvino (Russian Federation); Bluemlein, Johannes; Moch, Sven-Olaf [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2012-08-15

    We present a determination of the nucleon parton distribution functions (PDFs) and of the strong coupling constant α_s at next-to-next-to-leading order (NNLO) in QCD based on the world data for deep-inelastic scattering and the fixed-target data for the Drell-Yan process. The analysis is performed in the fixed-flavor number scheme for n_f = 3, 4, 5 and uses the MS-bar scheme for α_s and the heavy quark masses. The fit results are compared with other PDFs and used to compute the benchmark cross sections at hadron colliders to NNLO accuracy.

  16. A Benchmark Construction of Positron Crystal Undulator

    CERN Document Server

    Tikhomirov, Victor V

    2015-01-01

    Optimization of a positron crystal undulator (CU) is addressed. The ways to assure both the maximum intensity and the minimum spectral width of positron CU radiation are outlined. We claim that the minimum CU spectrum width of 3-4% is reached at positron energies of a few GeV and that the optimal bending radius of crystal planes in a CU ranges from 3 to 5 critical bending radii for channeled particles. Following the suggested approach, a benchmark positron CU design is devised and its functioning is illustrated using a simulation method widely tested against experimental data.

  17. Measurement Methods in the field of benchmarking

    Directory of Open Access Journals (Sweden)

    István Szűts

    2004-05-01

    In benchmarking we often come across parameters that are difficult to measure while executing comparisons or analyzing performance, yet they have to be compared and measured so as to be able to choose the best practices. The situation is similar in the case of complex, multidimensional evaluation, when the relative importance and order of the different dimensions and parameters to be evaluated have to be determined, or when the range of similar performance indicators has to be reduced for the sake of simpler comparisons. In such cases we can use the ordinal or interval scales of measurement elaborated by S. S. Stevens.

  18. Benchmarking research of steel companies in Europe

    Directory of Open Access Journals (Sweden)

    M. Antošová

    2013-07-01

    Steelworks are at present in a stage of permanent change, marked by ever stronger competitive pressure. Managers must therefore work out how to decrease production costs, how to overcome the competition, and how to survive in the world market. More attention should be paid to modern managerial methods of market research and comparison with competitors. Benchmarking research is one of the effective tools for such research. The goal of this contribution is to compare chosen steelworks and to indicate new directions for their development, with the possibility of increasing the productivity of steel production.

  19. Monitoring and Evaluating Projects: A Step-by-Step Primer on Monitoring, Benchmarking, and Impact Evaluation

    OpenAIRE

    Rebekka E. Grun

    2006-01-01

    This paper attempts to be a practical step-by-step guide to preparing and carrying out benchmarking and impact analyses of projects. Its purpose is to present analytical tools solidly grounded in economic theory while focusing on the practical questions of evaluations. Little space is given to theory, in order to spend more time on the actual steps involved, trying to make this ...

  1. The Benchmark Testing of 9Be of CENDL-3.0

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The 9Be data of CENDL-3.0 were updated again recently by Prof. Zhang Jing-shang et al. by using a new approach. In order to test the reliability of the 9Be data of CENDL-3.0, some benchmarks were used. In addition to the values of Keff, the leakage spectrum of the Be sphere was calculated. The data processing was carried out by using the NJOY nuclear data processing code system. The calculations and analyses of...

  2. Benchmark duration of work hours for development of fatigue symptoms in Japanese workers with adjustment for job-related stress.

    Science.gov (United States)

    Suwazono, Yasushi; Dochi, Mirei; Kobayashi, Etsuko; Oishi, Mitsuhiro; Okubo, Yasushi; Tanaka, Kumihiko; Sakata, Kouichi

    2008-12-01

    The objective of this study was to calculate benchmark durations and lower 95% confidence limits for benchmark durations of working hours associated with subjective fatigue symptoms by applying the benchmark dose approach while adjusting for job-related stress using multiple logistic regression analyses. A self-administered questionnaire was completed by 3,069 male and 412 female daytime workers (age 18-67 years) in a Japanese steel company. The eight dependent variables in the Cumulative Fatigue Symptoms Index were decreased vitality, general fatigue, physical disorders, irritability, decreased willingness to work, anxiety, depressive feelings, and chronic tiredness. Independent variables were daily working hours, four subscales (job demand, job control, interpersonal relationship, and job suitability) of the Brief Job Stress Questionnaire, and other potential covariates. Using significant parameters for working hours and those for other covariates, the benchmark durations of working hours were calculated for the corresponding Index property. Benchmark response was set at 5% or 10%. Assuming a condition of worst job stress, the benchmark duration/lower 95% confidence limit for benchmark duration of working hours per day with a benchmark response of 5% or 10% were 10.0/9.4 or 11.7/10.7 (irritability) and 9.2/8.9 or 10.4/9.8 (chronic tiredness) in men and 8.9/8.4 or 9.8/8.9 (chronic tiredness) in women. The threshold amounts of working hours for fatigue symptoms under the worst job-related stress were very close to the standard daily working hours in Japan. The results strongly suggest that special attention should be paid to employees whose working hours exceed threshold amounts based on individual levels of job-related stress.
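
    To make the benchmark dose approach concrete, the sketch below solves a fitted logistic dose-response model for the working-hours duration at which the excess risk over background reaches the benchmark response. This is a generic illustration of the method, not the study's actual model: the coefficients are invented, and the published analysis additionally adjusts for the job-stress covariates.

    ```python
    import math

    def benchmark_duration(b0, b1, bmr=0.05):
        """Benchmark duration from a logistic model P(x) = 1/(1+exp(-(b0+b1*x))).

        Returns the x (daily working hours) at which the excess risk over
        background, (P(x) - P(0)) / (1 - P(0)), equals the benchmark
        response bmr (0.05 or 0.10 in the study).
        """
        p0 = 1.0 / (1.0 + math.exp(-b0))       # background risk at x = 0
        p_bmd = p0 + bmr * (1.0 - p0)          # risk level that defines the BMD
        return (math.log(p_bmd / (1.0 - p_bmd)) - b0) / b1

    # Hypothetical coefficients, for illustration only.
    print(round(benchmark_duration(b0=-4.0, b1=0.35, bmr=0.05), 1))  # ~3.9
    ```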

  3. HR 8799: The Benchmark Directly-Imaged Planetary System

    CERN Document Server

    Currie, Thayne

    2016-01-01

    HR 8799 harbors arguably the first and best-studied directly-imaged planets. In this brief article, I describe how the HR 8799 planetary system is a benchmark system for studying the atmospheres, orbital properties, dynamical stability, and formation of young superjovian planets. Multi-wavelength photometry and spectroscopy show that HR 8799 bcde appear to have thicker clouds than do field brown dwarfs of similar effective temperatures and exhibit evidence for non-equilibrium carbon chemistry, features that are likely connected to the planets' low surface gravities. Over 17 years of astrometric data constrain the planets' orbits to not be face on but possibly in multiple orbital resonances. At orbital separations of 15--70 au and with masses of $\approx$ 5--7 $M_{\rm J}$, HR 8799 bcde probe the extremes of jovian planet formation by core accretion: medium-resolution spectroscopy may provide clues about these planets' formation conditions. Data from the next generation of 30 m-class telescopes should better co...

  4. Operating Room Efficiency before and after Entrance in a Benchmarking Program for Surgical Process Data.

    Science.gov (United States)

    Pedron, Sara; Winter, Vera; Oppel, Eva-Maria; Bialas, Enno

    2017-08-23

    Operating room (OR) efficiency continues to be a high priority for hospitals. In this context the concept of benchmarking has gained increasing importance as a means to improve OR performance. The aim of this study was to investigate whether and how participation in a benchmarking and reporting program for surgical process data was associated with a change in OR efficiency, measured through raw utilization, turnover times, and first-case tardiness. The main analysis is based on panel data from 202 surgical departments in German hospitals, derived from the largest database for surgical process data in Germany. Panel regression modelling was applied. Results revealed no clear, unequivocal effect of participation in such a program. The largest change was observed for first-case tardiness. Contrary to expectations, turnover times showed a generally increasing trend during participation. For raw utilization, no clear and statistically significant trend was evident. Subgroup analyses revealed differences in effects across hospital types and department specialties. Participation in a benchmarking and reporting program, and thus the availability of reliable, timely and detailed analysis tools to support OR management, seemed to be correlated especially with an increase in the timeliness of staff members regarding first-case starts. The increasing trend in turnover time revealed the absence of effective strategies to improve this aspect of OR efficiency in German hospitals and could have meaningful consequences for medium- and long-run capacity planning in the OR.

  5. Application of the coupled code COBAYA3/SUBCHANFLOW to the simulation of the Exercise 2 of the OECD/NEA Kalinin-3 Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, J.; Calleja, M.; Sanchez, V.

    2013-07-01

    The OECD/NEA Kalinin-3 Coolant Transient Benchmark is based on a real transient test that took place on 2nd October 2005 in Unit 3 of the Russian Kalinin NPP. The reactor type is a VVER-1000/320, and the transient was caused by the intentional switching-off of one of the four operating main coolant pumps at nominal reactor power. A large amount of data was recorded during the transient by the core monitoring system. These data have been made available to the international community through an OECD/NEA benchmark. Thanks to the good quality of the data available, this benchmark is very useful for the validation of coupled neutron kinetics and thermal-hydraulic codes. This paper describes the results obtained with the 3D neutron diffusion code COBAYA3 coupled with the sub-channel thermal-hydraulic code SUBCHANFLOW for Exercise 2 of the Kalinin-3 Benchmark.

  6. Development of a California commercial building benchmarking database

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.

  7. Core Java

    CERN Document Server

    Horstmann, Cay S

    2013-01-01

    Fully updated to reflect Java SE 7 language changes, Core Java™, Volume I—Fundamentals, Ninth Edition, is the definitive guide to the Java platform. Designed for serious programmers, this reliable, unbiased, no-nonsense tutorial illuminates key Java language and library features with thoroughly tested code examples. As in previous editions, all code is easy to understand, reflects modern best practices, and is specifically designed to help jumpstart your projects. Volume I quickly brings you up-to-speed on Java SE 7 core language enhancements, including the diamond operator, improved resource handling, and catching of multiple exceptions. All of the code examples have been updated to reflect these enhancements, and complete descriptions of new SE 7 features are integrated with insightful explanations of fundamental Java concepts.

  8. Transparency benchmarking on audio watermarks and steganography

    Science.gov (United States)

    Kraetzer, Christian; Dittmann, Jana; Lang, Andreas

    2006-02-01

    The evaluation of transparency plays an important role in the context of watermarking and steganography algorithms. This paper introduces a general definition of the term transparency in the context of steganography, digital watermarking and attack-based evaluation of digital watermarking algorithms. For this purpose the term transparency is first considered individually for each of the three application fields (steganography, digital watermarking and watermarking algorithm evaluation). From the three results a general definition for the overall context is derived in a second step. The relevance and applicability of the definition given is evaluated in practice using existing audio watermarking and steganography algorithms (which work in the time, frequency and wavelet domains) as well as an attack-based evaluation suite for audio watermarking benchmarking - StirMark for Audio (SMBA). For this purpose selected attacks from the SMBA suite are modified by adding transparency-enhancing measures using a psychoacoustic model. The transparency and robustness of the evaluated audio watermarking algorithms using the original and modified attacks are compared. The results of this paper show that transparency benchmarking will lead to new information regarding the algorithms under observation and their usage. This information can result in concrete recommendations for modification, like the ones resulting from the tests performed here.

  9. Simple mathematical law benchmarks human confrontations

    Science.gov (United States)

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-12-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains, which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a `lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.
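
    The "simple mathematical law" in question is reported in this line of work as a progress-curve power law for the escalation of attacks (stated here for orientation, not quoted from the paper):

    ```latex
    % Time interval preceding the n-th attack in a given confrontation:
    \tau_n = \tau_1 \, n^{-b},
    % where \tau_1 is the interval before the first recorded event and the
    % escalation exponent b characterizes each perpetrator-target pair.
    ```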

  10. BENCHMARKING LEARNER EDUCATION USING ONLINE BUSINESS SIMULATION

    Directory of Open Access Journals (Sweden)

    Alfred H. Miller

    2016-06-01

    For programmatic accreditation by the Accreditation Council of Business Schools and Programs (ACBSP), business programs are required to meet STANDARD #4, Measurement and Analysis of Student Learning and Performance. Business units must demonstrate that outcome assessment systems are in place, using documented evidence that shows how the results are being used to further develop or improve the academic business program. The Higher Colleges of Technology, a 17-campus federal university in the United Arab Emirates, differentiates its applied degree programs through a 'learning by doing' ethos, which permeates the entire curricula. This paper documents benchmarking of education for managing innovation. Using an online business simulation in a Year 3 business strategy class, Bachelor of Business learners explored the following functional areas in a simulated environment: research and development, production, and marketing of a technology product. Student teams were required to use finite resources and compete against other student teams in the same universe. The study employed an instrument developed in a 60-sample pilot study of business simulation learners, against which subsequent learners participating in the online business simulation could be benchmarked. The results showed incremental improvement in the program due to changes made in assessment strategies, including the oral defense.

  11. Baseline and benchmark model development for hotels

    Science.gov (United States)

    Hooks, Edward T., Jr.

    The hotel industry currently faces rising energy costs and requires the tools to maximize energy efficiency. In order to achieve this goal, a clear definition of the current methods used to measure and monitor energy consumption is made. Uncovering the limitations of the most commonly practiced analysis strategies and presenting methods that can potentially overcome those limitations is the main purpose. The techniques presented can be used for measurement and verification of energy efficiency plans and retrofits. Modern energy modeling tools are also introduced to demonstrate how they can be utilized for benchmarking and baseline models. This provides the ability to obtain energy-saving recommendations and parametric analysis to explore energy-savings potential. These same energy models can be used in design decisions for new construction. An energy model is created of a resort-style hotel that covers over one million square feet and has over one thousand rooms. A simulation and detailed analysis is performed on a hotel room. The planning process for creating the model and acquiring data from the hotel room to calibrate and verify the simulation is explained, along with how this type of modeling can benefit future baseline and benchmarking strategies for the hotel industry. Ultimately, the conclusion addresses some common obstacles the hotel industry faces in reaching its full potential of energy efficiency and how these techniques can best serve it.

  12. Benchmarking database performance for genomic data.

    Science.gov (United States)

    Khushi, Matloob

    2015-06-01

    Genomic regions represent features such as gene annotations, transcription factor binding sites and epigenetic modifications. Performing various genomic operations, such as identifying overlapping/non-overlapping regions or nearest gene annotations, is a common research need. The data can be saved in a database system for easy management; however, no comprehensive built-in database algorithm currently exists to identify overlapping regions. I have therefore developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data uploads in PostgreSQL were also better, although the general searching capability of both databases was almost equivalent. In addition, using the algorithm pair-wise, overlaps of >1000 datasets of transcription factor binding sites and histone marks, collected from previous publications, were reported, and it was found that HNF4G significantly co-locates with the cohesin subunit STAG1 (SA1).
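
    As a minimal illustration of the core operation such an algorithm must perform, the snippet below tests whether two genomic regions overlap; this is a generic half-open-interval sketch, not the published RegMap implementation.

    ```python
    def regions_overlap(chrom_a, start_a, end_a, chrom_b, start_b, end_b):
        """True iff two regions share a chromosome and their half-open
        coordinate intervals [start, end) intersect."""
        return chrom_a == chrom_b and start_a < end_b and start_b < end_a

    # Example: two binding sites on chr1 overlapping by 50 bp.
    print(regions_overlap("chr1", 100, 250, "chr1", 200, 400))  # True
    ```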

  13. EVA Health and Human Performance Benchmarking Study

    Science.gov (United States)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses for humans working inside different EVA suits doing functional tasks under the appropriate simulated reduced-gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as in shirtsleeves, using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits and different suit configurations (e.g., varied pressure, mass, CG) may be reliably compared in subsequent tests. Results will also inform fitness-for-duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  14. Multisensor benchmark data for riot control

    Science.gov (United States)

    Jäger, Uwe; Höpken, Marc; Dürr, Bernhard; Metzler, Jürgen; Willersinn, Dieter

    2008-10-01

    Quick and precise response is essential for riot squads when coping with escalating violence in crowds. Often it is just a single person, known as the leader of the gang, who instigates other people and thus is responsible for excesses. Putting this single person out of action in most cases leads to a de-escalating situation. Fostering de-escalation is one of the main tasks of crowd and riot control. To do so, extensive situation awareness is mandatory for the squads and can be promoted by technical means such as video surveillance using sensor networks. To develop software tools for situation awareness, appropriate input data of well-known quality are needed. Furthermore, the developer must be able to measure algorithm performance and ongoing improvements. Last but not least, after algorithm development has finished and marketing aspects emerge, meeting of specifications must be proved. This paper describes a multisensor benchmark which serves exactly this purpose. We first define the underlying algorithm task. Then we explain details about data acquisition and sensor setup, and finally we give some insight into quality measures of multisensor data. Currently, the multisensor benchmark described in this paper is applied to the development of basic algorithms for situational awareness, e.g. tracking of individuals in a crowd.

  15. Remarks on a benchmark nonlinear constrained optimization problem

    Institute of Scientific and Technical Information of China (English)

    Luo Yazhong; Lei Yongjun; Tang Guojin

    2006-01-01

    Remarks on a benchmark nonlinear constrained optimization problem are made. Due to a citation error, two absolutely different results for the benchmark problem have been obtained by independent researchers. Parallel simulated annealing using the simplex method is employed in our study to solve the benchmark nonlinear constrained problem with the mistaken formula, and the best-known solution is obtained, whose optimality is testified by the Kuhn-Tucker conditions.
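
    For reference, the Kuhn-Tucker conditions invoked here are the standard first-order optimality conditions for minimizing f(x) subject to g_j(x) <= 0 and h_k(x) = 0:

    ```latex
    \nabla f(x^{*}) + \sum_{j} \mu_j \nabla g_j(x^{*})
                    + \sum_{k} \lambda_k \nabla h_k(x^{*}) = 0,
    \qquad \mu_j \ge 0, \quad g_j(x^{*}) \le 0, \quad \mu_j\, g_j(x^{*}) = 0,
    \qquad h_k(x^{*}) = 0 .
    ```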

  16. DEVELOPMENT OF A MARKET BENCHMARK PRICE FOR AGMAS PERFORMANCE EVALUATIONS

    OpenAIRE

    Good, Darrel L.; Irwin, Scott H.; Jackson, Thomas E.

    1998-01-01

    The purpose of this research report is to identify the appropriate market benchmark price to use to evaluate the pricing performance of market advisory services that are included in the annual AgMAS pricing performance evaluations. Five desirable properties of market benchmark prices are identified. Three potential specifications of the market benchmark price are considered: the average price received by Illinois farmers, the harvest cash price, and the average cash price over a two-year crop...

  17. 42 CFR 422.258 - Calculation of benchmarks.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Calculation of benchmarks. 422.258 Section 422.258... and Plan Approval § 422.258 Calculation of benchmarks. (a) The term “MA area-specific non-drug monthly benchmark amount” means, for a month in a year: (1) For MA local plans with service areas entirely within...

  18. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint US/Russian Progress Report for Fiscal 1997. Volume 3 - Calculations Performed in the Russian Federation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-06-01

    This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the Russian Federation during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the benchmarks that the United States and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors.

  19. Hospital Energy Benchmarking Guidance - Version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Singer, Brett C.

    2009-09-08

    This document describes an energy benchmarking framework for hospitals. The document is organized as follows. The introduction provides a brief primer on benchmarking and its application to hospitals. The next two sections discuss special considerations including the identification of normalizing factors. The presentation of metrics is preceded by a description of the overall framework and the rationale for the grouping of metrics. Following the presentation of metrics, a high-level protocol is provided. The next section presents draft benchmarks for some metrics; benchmarks are not available for many metrics owing to a lack of data. This document ends with a list of research needs for further development.

  20. Benchmark for Evaluating Moving Object Indexes

    DEFF Research Database (Denmark)

    Chen, Su; Jensen, Christian Søndergaard; Lin, Dan

    2008-01-01

    that targets techniques for the indexing of the current and near-future positions of moving objects. This benchmark enables the comparison of existing and future indexing techniques. It covers important aspects of such indexes that have not previously been covered by any benchmark. Notable aspects covered...... include update efficiency, query efficiency, concurrency control, and storage requirements. Next, the paper applies the benchmark to half a dozen notable moving-object indexes, thus demonstrating the viability of the benchmark and offering new insight into the performance properties of the indexes....

  1. Benchmarking Central Banks in Latin America, 1990-2010

    National Research Council Canada - National Science Library

    Germán Alarco Tosoni

    2013-01-01

    This benchmarking exercise analyzes the effectiveness of central banks in Latin America between 1990 and 2010, considering the monetary authority's primary and secondary functions in the countries...

  2. Development of 3D ferromagnetic model of tokamak core with strong toroidal asymmetry

    DEFF Research Database (Denmark)

    Markovič, Tomáš; Gryaznevich, Mikhail; Ďuran, Ivan

    2015-01-01

    A fully 3D model of a strongly asymmetric tokamak core, based on the boundary integral method approach (i.e., characterization of the ferromagnet by its surface), is presented. The model is benchmarked against measurements on the GOLEM tokamak, as well as compared to a 2D axisymmetric core equivalent for this tokamak...

  3. Modeling of Phenix End-of-Life control rod withdrawal benchmark with DYN3D SFR version

    Energy Technology Data Exchange (ETDEWEB)

    Nikitin, Evgeny; Fridman, Emil [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Reactor Safety

    2017-06-01

    The reactor dynamics code DYN3D is currently being extended for sodium-cooled fast reactor (SFR) applications. The control rod withdrawal benchmark from the Phenix End-of-Life experiments was selected for verification and validation purposes. This report presents selected results to demonstrate the feasibility of using DYN3D for steady-state SFR analyses.

  4. Neutron Reference Benchmark Field Specification: ACRR Free-Field Environment (ACRR-FF-CC-32-CL).

    Energy Technology Data Exchange (ETDEWEB)

    Vega, Richard Manuel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Parma, Edward J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Griffin, Patrick J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vehar, David W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL- 2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity free-field reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.

  5. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint U.S./Russian Progress Report for Fiscal Year 1997 Volume 2-Calculations Performed in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Primm III, RT

    2002-05-29

    This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the US during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks and for those experimental benchmarks that the US and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors.

  6. IAEA GT-MHR Benchmark Calculations Using the HELIOS/MASTER Two-Step Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kyung Hoon; Kim, Kang Seog; Cho, Jin Young; Song, Jae Seung; Noh, Jae Man; Lee, Chung Chan; Zee, Sung Quun

    2007-05-15

    A new two-step procedure based on the HELIOS/MASTER code system has been developed for prismatic VHTR physics analysis. This procedure employs the HELIOS code for the transport lattice calculation, to generate few-group constants, and the MASTER code for the 3-dimensional core calculation, to perform the reactor physics analysis. The double heterogeneity effect due to the random distribution of the particulate fuel could be dealt with using the recently developed reactivity-equivalent physical transformation (RPT) method. The strong spectral effects of the graphite-moderated reactor core could be handled both by optimizing the number of energy groups and group boundaries, and by employing a partial core model instead of a single-block one to generate few-group cross sections. Burnable poisons in the inner reflector and the asymmetrically located large control rod can be treated by adopting the equivalence theory applied to multi-block models to generate surface-dependent discontinuity factors. Effective reflector cross sections were generated by using a simple mini-core model and equivalence theory. In this study the IAEA GT-MHR benchmark problems with a plutonium fuel were analyzed by using the HELIOS/MASTER code package and the Monte Carlo code MCNP. The benchmark problems include pin, block and core models. The computational results of the HELIOS/MASTER code system were compared with those of MCNP and other participants. The results show that the two-step procedure using HELIOS/MASTER can be applied to reactor physics analysis for the prismatic VHTR with good accuracy.

  7. Conceptual study of advanced PWR core design. Development of advanced PWR core neutronics analysis system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Chang Hyo; Kim, Seung Cho; Kim, Taek Kyum; Cho, Jin Young; Lee, Hyun Cheol; Lee, Jung Hun; Jung, Gu Young [Seoul National University, Seoul (Korea, Republic of)

    1995-08-01

    The neutronics design system of the advanced PWR consists of (i) a hexagonal cell and fuel assembly code for generation of homogenized few-group cross sections and (ii) a global core neutronics analysis code for computations of steady-state pin-wise or assembly-wise core power distribution, core reactivity with fuel burnup, control rod worth and reactivity coefficients, transient core power, etc. The major research target of the first year is to establish the numerical methods and solutions of the multi-group diffusion equations for neutronics code development. Specifically, the following studies are planned: (i) formulation of various numerical methods, such as the finite element method (FEM), the analytical nodal method (ANM), the analytic function expansion nodal (AFEN) method, and the polynomial expansion nodal (PEN) method, that are applicable to hexagonal core geometry; (ii) comparative evaluation of the numerical effectiveness of these methods based on numerical solutions to various hexagonal core neutronics benchmark problems. The results are as follows: (i) formulation of numerical solutions to the multi-group diffusion equations based on the above methods; (ii) numerical computations by the above methods for hexagonal neutronics benchmark problems, namely the VVER-1000 problem without reflector, the VVER-440 problem I with reflector, the modified IAEA PWR problems with and without reflector, the ANL large heavy water reactor problem, the small HTGR problem, and the VVER-440 problem II with reflector; (iii) comparative evaluation of the numerical effectiveness of the various numerical methods; (iv) development of HEXFEM, a multi-dimensional hexagonal core neutronics analysis code based on FEM. In the target year of this research, the spatial neutronics analysis code for hexagonal core geometry (tentatively called NEMSNAP-H) will be completed. Combining NEMSNAP-H with the hexagonal cell and assembly code will then equip us with a hexagonal core neutronics design system. (Abstract Truncated)
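
    The multi-group diffusion equations solved by such codes take the standard steady-state eigenvalue form (textbook notation, given here for orientation):

    ```latex
    % Balance equation for the group flux \phi_g, g = 1, ..., G:
    -\nabla \cdot D_g \nabla \phi_g + \Sigma_{r,g}\,\phi_g
      = \sum_{g' \neq g} \Sigma_{s,g' \to g}\,\phi_{g'}
      + \frac{\chi_g}{k_{\mathrm{eff}}} \sum_{g'} \nu\Sigma_{f,g'}\,\phi_{g'},
    % with D_g the diffusion coefficient, \Sigma_{r,g} the removal cross
    % section, \Sigma_{s,g'\to g} the scattering and \nu\Sigma_{f,g'} the
    % fission-production cross sections, \chi_g the fission spectrum, and
    % k_{\mathrm{eff}} the multiplication-factor eigenvalue.
    ```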

  8. Plasma Waves as a Benchmark Problem

    CERN Document Server

    Kilian, Patrick; Schreiner, Cedric; Spanier, Felix

    2016-01-01

    A large number of wave modes exist in a magnetized plasma. Their properties are determined by the interaction of particles and waves. In a simulation code, the correct treatment of field quantities and particle behavior is essential to correctly reproduce the wave properties. Consequently, plasma waves provide test problems that cover a large fraction of the simulation code. The large number of possible wave modes and the freedom to choose parameters make the selection of test problems time consuming and comparison between different codes difficult. This paper therefore aims to provide a selection of test problems, based on different wave modes and with well defined parameter values, that is accessible to a large number of simulation codes to allow for easy benchmarking and cross validation. Example results are provided for a number of plasma models. For all plasma models and wave modes that are used in the test problems, a mathematical description is provided to clarify notation and avoid possible misunderst...
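
    As one concrete example of the wave physics such a benchmark exercises, the Bohm-Gross dispersion relation for Langmuir waves (a standard plasma-physics result, not taken from the paper) ties together the field solver and the particle thermal motion:

    ```latex
    \omega^{2} = \omega_{pe}^{2} + 3\,k^{2} v_{th}^{2},
    \qquad \omega_{pe}^{2} = \frac{n_e e^{2}}{\varepsilon_0 m_e},
    % where \omega_{pe} is the electron plasma frequency, k the wavenumber,
    % and v_{th} the electron thermal velocity; a code reproduces this
    % relation only if both the field update and the particle pusher are
    % implemented correctly.
    ```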

  9. Numerical simulation of the RAMAC benchmark test

    Energy Technology Data Exchange (ETDEWEB)

    Leblanc, J.E.; Sugihara, M.; Fujiwara, T. [Nagoya Univ. (Japan). Dept. of Aerospace Engineering; Nusca, M. [Nagoya Univ. (Japan). Dept. of Aerospace Engineering; U.S. Army Research Lab., Ballistics and Weapons Concepts Div., AMSRL-WM-BE, Aberdeen Proving Ground, MD (United States); Wang, X. [Nagoya Univ. (Japan). Dept. of Aerospace Engineering; School of Mechanical and Production Engineering, Nanyang Technological Univ. (Singapore); Seiler, F. [Nagoya Univ. (Japan). Dept. of Aerospace Engineering; French-German Research Inst. of Saint-Louis, ISL, Saint-Louis (France)

    2000-11-01

    Numerical simulations of the same RAMAC geometry and boundary conditions by different numerical and physical models highlight the variety of solutions possible and the strong effect of the chemical kinetics model on the solution. The benchmark test was defined and announced within the community of RAMAC researchers. Three laboratories undertook the project. The numerical simulations include Navier-Stokes and Euler simulations with various levels of physical models and equations of state. The non-reactive part of the simulation produced similar steady-state results in the three simulations. The chemically reactive part of the simulation produced widely different outcomes. The original experimental data and experimental conditions are presented. A description of each computer code and the resulting flowfield is included. A comparison between the codes and results is presented. The most critical choice for the simulation was the chemical kinetics model.

  10. Development of solutions to benchmark piping problems

    Energy Technology Data Exchange (ETDEWEB)

    Reich, M; Chang, T Y; Prachuktam, S; Hartzman, M

    1977-12-01

    Benchmark problems and their solutions are presented. The problems consist of calculating the static and dynamic response of selected piping structures subjected to a variety of loading conditions. The structures range from simple pipe geometries to a representative full-scale primary nuclear piping system, which includes the various components and their supports. These structures are assumed to behave in a linear elastic fashion only, i.e., they experience small deformations and small displacements with no existing gaps, and remain elastic through their entire response. The solutions were obtained by using the program EPIPE, which is a modification of the widely available program SAP IV. A brief outline of the theoretical background of this program and its verification is also included.

  11. FRIB driver linac vacuum model and benchmarks

    CERN Document Server

    Durickovic, Bojan; Kersevan, Roberto; Machicoane, Guillaume

    2014-01-01

    The Facility for Rare Isotope Beams (FRIB) is a superconducting heavy-ion linear accelerator that is to produce rare isotopes far from stability for low energy nuclear science. In order to achieve this, its driver linac needs to achieve a very high beam current (up to 400 kW beam power), and this requirement makes vacuum levels of critical importance. Vacuum calculations have been carried out to verify that the vacuum system design meets the requirements. The modeling procedure was benchmarked by comparing models of an existing facility against measurements. In this paper, we present an overview of the methods used for FRIB vacuum calculations and simulation results for some interesting sections of the accelerator.

  12. NASA Indexing Benchmarks: Evaluating Text Search Engines

    Science.gov (United States)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the need for the ability to index collections of information and to search and retrieve them in a convenient manner. This study develops criteria for analytically comparing indexing and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the focused search engines. This toolkit is highly configurable and can run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium sized data collections, but show weaknesses when used for collections of 100 MB or larger. Level two search engines are recommended for data collections up to and beyond 100 MB.

  13. Shielding integral benchmark archive and database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, B.L.; Grove, R.E. [Radiation Safety Information Computational Center RSICC, Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN 37831-6171 (United States); Kodeli, I. [Josef Stefan Inst., Jamova 39, 1000 Ljubljana (Slovenia); Gulliford, J.; Sartori, E. [OECD NEA Data Bank, Bd des Iles, 92130 Issy-les-Moulineaux (France)

    2011-07-01

    The shielding integral benchmark archive and database (SINBAD) collection of experiment descriptions was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD was designed to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD can serve as a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories - fission, fusion, and accelerator experiments. Many experiments are described and analyzed using deterministic or stochastic (Monte Carlo) radiation transport software. The nuclear cross sections also play an important role, as they are necessary in performing computational analysis. (authors)

  14. Benchmarking management practices in Australian public healthcare.

    Science.gov (United States)

    Agarwal, Renu; Green, Roy; Agarwal, Neeru; Randhawa, Krithika

    2016-01-01

    The purpose of this paper is to investigate the quality of management practices of public hospitals in the Australian healthcare system, specifically those in the state-managed health systems of Queensland and New South Wales (NSW). Further, the authors assess the management practices of Queensland and NSW public hospitals jointly and globally benchmark against those in the health systems of seven other countries, namely, USA, UK, Sweden, France, Germany, Italy and Canada. In this study, the authors adapt the unique and globally deployed Bloom et al. (2009) survey instrument that uses a "double blind, double scored" methodology and an interview-based scoring grid to measure and internationally benchmark the management practices in Queensland and NSW public hospitals based on 21 management dimensions across four broad areas of management - operations, performance monitoring, targets and people management. The findings reveal the areas of strength and potential areas of improvement in the Queensland and NSW Health hospital management practices when compared with public hospitals in seven countries, namely, USA, UK, Sweden, France, Germany, Italy and Canada. Together, Queensland and NSW Health hospitals perform best in operations management followed by performance monitoring. While target management presents scope for improvement, people management is the sphere where these Australian hospitals lag the most. This paper is of interest to both hospital administrators and health care policy-makers aiming to lift management quality at the hospital level as well as at the institutional level, as a vehicle to consistently deliver sustainable high-quality health services. This study provides the first internationally comparable robust measure of management capability in Australian public hospitals, where hospitals are run independently by the state-run healthcare systems. Additionally, this research study contributes to the empirical evidence base on the quality of...

  15. Ground truth and benchmarks for performance evaluation

    Science.gov (United States)

    Takeuchi, Ayako; Shneier, Michael; Hong, Tsai Hong; Chang, Tommy; Scrapper, Christopher; Cheok, Geraldine S.

    2003-09-01

    Progress in algorithm development and transfer of results to practical applications such as military robotics requires the setup of standard tasks and of standard qualitative and quantitative measurements for performance evaluation and validation. Although the evaluation and validation of algorithms have been discussed for over a decade, the research community still faces a lack of well-defined and standardized methodology. The range of fundamental problems includes a lack of quantifiable measures of performance, a lack of data from state-of-the-art sensors in calibrated real-world environments, and a lack of facilities for conducting realistic experiments. In this research, we propose three methods for creating ground truth databases and benchmarks using multiple sensors. The databases and benchmarks will provide researchers with high quality data from suites of sensors operating in complex environments representing real problems of great relevance to the development of autonomous driving systems. At NIST, we have prototyped a High Mobility Multi-purpose Wheeled Vehicle (HMMWV) system with a suite of sensors including a Riegl ladar, GDRS ladar, stereo CCD, several color cameras, Global Position System (GPS), Inertial Navigation System (INS), pan/tilt encoders, and odometry. All sensors are calibrated with respect to each other in space and time. This allows a database of features and terrain elevation to be built. Ground truth for each sensor can then be extracted from the database. The main goal of this research is to provide ground truth databases for researchers and engineers to evaluate algorithms for effectiveness, efficiency, reliability, and robustness, thus advancing the development of algorithms.

  16. Benchmarking analogue models of brittle thrust wedges

    Science.gov (United States)

    Schreurs, Guido; Buiter, Susanne J. H.; Boutelier, Jennifer; Burberry, Caroline; Callot, Jean-Paul; Cavozzi, Cristian; Cerca, Mariano; Chen, Jian-Hong; Cristallini, Ernesto; Cruden, Alexander R.; Cruz, Leonardo; Daniel, Jean-Marc; Da Poian, Gabriela; Garcia, Victor H.; Gomes, Caroline J. S.; Grall, Céline; Guillot, Yannick; Guzmán, Cecilia; Hidayah, Triyani Nur; Hilley, George; Klinkmüller, Matthias; Koyi, Hemin A.; Lu, Chia-Yu; Maillot, Bertrand; Meriaux, Catherine; Nilfouroushan, Faramarz; Pan, Chang-Chih; Pillot, Daniel; Portillo, Rodrigo; Rosenau, Matthias; Schellart, Wouter P.; Schlische, Roy W.; Take, Andy; Vendeville, Bruno; Vergnaud, Marine; Vettori, Matteo; Wang, Shih-Hsien; Withjack, Martha O.; Yagupsky, Daniel; Yamada, Yasuhiro

    2016-11-01

    We performed a quantitative comparison of brittle thrust wedge experiments to evaluate the variability among analogue models and to appraise the reproducibility and limits of model interpretation. Fifteen analogue modeling laboratories participated in this benchmark initiative. Each laboratory received a shipment of the same type of quartz and corundum sand and all laboratories adhered to a stringent model building protocol and used the same type of foil to cover base and sidewalls of the sandbox. Sieve structure, sifting height, filling rate, and details on off-scraping of excess sand followed prescribed procedures. Our analogue benchmark shows that even for simple plane-strain experiments with prescribed stringent model construction techniques, quantitative model results show variability, most notably for surface slope, thrust spacing and number of forward and backthrusts. One of the sources of the variability in model results is related to slight variations in how sand is deposited in the sandbox. Small changes in sifting height, sifting rate, and scraping will result in slightly heterogeneous material bulk densities, which will affect the mechanical properties of the sand, and will result in lateral and vertical differences in peak and boundary friction angles, as well as cohesion values once the model is constructed. Initial variations in basal friction are inferred to play the most important role in causing model variability. Our comparison shows that the human factor plays a decisive role, and even when one modeler repeats the same experiment, quantitative model results still show variability. Our observations highlight the limits of up-scaling quantitative analogue model results to nature or for making comparisons with numerical models. The frictional behavior of sand is highly sensitive to small variations in material state or experimental set-up, and hence, it will remain difficult to scale quantitative results such as number of thrusts, thrust spacing

  17. Benchmarking Competitiveness: Is America's Technological Hegemony Waning?

    Science.gov (United States)

    Lubell, Michael S.

    2006-03-01

    For more than half a century, by almost every standard, the United States has been the world's leader in scientific discovery, innovation and technological competitiveness. To a large degree, that dominant position stemmed from the circumstances our nation inherited at the conclusion of World War Two: we were, in effect, the only major nation left standing that did not have to repair serious war damage. And we found ourselves with an extraordinary science and technology base that we had developed for military purposes. We had the laboratories -- industrial, academic and government -- as well as the scientific and engineering personnel -- many of them immigrants who had escaped from war-time Europe. What remained was to convert the wartime machinery to peacetime uses. We adopted private and public policies that accomplished the transition remarkably well, and we have prospered ever since. Our higher education system, our protection of intellectual property rights, our venture capital system, our entrepreneurial culture and our willingness to commit government funds for the support of science and engineering have been key components of our success. But recent competitiveness benchmarks suggest that our dominance is waning rapidly, in part because other nations have begun to emulate our successful model, in part because globalization has ``flattened'' the world and in part because we have been reluctant to pursue the public policies that are necessary to ensure our leadership. We will examine these benchmarks and explore the policy changes that are needed to keep our nation's science and technology enterprise vibrant and our economic growth on an upward trajectory.

  18. Benchmarking homogenization algorithms for monthly data

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2012-01-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics, including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve...
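
    The first of these metrics, the centered root mean square error between a homogenized series x_t and the true homogeneous series y_t, has the standard definition (given here for orientation):

    ```latex
    \mathrm{CRMSE} = \sqrt{ \frac{1}{N} \sum_{t=1}^{N}
      \left[ (x_t - \bar{x}) - (y_t - \bar{y}) \right]^{2} },
    % i.e. the RMSE after removing each series' mean, so that a constant
    % offset between the homogenized and true series is not penalized.
    ```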

  19. Benchmark of MEGA Code on Fast Ion Pressure Profile in the Large Helical Device

    Science.gov (United States)

    Seki, Ryosuke; Todo, Yasushi; Suzuki, Yasuhiro; Osakabe, Masaki

    2016-10-01

    As a first step toward analyses of energetic-particle-driven instabilities in the Large Helical Device (LHD), including the collisions of fast ions and the neutral beam injection, the MEGA code is benchmarked on the classical fast ion pressure profile using the temperature and density profiles measured in the LHD experiments. In this benchmark, the MHD equilibrium is calculated with the HINT code, and the beam deposition profile is calculated with the HFREYA code. Since the equilibrium is not axisymmetric in the LHD, the accuracy of orbit tracing is important for fast ion analyses. In the slowing-down process of the MEGA code, the guiding center equation is numerically solved using the 4th-order Runge-Kutta method and linear interpolation. The MEGA code is benchmarked against the results of the MORH code, in which the 6th-order Runge-Kutta method and 4th-order spline interpolation are used. In the LHD, the position of the loss boundary of fast ions is important because there are many "re-entering fast ions" which re-enter the plasma after they have once passed out of it. The effects of the position of the loss boundary on the fast ion pressure profile will be discussed, and a preliminary result on Alfven eigenmodes will be presented.
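
    For reference, the classical 4th-order Runge-Kutta step named here has the following generic form; this is a sketch of the integration scheme only, with a toy right-hand side, not MEGA's actual guiding-center equations.

    ```python
    import numpy as np

    def rk4_step(f, t, y, dt):
        """One classical 4th-order Runge-Kutta step for dy/dt = f(t, y)."""
        k1 = f(t, y)
        k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
        k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
        k4 = f(t + dt, y + dt * k3)
        return y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

    # Toy example: uniform circular drift in the x-y plane.
    f = lambda t, y: np.array([-y[1], y[0]])
    y = np.array([1.0, 0.0])
    for step in range(1000):
        y = rk4_step(f, step * 0.01, y, dt=0.01)
    ```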

  20. Severe accident recriticality analyses (SARA)

    DEFF Research Database (Denmark)

    Frid, W.; Højerup, C.F.; Lindholm, I.

    2001-01-01

    three computer codes and to further develop and adapt them for the task. The codes were SIMULATE-3K, APROS and RECRIT. Recriticality analyses were carried out for a number of selected reflooding transients for the Oskarshamn 3 plant in Sweden with SIMULATE-3K and for the Olkiluoto I plant in Finland...... with all three codes. The core initial and boundary conditions prior to recriticality have been studied with the severe accident codes SCDAP/RELAP5, MELCOR and MAAP4. The results of the analyses show that all three codes predict recriticality - both super-prompt power bursts and quasi steady-state power...... generation - for the range of parameters studied, i.e. with core uncovering and heat-up to maximum core temperatures of approximately 1800 K, and water flow rates of 45-2000 kg/s injected into the downcomer. Since recriticality takes place in a small fraction of the core, the power densities are high...