WorldWideScience

Sample records for standard benchmark suite

  1. HPC Benchmark Suite NMx, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  2. HPC Benchmark Suite NMx, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — In the phase II effort, Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for...

  3. PMLB: a large benchmark suite for machine learning evaluation and comparison.

    Science.gov (United States)

    Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H

    2017-01-01

    The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
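
    As an illustration of how such a suite can be used, a minimal sketch follows, assuming the pmlb Python package and scikit-learn; the dataset names are examples from the collection and the classifiers are arbitrary choices, not the methods evaluated in the paper.

        # Hedged sketch: compare two classifiers across two PMLB datasets.
        from pmlb import fetch_data  # assumes the pmlb package is installed
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        for dataset in ["mushroom", "spambase"]:  # example benchmark names
            X, y = fetch_data(dataset, return_X_y=True)  # downloads and caches the data
            for clf in (LogisticRegression(max_iter=1000), RandomForestClassifier()):
                scores = cross_val_score(clf, X, y, cv=5)  # 5-fold CV accuracy
                print(dataset, type(clf).__name__, round(scores.mean(), 3))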

  4. Spherical harmonic results for the 3D Kobayashi Benchmark suite

    International Nuclear Information System (INIS)

    Brown, P N; Chang, B; Hanebutte, U R

    1999-01-01

    Spherical harmonic solutions are presented for the Kobayashi benchmark suite. The results were obtained with Ardra, a scalable, parallel neutron transport code developed at Lawrence Livermore National Laboratory (LLNL). The calculations were performed on the IBM ASCI Blue-Pacific computer at LLNL

  5. Spherical Harmonic Solutions to the 3D Kobayashi Benchmark Suite

    International Nuclear Information System (INIS)

    Brown, P.N.; Chang, B.; Hanebutte, U.R.

    1999-01-01

    Spherical harmonic solutions of order 5, 9, and 21 on spatial grids containing up to 3.3 million cells are presented for the Kobayashi benchmark suite. This suite of three problems, each with the simple geometry of a pure absorber with a large void region, was proposed by Professor Kobayashi at an OECD/NEA meeting in 1996. Each of the three problems contains a source, a void, and a shield region. Problem 1 can best be described as a box-in-a-box problem, where a source region is surrounded by a square void region which is itself embedded in a square shield region. Problems 2 and 3 represent a shield with a void duct: problem 2 has a straight duct and problem 3 a dog-leg-shaped duct. A pure absorber and a 50% scattering case are considered for each of the three problems. The solutions have been obtained with Ardra, a scalable, parallel neutron transport code developed at Lawrence Livermore National Laboratory (LLNL). The Ardra code takes advantage of a two-level parallelization strategy, which combines message passing between processing nodes with thread-based parallelism among the processors on each node. All calculations were performed on the IBM ASCI Blue-Pacific computer at LLNL.

  6. The CMSSW benchmarking suite: Using HEP code to measure CPU performance

    Science.gov (United States)

    Benelli, G.; CMS Offline Computing Projects

    2010-04-01

    The demanding computing needs of the CMS experiment require thoughtful planning and management of its computing infrastructure. A key factor in this process is the use of realistic benchmarks when assessing the computing power of the different architectures available. In recent years a discrepancy has been observed between the CPU performance estimates given by the reference benchmark for HEP computing (SPECint [1]) and the actual performance of HEP code. Making use of the CPU performance tools from the CMSSW performance suite, comparative CPU performance studies have been carried out on several architectures. A benchmarking suite has been developed and integrated in the CMSSW framework to allow computing centers and interested third parties to benchmark architectures directly with CMSSW. The CMSSW benchmarking suite can be used out of the box to test and compare several machines in terms of CPU performance and to report the different benchmarking scores (e.g., by processing step) and results at the desired level of detail. In this talk we describe briefly the CMSSW software performance suite and, in detail, the CMSSW benchmarking suite client/server design, the performance data analysis, and the available CMSSW benchmark scores. The experience in the use of HEP code for benchmarking is discussed and CMSSW benchmark results are presented.

  7. The suite of analytical benchmarks for neutral particle transport in infinite isotropically scattering media

    International Nuclear Information System (INIS)

    Kornreich, D.E.; Ganapol, B.D.

    1997-01-01

    The linear Boltzmann equation for the transport of neutral particles is investigated with the objective of generating benchmark-quality evaluations of solutions for homogeneous infinite media. In all cases, the problems are stationary, of one energy group, and the scattering is isotropic. The solutions are generally obtained through the use of Fourier transform methods with the numerical inversions constructed from standard numerical techniques such as Gauss-Legendre quadrature, summation of infinite series, and convergence acceleration. Consideration of the suite of benchmarks in infinite homogeneous media begins with the standard one-dimensional problems: an isotropic point source, an isotropic planar source, and an isotropic infinite line source. The physical and mathematical relationships between these source configurations are investigated. The progression of complexity then leads to multidimensional problems with source configurations that also emit particles isotropically: the finite line source, the disk source, and the rectangular source. The scalar flux from the finite isotropic line and disk sources will have a two-dimensional spatial variation, whereas a finite rectangular source will have a three-dimensional variation in the scalar flux. Next, sources emitting particles anisotropically are considered. The most basic such source is the point beam giving rise to the Green's function, which is physically the most fundamental transport problem, yet may be constructed from the isotropic point source solution. Finally, the anisotropic plane and anisotropically emitting infinite line sources are considered. Thus, a firm theoretical and numerical base is established for the most fundamental neutral particle benchmarks in infinite homogeneous media
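
    For a concrete sense of the simplest member of such a suite, the sketch below evaluates the uncollided scalar flux of an isotropic point source in an infinite, purely absorbing medium, phi(r) = S exp(-Sigma_t r) / (4 pi r^2); the scattering cases described above require the Fourier-inversion machinery from the abstract. Parameter values are illustrative.

        import numpy as np

        def point_source_flux(r, sigma_t=1.0, source=1.0):
            """Uncollided scalar flux at distance r from an isotropic point source."""
            return source * np.exp(-sigma_t * r) / (4.0 * np.pi * r**2)

        # Distances are in mean free paths when sigma_t = 1
        for r in (0.5, 1.0, 2.0, 5.0):
            print(f"r = {r:4.1f}  phi = {point_source_flux(r):.6e}")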

  8. Statistical benchmarking in utility regulation: Role, standards and methods

    International Nuclear Information System (INIS)

    Newton Lowry, Mark; Getachew, Lullit

    2009-01-01

    Statistical benchmarking is being used with increasing frequency around the world in utility rate regulation. We discuss how and where benchmarking is in use for this purpose and the pros and cons of regulatory benchmarking. We then discuss alternative performance standards and benchmarking methods in regulatory applications. We use these to propose guidelines for the appropriate use of benchmarking in the rate setting process. The standards, which we term the competitive market and frontier paradigms, have a bearing on method selection. These along with regulatory experience suggest that benchmarking can either be used for prudence review in regulation or to establish rates or rate setting mechanisms directly

  9. A note on bound constraints handling for the IEEE CEC'05 benchmark function suite.

    Science.gov (United States)

    Liao, Tianjun; Molina, Daniel; de Oca, Marco A Montes; Stützle, Thomas

    2014-01-01

    The benchmark functions and some of the algorithms proposed for the special session on real parameter optimization of the 2005 IEEE Congress on Evolutionary Computation (CEC'05) have played and still play an important role in the assessment of the state of the art in continuous optimization. In this article, we show that if bound constraints are not enforced for the final reported solutions, state-of-the-art algorithms produce infeasible best candidate solutions for the majority of functions of the IEEE CEC'05 benchmark function suite. This occurs even though the optima of the CEC'05 functions are within the specified bounds. This phenomenon has important implications on algorithm comparisons, and therefore on algorithm designs. This article's goal is to draw the attention of the community to the fact that some authors might have drawn wrong conclusions from experiments using the CEC'05 problems.
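
    The feasibility check the article calls for is straightforward; the sketch below, with illustrative bounds and candidate vector, shows the test that reported best solutions should pass before entering a comparison.

        import numpy as np

        def is_feasible(x, lower, upper):
            """True if every coordinate of x lies within its bound constraints."""
            x = np.asarray(x, dtype=float)
            return bool(np.all((x >= lower) & (x <= upper)))

        # Illustrative CEC'05-style box bounds of [-100, 100] per dimension
        candidate = np.array([87.2, -120.0, 33.1])
        print(is_feasible(candidate, -100.0, 100.0))  # False: second coordinate is out of bounds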

  10. A suite of benchmark and challenge problems for enhanced geothermal systems

    Energy Technology Data Exchange (ETDEWEB)

    White, Mark; Fu, Pengcheng; McClure, Mark; Danko, George; Elsworth, Derek; Sonnenthal, Eric; Kelkar, Sharad; Podgorney, Robert

    2017-11-06

    A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy Geothermal Technologies Office sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study represented U.S. national laboratories, universities, and industry, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study: benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research: stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems was designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of...

  11. SUIT

    DEFF Research Database (Denmark)

    Algreen-Ussing, Gregers; Wedebrunn, Ola

    2003-01-01

    Leaflet on the SUIT project published by the European Commission. The publication briefly explains the results of the SUIT project: cultural values in environmental matters, and the assessment of projects and their impact on the environment...

  12. The Application of the PEBBED Code Suite to the PBMR-400 Coupled Code Benchmark - FY 2006 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    2006-09-01

    This document describes the recent developments of the PEBBED code suite and its application to the PBMR-400 Coupled Code Benchmark. This report addresses an FY2006 Level 2 milestone under the NGNP Design and Evaluation Methods Work Package. The milestone states "Complete a report describing the results of the application of the integrated PEBBED code package to the PBMR-400 coupled code benchmark". The report describes the current state of the PEBBED code suite, provides an overview of the Benchmark problems to which it was applied, discusses the code developments achieved in the past year, and states some of the results attained. Results of the steady state problems generated by the PEBBED fuel management code compare favorably to the preliminary results generated by codes from other participating institutions and to similar non-Benchmark analyses. Partial transient analysis capability has been achieved through the acquisition of the NEM-THERMIX code from Penn State University. Phase I of the task has been achieved through the development of a self-consistent set of tools for generating cross sections for design and transient analysis and in the successful execution of the steady state benchmark exercises.

  13. BigDataBench: a Big Data Benchmark Suite from Internet Services

    OpenAIRE

    Wang, Lei; Zhan, Jianfeng; Luo, Chunjie; Zhu, Yuqing; Yang, Qiang; He, Yongqiang; Gao, Wanling; Jia, Zhen; Shi, Yingjie; Zhang, Shujie; Zheng, Chen; Lu, Gang; Zhan, Kent; Li, Xiaona; Qiu, Bizhu

    2014-01-01

    As architecture, systems, and data management communities pay greater attention to innovative big data systems and architectures, the pressure of benchmarking and evaluating these systems rises. Considering the broad use of big data systems, big data benchmarks must include diversity of data and workloads. Most of the state-of-the-art big data benchmarking efforts target evaluating specific types of applications or system software stacks, and hence they are not qualified for serving the purpo...

  14. Piping benchmark problems for the Westinghouse AP600 Standardized Plant

    International Nuclear Information System (INIS)

    Bezler, P.; DeGrassi, G.; Braverman, J.; Wang, Y.K.

    1997-01-01

    To satisfy the need for verification of the computer programs and modeling techniques that will be used to perform the final piping analyses for the Westinghouse AP600 Standardized Plant, three benchmark problems were developed. The problems are representative piping systems subjected to representative dynamic loads with solutions developed using the methods being proposed for analysis for the AP600 standard design. It will be required that the combined license licensees demonstrate that their solutions to these problems are in agreement with the benchmark problem set

  15. Assessment of Usability Benchmarks: Combining Standardized Scales with Specific Questions

    Directory of Open Access Journals (Sweden)

    Stephanie Bettina Linek

    2011-12-01

    The usability of Web sites and online services is of rising importance. When creating a completely new Web site, qualitative data are adequate for identifying most of the usability problems. However, changes to an existing Web site should be evaluated through a quantitative benchmarking process. This paper describes the creation of a questionnaire that allows quantitative usability benchmarking, i.e., a direct comparison of the different versions of a Web site and an orientation toward general standards of usability. The questionnaire is also open for qualitative data. The methodology is illustrated using the digital library services of the ZBW.
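
    As a sketch of the quantitative side of such a benchmark, the snippet below scores the 10-item System Usability Scale (SUS), one widely used standardized scale; the paper's ZBW questionnaire is its own instrument, and the response vector here is illustrative.

        def sus_score(responses):
            """Score 10 SUS answers (1-5 Likert scale, item 1 first) on a 0-100 scale."""
            total = 0
            for item, r in enumerate(responses, start=1):
                # Odd items are positively worded, even items negatively worded
                total += (r - 1) if item % 2 == 1 else (5 - r)
            return total * 2.5

        print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0 for this example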

  16. BigDataBench: a Big Data Benchmark Suite from Web Search Engines

    OpenAIRE

    Gao, Wanling; Zhu, Yuqing; Jia, Zhen; Luo, Chunjie; Wang, Lei; Li, Zhiguo; Zhan, Jianfeng; Qi, Yong; He, Yongqiang; Gong, Shiming; Li, Xiaona; Zhang, Shujie; Qiu, Bizhu

    2013-01-01

    This paper presents our joint research efforts on big data benchmarking with several industrial partners. Considering the complexity, diversity, workload churns, and rapid evolution of big data systems, we take an incremental approach in big data benchmarking. For the first step, we pay attention to search engines, which are the most important domain in Internet services in terms of the number of page views and daily visitors. However, search engine service providers treat data, applications,...

  17. Using PISA as an International Benchmark in Standard Setting.

    Science.gov (United States)

    Phillips, Gary W; Jiang, Tao

    2015-01-01

    This study describes how the Programme for International Student Assessment (PISA) can be used to internationally benchmark state performance standards. The process is accomplished in three steps. First, PISA items are embedded in the administration of the state assessment and calibrated on the state scale. Second, the international item calibrations are then used to link the state scale to the PISA scale through common item linking. Third, the statistical linking results are used as part of the state standard setting process to help standard setting panelists determine how high their state standards need to be in order to be internationally competitive. This process was carried out in Delaware, Hawaii, and Oregon, in three subjects (science, mathematics, and reading), with initial results reported by Phillips and Jiang (2011). An in-depth discussion of methods and results is reported in this article for one subject (mathematics) and one state (Hawaii).
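
    A minimal sketch of the common-item linking in step two follows, using a mean-sigma transformation; the item difficulties are invented for illustration, and the actual study used full IRT calibrations.

        import numpy as np

        # Difficulties of the same embedded items on the two scales (illustrative)
        state_b = np.array([-1.2, -0.4, 0.3, 1.1, 1.8])
        pisa_b = np.array([-0.9, -0.1, 0.6, 1.5, 2.1])

        # Mean-sigma linking: pisa = A * state + B
        A = pisa_b.std(ddof=1) / state_b.std(ddof=1)
        B = pisa_b.mean() - A * state_b.mean()

        def to_pisa_scale(theta_state):
            """Express a state-scale score on the PISA scale."""
            return A * theta_state + B

        print(round(to_pisa_scale(0.5), 3))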

  18. Comparative analysis of results between CASMO, MCNP and Serpent for a suite of Benchmark problems on BWR reactors

    Energy Technology Data Exchange (ETDEWEB)

    Xolocostli M, J. V.; Vargas E, S.; Gomez T, A. M. [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Reyes F, M. del C.; Del Valle G, E., E-mail: vicente.xolocostli@inin.gob.mx [IPN, Escuela Superior de Fisica y Matematicas, UP - Adolfo Lopez Mateos, Edif. 9, 07738 Mexico D. F. (Mexico)

    2014-10-15

    In this paper a comparison is made between the CASMO-4, MCNP6, and Serpent codes for a suite of benchmark problems on BWR reactors. The benchmark consists of two different geometries: a fuel pin cell and a BWR-type assembly. To facilitate the study of reactor physics in the fuel pin, its nuclear characteristics are provided in detail, such as the burnup dependence of the reactivity of selected nuclides. With respect to the fuel assembly, the results presented concern the infinite multiplication factor at different burnup steps and different void conditions. The analysis of this set of benchmark problems provides comprehensive test problems for the next generation of BWR fuels with extended burnup. It is important to note that the purpose of this comparison is to validate the methodologies used in modeling different operating conditions, as would arise for other BWR assemblies; the results will lie within a range with some uncertainty, regardless of the code used. The Escuela Superior de Fisica y Matematicas of the Instituto Politecnico Nacional (IPN, Mexico) has accumulated experience in using Serpent, owing to the potential of this code compared with commercial codes such as CASMO and MCNP. The results obtained for the infinite multiplication factor are encouraging and motivate continued studies on the generation of the cross sections of a core, so that in a next step a corresponding nuclear data library can be constructed for use by codes developed as part of the development project of the Mexican Analysis Platform of Nuclear Reactors, AZTLAN. (Author)

  19. Comparative analysis of results between CASMO, MCNP and Serpent for a suite of Benchmark problems on BWR reactors

    International Nuclear Information System (INIS)

    Xolocostli M, J. V.; Vargas E, S.; Gomez T, A. M.; Reyes F, M. del C.; Del Valle G, E.

    2014-10-01

    In this paper a comparison is made between the CASMO-4, MCNP6, and Serpent codes for a suite of benchmark problems on BWR reactors. The benchmark consists of two different geometries: a fuel pin cell and a BWR-type assembly. To facilitate the study of reactor physics in the fuel pin, its nuclear characteristics are provided in detail, such as the burnup dependence of the reactivity of selected nuclides. With respect to the fuel assembly, the results presented concern the infinite multiplication factor at different burnup steps and different void conditions. The analysis of this set of benchmark problems provides comprehensive test problems for the next generation of BWR fuels with extended burnup. It is important to note that the purpose of this comparison is to validate the methodologies used in modeling different operating conditions, as would arise for other BWR assemblies; the results will lie within a range with some uncertainty, regardless of the code used. The Escuela Superior de Fisica y Matematicas of the Instituto Politecnico Nacional (IPN, Mexico) has accumulated experience in using Serpent, owing to the potential of this code compared with commercial codes such as CASMO and MCNP. The results obtained for the infinite multiplication factor are encouraging and motivate continued studies on the generation of the cross sections of a core, so that in a next step a corresponding nuclear data library can be constructed for use by codes developed as part of the development project of the Mexican Analysis Platform of Nuclear Reactors, AZTLAN. (Author)
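
    To make the nature of such a code-to-code comparison concrete, a minimal sketch follows that expresses differences in infinite multiplication factors as reactivity differences in pcm; the k-infinity values are hypothetical, not results from the paper.

        def delta_rho_pcm(k_ref, k_test):
            """Reactivity difference (1/k_ref - 1/k_test) expressed in pcm."""
            return (1.0 / k_ref - 1.0 / k_test) * 1.0e5

        # Hypothetical k-infinity values for one burnup step and void condition
        k_casmo, k_mcnp, k_serpent = 1.10502, 1.10431, 1.10468
        print(f"MCNP6 vs CASMO-4: {delta_rho_pcm(k_casmo, k_mcnp):+.1f} pcm")
        print(f"Serpent vs CASMO-4: {delta_rho_pcm(k_casmo, k_serpent):+.1f} pcm")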

  20. Suite of Benchmark Tests to Conduct Mesh-Convergence Analysis of Nonlinear and Non-constant Coefficient Transport Codes

    Science.gov (United States)

    Zamani, K.; Bombardelli, F. A.

    2014-12-01

    Verification of geophysics codes is imperative to avoid serious academic as well as practical consequences. In cases where access to a given source code is not possible, the Method of Manufactured Solutions (MMS) cannot be employed in code verification. In contrast, employing the Method of Exact Solutions (MES) has several practical advantages. In this research, we first provide four new one-dimensional analytical solutions designed for code verification; these solutions are able to uncover particular imperfections in solvers of the advection-diffusion-reaction equation, such as nonlinear advection, diffusion, or source terms, as well as non-constant-coefficient equations. After that, we provide a solution of Burgers' equation in a novel setup. The proposed solutions satisfy continuity of mass for the ambient flow, which is a crucial factor for coupled hydrodynamics-transport solvers. Then, we use the derived analytical solutions for code verification. To clarify gray-literature issues in the verification of transport codes, we designed a comprehensive test suite to uncover any imperfection in transport solvers via a hierarchical increase in the level of the tests' complexity. The test suite includes hundreds of unit tests and system tests to check individual portions of the code. The tests start with a simple case of unidirectional advection, proceed to bidirectional advection and tidal flow, and build up to nonlinear cases. We design tests to check nonlinearity in velocity, dispersivity, and reactions. The concealing effect of scales (Peclet and Damkohler numbers) on the mesh-convergence study and appropriate remedies are also discussed. For the cases in which appropriate benchmarks for a mesh-convergence study are not available, we utilize symmetry. Auxiliary subroutines for automation of the test suite and report generation are designed. All in all, the test package is not only a robust tool for code verification but it also provides comprehensive...
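
    The core arithmetic of such an MES-based mesh-convergence check is compact; the sketch below estimates the observed order of accuracy from errors against an exact solution on successively refined grids, with illustrative error values.

        import numpy as np

        def observed_order(e_coarse, e_fine, ratio=2.0):
            """Observed order p in e ~ C h^p, from errors on grids h and h/ratio."""
            return np.log(e_coarse / e_fine) / np.log(ratio)

        errors = [4.0e-2, 1.1e-2, 2.9e-3]  # illustrative L2 errors on h, h/2, h/4 grids
        for e1, e2 in zip(errors, errors[1:]):
            print(f"observed order ~ {observed_order(e1, e2):.2f}")  # ~2 for a 2nd-order scheme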

  1. Benchmarking with the BLASST Sessional Staff Standards Framework

    Science.gov (United States)

    Luzia, Karina; Harvey, Marina; Parker, Nicola; McCormack, Coralie; Brown, Natalie R.

    2013-01-01

    Benchmarking as a type of knowledge-sharing around good practice within and between institutions is increasingly common in the higher education sector. More recently, benchmarking as a process that can contribute to quality enhancement has been deployed across numerous institutions with a view to systematising frameworks to assure and enhance the…

  2. Standard Guide for Benchmark Testing of Light Water Reactor Calculations

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...
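
    The guide's use of benchmark fields boils down to calculated-to-experimental (C/E) comparisons; a minimal sketch with illustrative reaction-rate values follows, where the mean C/E serves as a bias indicator for the calculational method.

        import numpy as np

        calculated = np.array([3.92e-11, 1.21e-10, 7.80e-12])    # code-predicted reaction rates
        experimental = np.array([4.10e-11, 1.18e-10, 8.05e-12])  # measured reaction rates

        ce = calculated / experimental
        print("C/E per benchmark field:", np.round(ce, 3))
        print(f"mean C/E (bias indicator): {ce.mean():.3f}")  # 1.0 means no bias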

  3. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  4. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Directory of Open Access Journals (Sweden)

    Shane Ó Conchúir

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  5. Safety, codes and standards for hydrogen installations. Metrics development and benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Harris, Aaron P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dedrick, Daniel E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); LaFleur, Angela Christine [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); San Marchi, Christopher W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-04-01

    Automakers and fuel providers have made public commitments to commercialize light duty fuel cell electric vehicles and fueling infrastructure in select US regions beginning in 2014. The development, implementation, and advancement of meaningful codes and standards is critical to enable the effective deployment of clean and efficient fuel cell and hydrogen solutions in the energy technology marketplace. Metrics pertaining to the development and implementation of safety knowledge, codes, and standards are important to communicate progress and inform future R&D investments. This document describes the development and benchmarking of metrics specific to the development of hydrogen specific codes relevant for hydrogen refueling stations. These metrics will be most useful as the hydrogen fuel market transitions from pre-commercial to early-commercial phases. The target regions in California will serve as benchmarking case studies to quantify the success of past investments in research and development supporting safety codes and standards R&D.

  6. LHC benchmark scenarios for the real Higgs singlet extension of the standard model

    International Nuclear Information System (INIS)

    Robens, Tania; Stefaniak, Tim

    2016-01-01

    We present benchmark scenarios for searches for an additional Higgs state in the real Higgs singlet extension of the Standard Model in Run 2 of the LHC. The scenarios are selected such that they fulfill all relevant current theoretical and experimental constraints, but can potentially be discovered at the current LHC run. We take into account the results presented in earlier work and update the experimental constraints from relevant LHC Higgs searches and signal rate measurements. The benchmark scenarios are given separately for the low-mass and high-mass region, i.e. the mass range where the additional Higgs state is lighter or heavier than the discovered Higgs state at around 125 GeV. They have also been presented in the framework of the LHC Higgs Cross Section Working Group. (orig.)

  7. A comparison of global optimization algorithms with standard benchmark functions and real-world applications using Energy Plus

    Energy Technology Data Exchange (ETDEWEB)

    Kamph, Jerome Henri; Robinson, Darren; Wetter, Michael

    2009-09-01

    There is an increasing interest in the use of computer algorithms to identify combinations of parameters which optimise the energy performance of buildings. For such problems, the objective function can be multi-modal and needs to be approximated numerically using building energy simulation programs. As these programs contain iterative solution algorithms, they introduce discontinuities in the numerical approximation to the objective function. Metaheuristics often work well for such problems, but their convergence to a global optimum cannot be established formally. Moreover, different algorithms tend to be suited to particular classes of optimization problems. To shed light on this issue we compared the performance of two metaheuristics, the hybrid CMA-ES/HDE and the hybrid PSO/HJ, in minimizing standard benchmark functions and real-world building energy optimization problems of varying complexity. From this we find that the CMA-ES/HDE performs well on more complex objective functions, but that the PSO/HJ more consistently identifies the global minimum for simpler objective functions. Both identified similar values in the objective functions arising from energy simulations, but with different combinations of model parameters. This may suggest that the objective function is multi-modal. The algorithms also correctly identified some non-intuitive parameter combinations that were caused by a simplified control sequence of the building energy system that does not represent actual practice, further reinforcing their utility.
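
    The hybrid CMA-ES/HDE and PSO/HJ algorithms from the paper are not in standard libraries, but the flavor of the comparison can be sketched with SciPy stand-ins: a population-based metaheuristic versus a local direct search on a standard multi-modal benchmark function.

        import numpy as np
        from scipy.optimize import differential_evolution, minimize

        def rastrigin(x):
            """Standard multi-modal benchmark; global minimum 0 at the origin."""
            x = np.asarray(x)
            return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

        bounds = [(-5.12, 5.12)] * 5
        de = differential_evolution(rastrigin, bounds, seed=1)               # metaheuristic
        nm = minimize(rastrigin, x0=np.full(5, 3.0), method="Nelder-Mead")   # local direct search

        print("differential evolution:", round(de.fun, 6))  # near the global minimum
        print("Nelder-Mead:", round(nm.fun, 6))             # typically stuck in a local minimum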

  8. Standardizing Benchmark Dose Calculations to Improve Science-Based Decisions in Human Health Assessments

    Science.gov (United States)

    Wignall, Jessica A.; Shapiro, Andrew J.; Wright, Fred A.; Woodruff, Tracey J.; Chiu, Weihsueh A.; Guyton, Kathryn Z.

    2014-01-01

    Background: Benchmark dose (BMD) modeling computes the dose associated with a prespecified response level. While offering advantages over traditional points of departure (PODs), such as no-observed-adverse-effect-levels (NOAELs), BMD methods have lacked consistency and transparency in application, interpretation, and reporting in human health assessments of chemicals. Objectives: We aimed to apply a standardized process for conducting BMD modeling to reduce inconsistencies in model fitting and selection. Methods: We evaluated 880 dose–response data sets for 352 environmental chemicals with existing human health assessments. We calculated benchmark doses and their lower limits [10% extra risk, or change in the mean equal to 1 SD (BMD/L10/1SD)] for each chemical in a standardized way with prespecified criteria for model fit acceptance. We identified study design features associated with acceptable model fits. Results: We derived values for 255 (72%) of the chemicals. Batch-calculated BMD/L10/1SD values were significantly and highly correlated (R2 of 0.95 and 0.83, respectively, n = 42) with PODs previously used in human health assessments, with values similar to reported NOAELs. Specifically, the median ratio of BMDs10/1SD:NOAELs was 1.96, and the median ratio of BMDLs10/1SD:NOAELs was 0.89. We also observed a significant trend of increasing model viability with increasing number of dose groups. Conclusions: BMD/L10/1SD values can be calculated in a standardized way for use in health assessments on a large number of chemicals and critical effects. This facilitates the exploration of health effects across multiple studies of a given chemical or, when chemicals need to be compared, providing greater transparency and efficiency than current approaches. Citation: Wignall JA, Shapiro AJ, Wright FA, Woodruff TJ, Chiu WA, Guyton KZ, Rusyn I. 2014. Standardizing benchmark dose calculations to improve science-based decisions in human health assessments. Environ Health
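
    After a dose-response model has been fitted, the benchmark dose itself is the solution of a one-dimensional equation; the sketch below solves for the dose giving 10% extra risk under an illustrative logistic model (the paper's standardized workflow fits and compares several model families, which this sketch does not reproduce).

        import numpy as np
        from scipy.optimize import brentq

        a, b = -3.0, 0.8  # illustrative fitted logistic parameters

        def p(dose):
            """Probability of response at a given dose (logistic model)."""
            return 1.0 / (1.0 + np.exp(-(a + b * dose)))

        def extra_risk(dose):
            return (p(dose) - p(0.0)) / (1.0 - p(0.0))

        # BMD10: dose at which extra risk reaches the 10% benchmark response
        bmd10 = brentq(lambda d: extra_risk(d) - 0.10, 1e-9, 100.0)
        print(f"BMD10 = {bmd10:.3f} (dose units)")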

  9. Singlet extensions of the standard model at LHC Run 2: benchmarks and comparison with the NMSSM

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Raul [Centro de Física Teórica e Computacional, Faculdade de Ciências,Universidade de Lisboa, Campo Grande, Edifício C8 1749-016 Lisboa (Portugal); Departamento de Física da Universidade de Aveiro,Campus de Santiago, 3810-183 Aveiro (Portugal); Mühlleitner, Margarete [Institute for Theoretical Physics, Karlsruhe Institute of Technology,76128 Karlsruhe (Germany); Sampaio, Marco O.P. [Departamento de Física da Universidade de Aveiro,Campus de Santiago, 3810-183 Aveiro (Portugal); CIDMA - Center for Research Development in Mathematics and Applications,Campus de Santiago, 3810-183 Aveiro (Portugal); Santos, Rui [Centro de Física Teórica e Computacional, Faculdade de Ciências,Universidade de Lisboa, Campo Grande, Edifício C8 1749-016 Lisboa (Portugal); ISEL - Instituto Superior de Engenharia de Lisboa,Instituto Politécnico de Lisboa, 1959-007 Lisboa (Portugal)

    2016-06-07

    The Complex singlet extension of the Standard Model (CxSM) is the simplest extension that provides scenarios for Higgs pair production with different masses. The model has two interesting phases: the dark matter phase, with a Standard Model-like Higgs boson, a new scalar and a dark matter candidate; and the broken phase, with all three neutral scalars mixing. In the latter phase Higgs decays into a pair of two different Higgs bosons are possible. In this study we analyse Higgs-to-Higgs decays in the framework of singlet extensions of the Standard Model (SM), with focus on the CxSM. After demonstrating that scenarios with large rates for such chain decays are possible we perform a comparison between the NMSSM and the CxSM. We find that, based on Higgs-to-Higgs decays, the only possibility to distinguish the two models at the LHC run 2 is through final states with two different scalars. This conclusion builds a strong case for searches for final states with two different scalars at the LHC run 2. Finally, we propose a set of benchmark points for the real and complex singlet extensions to be tested at the LHC run 2. They have been chosen such that the discovery prospects of the involved scalars are maximised and they fulfil the dark matter constraints. Furthermore, for some of the points the theory is stable up to high energy scales. For the computation of the decay widths and branching ratios we developed the Fortran code sHDECAY, which is based on the implementation of the real and complex singlet extensions of the SM in HDECAY.

  10. Stakeholder insights on the planning and development of an independent benchmark standard for responsible food marketing.

    Science.gov (United States)

    Cairns, Georgina; Macdonald, Laura

    2016-06-01

    A mixed methods qualitative survey investigated stakeholder responses to the proposal to develop an independently defined, audited and certifiable set of benchmark standards for responsible food marketing. Its purpose was to inform the policy planning and development process. A majority of respondents were supportive of the proposal. A majority also viewed the engagement and collaboration of a broad base of stakeholders in its planning and development as potentially beneficial. Positive responses were associated with views that policy controls can and should be extended to include all forms of marketing, that obesity and non-communicable disease prevention and control was a shared responsibility and an urgent policy priority, and with prior experience of independent standardisation as a policy lever for good practice. Strong policy leadership, demonstrable utilisation of the evidence base in its development and deployment, and a conceptually clear communications plan were identified as priority targets for future policy planning. Future research priorities include generating more evidence on the feasibility of developing an effective community of practice and theory of change, the strengths and limitations of these, and developing an evidence-based step-wise communications strategy.

  11. Reference and standard benchmark field consensus fission yields for U.S. reactor dosimetry programs

    International Nuclear Information System (INIS)

    Gilliam, D.M.; Helmer, R.G.; Greenwood, R.C.; Rogers, J.W.; Heinrich, R.R.; Popek, R.J.; Kellogg, L.S.; Lippincott, E.P.; Hansen, G.E.; Zimmer, W.H.

    1977-01-01

    Measured fission product yields are reported for three benchmark neutron fields--the BIG-10 fast critical assembly at Los Alamos, the CFRMF fast neutron cavity at INEL, and the thermal column of the NBS Research Reactor. These measurements were carried out by participants in the Interlaboratory LMFBR Reaction Rates (ILRR) program. Fission product generation rates were determined by post-irradiation analysis of gamma-ray emission from fission activation foils. The gamma counting was performed by Ge(Li) spectrometry at INEL, ANL, and HEDL; the sample sent to INEL was also analyzed by NaI(Tl) spectrometry for Ba-140 content. The fission rates were determined by means of the NBS Double Fission Ionization Chamber using thin deposits of each of the fissionable isotopes. Four fissionable isotopes were included in the fast neutron field measurements; these were U-235, U-238, Pu-239, and Np-237. Only U-235 was included in the thermal neutron yield measurements. For the fast neutron fields, consensus yields were determined for three fission product isotopes--Zr-95, Ru-103, and Ba-140. For these fission product isotopes, a separately activated foil was analyzed by each of the three gamma counting laboratories. The experimental standard deviation of the three independent results was typically +- 1.5%. For the thermal neutron field, a consensus value for the Cs-137 yield was also obtained. Subsidiary fission yields are also reported for other isotopes which were studied less intensively (usually by only one of the participating laboratories). Comparisons with EBR-II fast reactor yields from destructive analysis and with ENDF/B recommended values are given
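
    A minimal sketch of forming an interlaboratory consensus value and the quoted +- 1.5% experimental standard deviation follows; the three per-laboratory yield values are invented for illustration.

        import numpy as np

        lab_yields = np.array([0.0642, 0.0651, 0.0647])  # illustrative results from three labs

        consensus = lab_yields.mean()
        rel_sd_pct = lab_yields.std(ddof=1) / consensus * 100.0

        print(f"consensus yield: {consensus:.4f}")
        print(f"experimental std dev: +-{rel_sd_pct:.2f}%")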

  12. Comparisons of the MCNP criticality benchmark suite with ENDF/B-VI.8, JENDL-3.3, and JEFF-3.0

    International Nuclear Information System (INIS)

    Kim, Do Heon; Gil, Choong-Sup; Kim, Jung-Do; Chang, Jonghwa

    2003-01-01

    A comparative study has been performed with the latest evaluated nuclear data libraries ENDF/B-VI.8, JENDL-3.3, and JEFF-3.0. The study has been conducted through benchmark calculations for 91 criticality problems with the libraries processed for MCNP4C. The calculation results have been compared with those of the ENDF60 library. The self-shielding effects of the unresolved-resonance (UR) probability tables have also been estimated for each library. The χ² differences between the MCNP results and experimental data were calculated for the libraries. (author)
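
    The chi-square figure of merit used in such library comparisons is a sum of squared, uncertainty-weighted deviations between calculated and experimental results; a minimal sketch with illustrative k-eff values follows.

        import numpy as np

        calc = np.array([1.0012, 0.9987, 1.0043])    # calculated k-eff per benchmark problem
        expt = np.array([1.0000, 1.0000, 1.0000])    # experimental k-eff
        sigma = np.array([0.0020, 0.0015, 0.0030])   # combined 1-sigma uncertainties

        chi2 = np.sum(((calc - expt) / sigma) ** 2)
        print(f"chi-square over {calc.size} benchmarks: {chi2:.2f}")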

  13. The MSRC Ab Initio Methods Benchmark Suite: A measurement of hardware and software performance in the area of electronic structure methods

    Energy Technology Data Exchange (ETDEWEB)

    Feller, D.F.

    1993-07-01

    This collection of benchmark timings represents a snapshot of the hardware and software capabilities available for ab initio quantum chemical calculations at Pacific Northwest Laboratory's Molecular Science Research Center in late 1992 and early 1993. The "snapshot" nature of these results should not be underestimated, because of the speed with which both hardware and software are changing. Even during the brief period of this study, we were presented with newer, faster versions of several of the codes. However, the deadline for completing this edition of the benchmarks precluded updating all the relevant entries in the tables. As will be discussed below, a similar situation occurred with the hardware. The timing data included in this report are subject to all the normal failures, omissions, and errors that accompany any human activity. In an attempt to mimic the manner in which calculations are typically performed, we have run the calculations with the maximum number of defaults provided by each program and a near minimum amount of memory. This approach may not produce the fastest performance that a particular code can deliver. It is not known to what extent improved timings could be obtained for each code by varying the run parameters. If sufficient interest exists, it might be possible to compile a second list of timing data corresponding to the fastest observed performance from each application, using an unrestricted set of input parameters. Improvements in I/O might have been possible by fine tuning the Unix kernel, but we resisted the temptation to make changes to the operating system. Due to the large number of possible variations in levels of operating system, compilers, speed of disks and memory, versions of applications, etc., readers of this report may not be able to exactly reproduce the times indicated. Copies of the output files from individual runs are available if questions arise about a particular set of timings.

  14. PEPSI deep spectra. II. Gaia benchmark stars and other M-K standards

    Science.gov (United States)

    Strassmeier, K. G.; Ilyin, I.; Weber, M.

    2018-04-01

    Context: High-resolution échelle spectra confine many essential stellar parameters once the data reach a quality appropriate to constrain the various physical processes that form these spectra. Aims: We provide a homogeneous library of high-resolution, high-S/N spectra for 48 bright AFGKM stars, some of them approaching the quality of solar-flux spectra. Our sample includes the northern Gaia benchmark stars, some solar analogs, and some other bright Morgan-Keenan (M-K) spectral standards. Methods: Well-exposed deep spectra were created by average-combining individual exposures. The data-reduction process relies on adaptive selection of parameters by using statistical inference and robust estimators. We employed spectrum synthesis techniques and statistics tools in order to characterize the spectra and give a first quick look at some of the science cases possible. Results: With an average spectral resolution of R ≈ 220 000 (1.36 km s⁻¹), a continuous wavelength coverage from 383 nm to 912 nm, and S/N of between 70:1 for the faintest star in the extreme blue and 6000:1 for the brightest star in the red, these spectra are now made public for further data mining and analysis. Preliminary results include new stellar parameters for 70 Vir and α Tau, the detection of the rare-earth element dysprosium and the heavy elements uranium, thorium and neodymium in several RGB stars, and the use of the ¹²C to ¹³C isotope ratio for age-related determinations. We also found Arcturus to exhibit few-percent Ca II H&K and Hα residual profile changes with respect to the KPNO atlas taken in 1999. Based on data acquired with PEPSI using the Large Binocular Telescope (LBT) and the Vatican Advanced Technology Telescope (VATT). The LBT is an international collaboration among institutions in the United States, Italy, and Germany. LBT Corporation partners are the University of Arizona on behalf of the Arizona university system; Istituto Nazionale di Astrofisica, Italy; LBT...

  15. EVA Health and Human Performance Benchmarking Study

    Science.gov (United States)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses for humans working inside different EVA suits doing functional tasks under the appropriate simulated reduced gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as shirtsleeves using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits, and different suit configurations (e.g., varied pressure, mass, CG) may be reliably compared in subsequent tests. Results will also inform fitness-for-duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  16. Hospital website rankings in the United States: expanding benchmarks and standards for effective consumer engagement.

    Science.gov (United States)

    Huerta, Timothy R; Hefner, Jennifer L; Ford, Eric W; McAlearney, Ann Scheck; Menachemi, Nir

    2014-02-25

    Passage of the Patient Protection and Affordable Care Act (ACA) increased the roles hospitals and health systems play in care delivery and led to a wave of consolidation of medical groups and hospitals. As such, the traditional patient interaction with an independent medical provider is becoming far less common, replaced by frequent interactions with integrated medical groups and health systems. It is thus increasingly important for these organizations to have an effective social media presence. Moreover, in the age of the informed consumer, patients desire a readily accessible, electronic interface to initiate contact, making a well-designed website and social media strategy critical features of the modern health care organization. The purpose of this study was to assess the Web presence of hospitals and their health systems on five dimensions: accessibility, content, marketing, technology, and usability. In addition, an overall ranking was calculated to identify the top 100 hospital and health system websites. A total of 2407 unique Web domains covering 2785 hospital facilities or their parent organizations were identified and matched against the 2009 American Hospital Association (AHA) Annual Survey. This is a four-fold improvement over prior research and represents what the authors believe to be a census assessment of the online presence of US hospitals and their health systems. Each of the five dimensions was investigated with an automated content analysis using a suite of tools. Scores on the dimensions are reported on a range from 0 to 10, with a higher score on any given dimension representing better comparative performance. Rankings on each dimension and an average ranking are provided for the top 100 hospitals. The mean score on the usability dimension, meant to rate overall website quality, was 5.16 (SD 1.43), with the highest score of 8 shared by only 5 hospitals. Mean scores on other dimensions were between 4.43 (SD 2.19) and 6.49 (SD 0.96). Based on...

  17. Variation in assessment and standard setting practices across UK undergraduate medicine and the need for a benchmark.

    Science.gov (United States)

    MacDougall, Margaret

    2015-10-31

    The principal aim of this study is to provide an account of variation in UK undergraduate medical assessment styles and corresponding standard setting approaches with a view to highlighting the importance of a UK national licensing exam in recognizing a common standard. Using a secure online survey system, response data were collected during the period 13 - 30 January 2014 from selected specialists in medical education assessment, who served as representatives for their respective medical schools. Assessment styles and corresponding choices of standard setting methods vary markedly across UK medical schools. While there is considerable consensus on the application of compensatory approaches, individual schools display their own nuances through use of hybrid assessment and standard setting styles, uptake of less popular standard setting techniques and divided views on norm referencing. The extent of variation in assessment and standard setting practices across UK medical schools validates the concern that there is a lack of evidence that UK medical students achieve a common standard on graduation. A national licensing exam is therefore a viable option for benchmarking the performance of all UK undergraduate medical students.

  18. TRAK App Suite: A Web-Based Intervention for Delivering Standard Care for the Rehabilitation of Knee Conditions.

    Science.gov (United States)

    Spasić, Irena; Button, Kate; Divoli, Anna; Gupta, Satyam; Pataky, Tamas; Pizzocaro, Diego; Preece, Alun; van Deursen, Robert; Wilson, Chris

    2015-10-16

    Standard care for the rehabilitation of knee conditions involves exercise programs and information provision. Current methods of rehabilitation delivery struggle to keep up with large volumes of patients and the length of treatment required to maximize the recovery. Therefore, the development of novel interventions to support self-management is strongly recommended. Such interventions need to include information provision, goal setting, monitoring, feedback, and support groups, but the most effective methods of their delivery are poorly understood. The Internet provides a medium for intervention delivery with considerable potential for meeting these needs. The objective of this study was to demonstrate the feasibility of a Web-based app and to conduct a preliminary review of its practicability as part of a complex medical intervention in the rehabilitation of knee disorders. This paper describes the development, implementation, and usability of such an app. An interdisciplinary team of health care professionals and researchers, computer scientists, and app developers developed the TRAK app suite. The key functionality of the app includes information provision, a three-step exercise program based on a standard care for the rehabilitation of knee conditions, self-monitoring with visual feedback, and a virtual support group. There were two types of stakeholders (patients and physiotherapists) that were recruited for the usability study. The usability questionnaire was used to collect both qualitative and quantitative information on computer and Internet usage, task completion, and subjective user preferences. A total of 16 patients and 15 physiotherapists participated in the usability study. Based on the System Usability Scale, the TRAK app has higher perceived usability than 70% of systems. Both patients and physiotherapists agreed that the given Web-based approach would facilitate communication, provide information, help recall information, improve understanding

  19. Benchmarking the Naval Systems Engineering Guide Against the Industry Standards for Systems Engineering

    Science.gov (United States)

    2012-09-01

    are all using the same standard or comparable ones. While one would think that there is coherency among those contractors and others building and... American Scientists, 2011). Adaptation and modification are significant parts of SE thinking and methodology that need to be well used and available to... [Recovered fragment of a flattened comparison table rating topic coverage as None, Partial Cover, or Cover across the compared standards: Obsolescence (None, None, None, Cover); Disposal Standards (None, None, Partial Cover, Cover); Data Gathering (None, Cover, Cover, Cover); Environmental Concerns (None, ...).]

  20. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Wegner, P.; Wettig, T.

    2003-09-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC. (orig.)

  1. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Stueben, H.; Wegner, P.; Wettig, T.; Wittig, H.

    2004-01-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC

  2. Benchmarking quantitative label-free LC-MS data processing workflows using a complex spiked proteomic standard dataset.

    Science.gov (United States)

    Ramus, Claire; Hovasse, Agnès; Marcellin, Marlène; Hesse, Anne-Marie; Mouton-Barbosa, Emmanuelle; Bouyssié, David; Vaca, Sebastian; Carapito, Christine; Chaoui, Karima; Bruley, Christophe; Garin, Jérôme; Cianférani, Sarah; Ferro, Myriam; Van Dorssaeler, Alain; Burlet-Schiltz, Odile; Schaeffer, Christine; Couté, Yohann; Gonzalez de Peredo, Anne

    2016-01-30

    Proteomic workflows based on nanoLC-MS/MS data-dependent-acquisition analysis have progressed tremendously in recent years. High-resolution and fast sequencing instruments have enabled the use of label-free quantitative methods, based either on spectral counting or on MS signal analysis, which appear as an attractive way to analyze differential protein expression in complex biological samples. However, the computational processing of the data for label-free quantification still remains a challenge. Here, we used a proteomic standard composed of an equimolar mixture of 48 human proteins (Sigma UPS1) spiked at different concentrations into a background of yeast cell lysate to benchmark several label-free quantitative workflows, involving different software packages developed in recent years. This experimental design allowed to finely assess their performances in terms of sensitivity and false discovery rate, by measuring the number of true and false-positive (respectively UPS1 or yeast background proteins found as differential). The spiked standard dataset has been deposited to the ProteomeXchange repository with the identifier PXD001819 and can be used to benchmark other label-free workflows, adjust software parameter settings, improve algorithms for extraction of the quantitative metrics from raw MS data, or evaluate downstream statistical methods. Bioinformatic pipelines for label-free quantitative analysis must be objectively evaluated in their ability to detect variant proteins with good sensitivity and low false discovery rate in large-scale proteomic studies. This can be done through the use of complex spiked samples, for which the "ground truth" of variant proteins is known, allowing a statistical evaluation of the performances of the data processing workflow. We provide here such a controlled standard dataset and used it to evaluate the performances of several label-free bioinformatics tools (including MaxQuant, Skyline, MFPaQ, IRMa-hEIDI and Scaffold) in
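
    Because the ground truth of the spiked standard is known, scoring a workflow reduces to set arithmetic; the sketch below computes sensitivity and the false discovery proportion from an illustrative list of proteins reported as differential.

        # 48 UPS1 proteins are the ground-truth variants; yeast proteins are background
        ups1_spiked = {f"UPS1_{i}" for i in range(1, 49)}
        reported = {f"UPS1_{i}" for i in range(1, 41)} | {"YEAST_A", "YEAST_B"}

        tp = len(reported & ups1_spiked)   # true positives: spiked proteins found differential
        fp = len(reported - ups1_spiked)   # false positives: background proteins found

        print(f"sensitivity: {tp / len(ups1_spiked):.2f}")           # 40/48
        print(f"false discovery proportion: {fp / len(reported):.2f}")  # 2/42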

  3. Toward the rational use of standardized infection ratios to benchmark surgical site infections.

    Science.gov (United States)

    Fukuda, Haruhisa; Morikane, Keita; Kuroki, Manabu; Taniguchi, Shinichiro; Shinzato, Takashi; Sakamoto, Fumie; Okada, Kunihiko; Matsukawa, Hiroshi; Ieiri, Yuko; Hayashi, Kouji; Kawai, Shin

    2013-09-01

    The National Healthcare Safety Network transitioned from surgical site infection (SSI) rates to the standardized infection ratio (SIR) calculated by statistical models that included perioperative factors (surgical approach and surgery duration). Rationally, however, only patient-related variables should be included in the SIR model. Logistic regression was performed to predict the expected SSI rate in 2 models that included or excluded perioperative factors. Observed and expected SSI rates were used to calculate the SIR for each participating hospital. The difference in SIR between the models was then evaluated. Surveillance data were collected from a total of 1,530 colon surgery patients and 185 SSIs. The C-index in the model with perioperative factors was statistically greater than that in the model including patient-related factors only (0.701 vs 0.621, respectively). Because perioperative factors reflect the operative process or the competence of surgical teams, these factors should not be considered predictive variables. Copyright © 2013 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
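
    The SIR itself is a simple ratio of observed to model-expected infections. A minimal sketch of the calculation with a patient-factors-only logistic model, using synthetic stand-in data (covariates, hospital assignments, and rates are hypothetical, not the study's surveillance data):

    ```python
    # Per-hospital SIR = observed SSIs / sum of model-expected SSI probabilities.
    # All data below are synthetic stand-ins, not the study's surveillance data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1530                                        # colon-surgery patients
    X = rng.normal(size=(n, 2))                     # stand-in patient covariates
    y = rng.binomial(1, 0.12, size=n)               # observed SSIs (~12% rate)
    hospital = rng.integers(0, 10, size=n)          # 10 hypothetical hospitals

    model = LogisticRegression().fit(X, y)          # patient-factors-only model
    expected = model.predict_proba(X)[:, 1]         # expected SSI probabilities

    for h in range(10):
        m = hospital == h
        print(f"hospital {h}: SIR = {y[m].sum() / expected[m].sum():.2f}")
    ```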

  4. Reactor dosimetry integral reaction rate data in LMFBR Benchmark and standard neutron fields: status, accuracy and implications

    International Nuclear Information System (INIS)

    Fabry, A.; Ceulemans, H.; Vandeplas, P.; McElroy, W.N.; Lippincott, E.P.

    1977-01-01

    This paper provides conclusions that may be drawn regarding the consistency and accuracy of dosimetry cross-section files on the basis of integral reaction rate data measured in U.S. and European benchmark and standard neutron fields. In a discussion of the major experimental facilities CFRMF (Idaho Falls), BIGTEN (Los Alamos), ΣΣ (Mol, Bucharest), NISUS (London), TAPIRO (Roma) and FISSION SPECTRA (NBS, Mol, PTB), attention is paid to quantifying the sensitivity of computed integral data relative to the presently evaluated accuracy of the various neutron spectral distributions. The status of available integral data is reviewed and the assigned uncertainties are appraised, including experience gained by interlaboratory comparisons. For all reactions studied and for the various neutron fields, the measured integral data are compared to those computed from the ENDF/B-IV and SAND-II dosimetry cross-section libraries, as well as to other differential data in relevant cases. This comparison, together with the proposed sensitivity and accuracy assessments, is used, whenever possible, to establish how reliably the best cross-sections evaluated on the basis of differential measurements (category I dosimetry reactions) predict integral reaction rates and, for those reactions for which discrepancies are indicated, in which energy range additional differential measurements might help. For the other reactions (category II), the inconsistencies and trends are examined. The need for further integral measurements and interlaboratory comparisons is also considered
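
    The quantity being compared throughout is the integral reaction rate, R = ∫ σ(E) φ(E) dE, evaluated on a discretized energy grid. A toy sketch of the computed-to-experimental (C/E) comparison, with an illustrative 1/v cross section and fission-like spectrum rather than any evaluated library data:

    ```python
    # Computed integral reaction rate R = ∫ σ(E) φ(E) dE on a discretized grid,
    # compared to a pretend measurement. Shapes are toys, not library data.
    import numpy as np

    E = np.logspace(-3, 7, 200)              # energy grid, eV (illustrative)
    sigma = 1.0 / np.sqrt(E)                 # toy 1/v dosimetry cross section
    phi = E * np.exp(-E / 1.3e6)             # toy fission-like spectrum shape
    phi /= np.trapz(phi, E)                  # normalize to unit integral flux

    R_calc = np.trapz(sigma * phi, E)        # spectrum-averaged reaction rate
    R_meas = 1.05 * R_calc                   # stand-in measured value
    print(f"C/E = {R_calc / R_meas:.3f}")    # calculated-to-experimental ratio
    ```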

  5. Implementation and verification of global optimization benchmark problems

    Science.gov (United States)

    Posypkin, Mikhail; Usov, Alexander

    2017-12-01

    The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. From a single description, the library automates the generation of the value of a function and its gradient at a given point, and of interval estimates of the function and its gradient on a given box. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification has shown that literature sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.
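
    The verification idea, re-expressed here as a Python sketch rather than the authors' C++ library, is that an interval evaluation of a benchmark function over a box must enclose every point value sampled inside that box; a violated enclosure signals a mistaken benchmark description:

    ```python
    # Verification sketch: the interval evaluation of a benchmark function over
    # a box must enclose all sampled point values. Minimal interval arithmetic
    # (only + and * needed for the example function); illustrative only.
    import random

    class Interval:
        def __init__(self, lo, hi): self.lo, self.hi = lo, hi
        def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
        def __mul__(self, o):
            ps = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
            return Interval(min(ps), max(ps))

    def f(x, y):                      # works for floats and for Intervals
        return x*x + x*y + y*y

    box = (Interval(-2.0, 2.0), Interval(-1.0, 3.0))
    enclosure = f(*box)
    for _ in range(1000):             # random point samples must fall inside
        x, y = random.uniform(-2.0, 2.0), random.uniform(-1.0, 3.0)
        v = f(x, y)
        assert enclosure.lo <= v <= enclosure.hi, "bounds violated: bad benchmark"
    print(f"enclosure [{enclosure.lo}, {enclosure.hi}] verified on 1000 samples")
    ```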

  6. Implementation and verification of global optimization benchmark problems

    Directory of Open Access Journals (Sweden)

    Posypkin Mikhail

    2017-12-01

    Full Text Available The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. From a single description, the library automates the generation of the value of a function and its gradient at a given point, and of interval estimates of the function and its gradient on a given box. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification has shown that literature sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.

  7. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This data compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings...

  8. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Peiyuan [Univ. of Colorado, Boulder, CO (United States); Brown, Timothy [Univ. of Colorado, Boulder, CO (United States); Fullmer, William D. [Univ. of Colorado, Boulder, CO (United States); Hauser, Thomas [Univ. of Colorado, Boulder, CO (United States); Hrenya, Christine [Univ. of Colorado, Boulder, CO (United States); Grout, Ray [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sitaraman, Hariswaran [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-01-29

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations and a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approx. 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
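
    Weak-scaling efficiency, as used in the analysis above, is the single-core (or single-node) wall time divided by the wall time at N cores for a proportionally larger problem. A sketch with hypothetical timings, not the MFiX measurements:

    ```python
    # Weak-scaling efficiency from measured wall times. In weak scaling the
    # problem size grows with core count, so the ideal time is flat and
    # efficiency is t(1) / t(N). Timings below are illustrative, not MFiX data.
    cores = [1, 8, 64, 512, 1024]
    wall_s = [100.0, 104.0, 112.0, 131.0, 160.0]   # hypothetical seconds/step

    for n, t in zip(cores, wall_s):
        eff = wall_s[0] / t
        print(f"{n:5d} cores: efficiency = {eff:.2f}")
    ```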

  9. A suite of standard post-tagging evaluation metrics can help assess tag retention for field-based fish telemetry research

    Science.gov (United States)

    Gerber, Kayla M.; Mather, Martha E.; Smith, Joseph M.

    2017-01-01

    Telemetry can inform many scientific and research questions if a context exists for integrating individual studies into the larger body of literature. Creating cumulative distributions of post-tagging evaluation metrics would allow individual researchers to relate their telemetry data to other studies. Widespread reporting of standard metrics is a precursor to the calculation of benchmarks for these distributions (e.g., mean, SD, 95% CI). Here we illustrate five types of standard post-tagging evaluation metrics using acoustically tagged Blue Catfish (Ictalurus furcatus) released into a Kansas reservoir. These metrics included: (1) percent of tagged fish detected overall, (2) percent of tagged fish detected daily using abacus plot data, (3) average number of (and percent of available) receiver sites visited, (4) date of last movement between receiver sites (and percent of tagged fish moving during that time period), and (5) number (and percent) of fish that egressed through exit gates. These metrics were calculated for one to three time periods, from early in the study to its end (5 months). Over three-quarters of our tagged fish were detected early (85%) and at the end (85%) of the study. Using abacus plot data, all tagged fish (100%) were detected at least one day and 96% were detected for > 5 days early in the study. On average, tagged Blue Catfish visited 9 (50%) and 13 (72%) of 18 within-reservoir receivers early and at the end of the study, respectively. At the end of the study, 73% of all tagged fish were detected moving between receivers. Creating statistical benchmarks for individual metrics can provide useful reference points. In addition, combining multiple metrics can inform ecology and research design. Consequently, individual researchers and the field of telemetry research can benefit from widespread, detailed, and standard reporting of post-tagging detection metrics.

  10. A suite of standard post-tagging evaluation metrics can help assess tag retention for field-based fish telemetry research

    Science.gov (United States)

    Gerber, Kayla M.; Mather, Martha E.; Smith, Joseph M.

    2017-01-01

    Telemetry can inform many scientific and research questions if a context exists for integrating individual studies into the larger body of literature. Creating cumulative distributions of post-tagging evaluation metrics would allow individual researchers to relate their telemetry data to other studies. Widespread reporting of standard metrics is a precursor to the calculation of benchmarks for these distributions (e.g., mean, SD, 95% CI). Here we illustrate five types of standard post-tagging evaluation metrics using acoustically tagged Blue Catfish (Ictalurus furcatus) released into a Kansas reservoir. These metrics included: (1) percent of tagged fish detected overall, (2) percent of tagged fish detected daily using abacus plot data, (3) average number of (and percent of available) receiver sites visited, (4) date of last movement between receiver sites (and percent of tagged fish moving during that time period), and (5) number (and percent) of fish that egressed through exit gates. These metrics were calculated for one to three time periods, from early in the study to its end (5 months). Over three-quarters of our tagged fish were detected early (85%) and at the end (85%) of the study. Using abacus plot data, all tagged fish (100%) were detected at least one day and 96% were detected for > 5 days early in the study. On average, tagged Blue Catfish visited 9 (50%) and 13 (72%) of 18 within-reservoir receivers early and at the end of the study, respectively. At the end of the study, 73% of all tagged fish were detected moving between receivers. Creating statistical benchmarks for individual metrics can provide useful reference points. In addition, combining multiple metrics can inform ecology and research design. Consequently, individual researchers and the field of telemetry research can benefit from widespread, detailed, and standard reporting of post-tagging detection metrics.
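
    Metrics (1) and (3) reduce to simple set operations over a detection log. A sketch with a hypothetical log layout (one fish id and receiver id per detection event); the data are invented, and only the 18-receiver count comes from the abstract:

    ```python
    # Two of the five post-tagging metrics from a detection log. Hypothetical
    # layout: one (fish_id, receiver_id) tuple per detection event.
    detections = [("F01", "R03"), ("F01", "R07"), ("F02", "R03"), ("F04", "R11")]
    tagged_fish = ["F01", "F02", "F03", "F04"]
    n_receivers = 18                                   # within-reservoir receivers

    detected = {fish for fish, _ in detections}
    pct_detected = 100 * len(detected) / len(tagged_fish)          # metric (1)

    sites = {f: set() for f in tagged_fish}
    for fish, rec in detections:
        sites[fish].add(rec)
    visits = [len(s) for f, s in sites.items() if f in detected]
    avg_sites = sum(visits) / len(visits)                          # metric (3)

    print(f"{pct_detected:.0f}% of tagged fish detected")
    print(f"average receivers visited: {avg_sites:.1f} of {n_receivers}")
    ```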

  11. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    Full Text Available The term benchmarking is encountered in the implementation of total quality management (TQM), in Indonesian termed holistic quality management, because benchmarking is a tool for finding ideas and learning from other libraries. Benchmarking is a process of systematic and continuous measurement: measuring and comparing an organization's business processes to obtain information that can help the organization improve its performance.

  12. Assessing and benchmarking multiphoton microscopes for biologists.

    Science.gov (United States)

    Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F

    2014-01-01

    Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found that there are few standard ways already described in the literature to distinguish between microscopes or to benchmark existing microscopes to measure the overall quality and efficiency of these instruments. Here, we discuss some simple parameters and methods that can either be used within a multiphoton facility or by a prospective purchaser to benchmark performance. This can both assist in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs. © 2014 Elsevier Inc. All rights reserved.

  13. Assessing and benchmarking multiphoton microscopes for biologists

    Science.gov (United States)

    Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F.

    2017-01-01

    Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found that there are few standard ways already described in the literature to distinguish between microscopes or to benchmark existing microscopes to measure the overall quality and efficiency of these instruments. Here, we discuss some simple parameters and methods that can either be used within a multiphoton facility or by a prospective purchaser to benchmark performance. This can both assist in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs. PMID:24974026

  14. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added in order to obtain a unique selection...

  15. The Effects of Cold Environments on Double-Poling Performance and Economy in Male Cross-Country Skiers Wearing a Standard Racing Suit.

    Science.gov (United States)

    Wiggen, Øystein N; Heidelberg, Cecilie T; Waagaard, Silje H; Færevik, Hilde; Sandbakk, Øyvind

    2016-09-01

    To investigate differences in double-poling (DP) endurance performance, economy, and peak oxygen uptake (V̇O2peak) at low (-15°C) and moderate (6°C) ambient temperatures (TA) in cross-country skiers wearing standard racing suits. Thirteen well-trained male cross-country skiers performed a standardized warm-up followed by a 5-min submaximal test (Sub1), a 20-min self-paced performance test, a 2nd 5-min submaximal test (Sub2), and an incremental test to exhaustion while DP on an ergometer at either low or moderate TA, randomized on 2 different days. Skin and rectal temperatures, as well as power output and respiratory variables, were measured continuously during all tests. Skin and rectal temperatures were more reduced at low TA than at moderate TA (both P < .05). There was a 5% (P < .05) lower average power output during the 20-min performance test at low TA than at moderate TA, which primarily occurred in the first 8 min of the test (P < .05). Although DP economy decreased from Sub1 to Sub2 for both TAs (both P < .01), a 3.7% (P < .01) larger decrease in DP economy from Sub1 to Sub2 emerged for the low TA. Across the sample, V̇O2peak was independent of TA. These results demonstrate a lower body temperature and reduced performance for cross-country skiers when DP at low rather than moderate TA while wearing standard cross-country-skiing racing suits. Lower DP performance at the low TA was mainly due to lower power production during the first part of the test and coincided with reduced DP economy.

  16. Can consistent benchmarking within a standardized pain management concept decrease postoperative pain after total hip arthroplasty? A prospective cohort study including 367 patients

    Directory of Open Access Journals (Sweden)

    Benditz A

    2016-12-01

    Full Text Available Achim Benditz,1 Felix Greimel,1 Patrick Auer,2 Florian Zeman,3 Antje Göttermann,4 Joachim Grifka,1 Winfried Meissner,4 Frederik von Kunow1 1Department of Orthopedics, University Medical Center Regensburg, 2Clinic for Anesthesia, Asklepios Klinikum Bad Abbach, Bad Abbach, 3Centre for Clinical Studies, University Medical Center Regensburg, Regensburg, 4Department of Anesthesiology and Intensive Care, Jena University Hospital, Jena, Germany Background: The number of total hip replacement surgeries has steadily increased over recent years. Reduction in postoperative pain increases patient satisfaction and enables better mobilization. Thus, pain management needs to be continuously improved. Problems are often caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. Methods: All patients included in the study had undergone total hip arthroplasty (THA). Outcome parameters were analyzed 24 hours after surgery by means of the questionnaires from the German-wide project “Quality Improvement in Postoperative Pain Management” (QUIPS). A pain nurse interviewed patients and continuously assessed outcome quality parameters. A multidisciplinary team of anesthetists, orthopedic surgeons, and nurses implemented a regular procedure of data analysis and internal benchmarking. The health care team was informed of all results and suggested improvements. Every staff member involved in pain management participated in educational lessons, and a special pain nurse was trained in each ward. Results: From 2014 to 2015, 367 patients were included. The mean maximal pain score 24 hours after surgery was 4.0 (±3.0) on an 11-point numeric rating scale, and patient satisfaction was 9.0 (±1.2). Over time, the maximum pain score decreased (mean 3.0, ±2.0), whereas patient satisfaction

  17. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators...

  18. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...

  19. Benchmarking Complications Associated with Esophagectomy

    NARCIS (Netherlands)

    Low, Donald E.; Kuppusamy, Madhan Kumar; Alderson, Derek; Cecconello, Ivan; Chang, Andrew C.; Darling, Gail; Davies, Andrew; D'journo, Xavier Benoit; Gisbertz, Suzanne S.; Griffin, S. Michael; Hardwick, Richard; Hoelscher, Arnulf; Hofstetter, Wayne; Jobe, Blair; Kitagawa, Yuko; Law, Simon; Mariette, Christophe; Maynard, Nick; Morse, Christopher R.; Nafteux, Philippe; Pera, Manuel; Pramesh, C. S.; Puig, Sonia; Reynolds, John V.; Schroeder, Wolfgang; Smithers, Mark; Wijnhoven, B. P. L.

    2017-01-01

    Utilizing a standardized dataset with specific definitions to prospectively collect international data to provide a benchmark for complications and outcomes associated with esophagectomy. Outcome reporting in oncologic surgery has suffered from the lack of a standardized system for reporting

  20. Enabling benchmarking and improving operational efficiency at nuclear power plants through adoption of a common process model: SNPM (standard nuclear performance model)

    International Nuclear Information System (INIS)

    Pete Karns

    2006-01-01

    To support the projected increase in base-load electricity demand, nuclear operating companies must maintain or improve upon current generation rates, all while their assets continue to age. Certainly, new plants are being and will be built; however, the bulk of the world's nuclear generation comes from plants constructed in the 1970s and 1980s. The nuclear energy industry in the United States has dramatically increased its electricity production over the past decade, from 75.1% in 1994 to 91.9% by 2002 (source: NEI US Nuclear Industry Net Capacity Factors - 1980 to 2003). This increase, coupled with lowered production costs, from $2.43 in 1994 to $1.71 in 2002 (adjusted for inflation; source: NEI US Nuclear Industry Net Production Costs 1980 to 2002), is due in large part to a focus on operational excellence that is driven by an industry effort to develop and share best practices for the purposes of benchmarking and improving overall performance. These best-practice processes, known as the standard nuclear performance model (SNPM), present an opportunity for European nuclear power generators who are looking to improve current production rates. In essence, the SNPM is a model for safe, reliable, and economically competitive nuclear power generation. The SNPM has been a joint effort of several industry bodies: the Nuclear Energy Institute, the Electric Utility Cost Group, and the Institute of Nuclear Power Operations (INPO). The standard nuclear performance model (see figure 1) comprises eight primary processes, supported by forty-four sub-processes and a number of company-specific activities and tasks. The processes were originally envisioned by INPO in 1994 and evolved into the SNPM that was originally launched in 1998. Since that time, communities of practice (CoPs) have emerged via workshops to further improve the processes and their inter-operability. CoP representatives include people from: nuclear power operating companies, policy bodies, industry suppliers and consultants, and

  1. SafeCare: An Innovative Approach for Improving Quality Through Standards, Benchmarking, and Improvement in Low- and Middle- Income Countries.

    Science.gov (United States)

    Johnson, Michael C; Schellekens, Onno; Stewart, Jacqui; van Ostenberg, Paul; de Wit, Tobias Rinke; Spieker, Nicole

    2016-08-01

    In low- and middle-income countries (LMICs), patients often have limited access to high-quality care because of a shortage of facilities and human resources, inefficiency of resource allocation, and limited health insurance. SafeCare was developed to provide innovative health care standards; surveyor training; a grading system for quality of care; a quality improvement process that is broken down into achievable, measurable steps to facilitate incremental improvement; and a private sector-supported health financing model. Three organizations (PharmAccess Foundation, Joint Commission International, and the Council for Health Service Accreditation of Southern Africa) launched SafeCare in 2011 as a formal partnership. Five SafeCare levels of improvement are allocated on the basis of an algorithm that incorporates both the overall score and weighted criteria, so that certain high-risk criteria need to be in place before a facility can move to the next SafeCare certification level. A customized quality improvement plan based on the SafeCare assessment results lists the specific, measurable activities that should be undertaken to address gaps in quality found during the initial assessment and to meet the next-level SafeCare certificate. The standards have been implemented in more than 800 primary and secondary facilities by qualified local surveyors, in partnership with various local public and private partner organizations, in six sub-Saharan African countries (Ghana, Kenya, Nigeria, Namibia, Tanzania, and Zambia). Expanding access to care and improving health care quality in LMICs will require a coordinated effort between institutions and other stakeholders. SafeCare's standards and assessment methodology can help build trust between stakeholders and lay the foundation for country-led quality monitoring systems.
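
    The level-allocation algorithm is described only in outline above. A hedged sketch of that gating logic follows, with invented score cutoffs and criteria flags (SafeCare's actual thresholds and weights are not given in this abstract):

    ```python
    # Sketch of the described level-allocation logic: a facility's level depends
    # on its overall score, but it cannot advance past a level whose high-risk
    # criteria are not all met. Thresholds and flags here are hypothetical.
    def safecare_level(overall_score, high_risk_met_by_level):
        thresholds = [0, 20, 40, 60, 80]            # hypothetical score cutoffs
        level = 0
        for lvl, cutoff in enumerate(thresholds, start=1):
            if overall_score < cutoff:
                break
            if not all(high_risk_met_by_level.get(lvl, [])):
                break                     # gate: a high-risk gap stops advancement
            level = lvl
        return level

    # Facility scores 65 but misses one high-risk criterion at level 3:
    gates = {1: [True], 2: [True, True], 3: [True, False], 4: [True]}
    print(safecare_level(65, gates))      # -> 2
    ```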

  2. Handbook of critical experiments benchmarks

    International Nuclear Information System (INIS)

    Durst, B.M.; Bierman, S.R.; Clayton, E.D.

    1978-03-01

    Data from critical experiments have been collected together for use as benchmarks in evaluating calculational techniques and nuclear data. These benchmarks have been selected from the numerous experiments performed on homogeneous plutonium systems. No attempt has been made to reproduce all of the data that exists. The primary objective in the collection of these data is to present representative experimental data defined in a concise, standardized format that can easily be translated into computer code input

  3. Integrating the Nqueens Algorithm into a Parameterized Benchmark Suite

    Science.gov (United States)

    2016-02-01

    Contents: SHOC; Related Works; Backtrack Branch and Bound; Autotuning Background; Method; Results; Conclusions; Future Work. ...autotuning methodologies to compare optimal performance of various architectures. The work presented here builds upon this by adding the Backtrack Branch and Bound (BBB) algorithm ...only OpenCL. The BBB algorithm is a way to search for a solution to a problem among a variety of potential solutions
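
    For reference, the backtracking search pattern that the report's BBB kernel parallelizes can be written in a few lines. This serial Python sketch counts N-queens solutions and is a plain illustration, not the report's SHOC/OpenCL implementation:

    ```python
    # Plain recursive backtracking for N-queens using bitmasks for the occupied
    # rows and diagonals; each safe placement recurses into the next column.
    def count_nqueens(n, col=0, cols=0, diag1=0, diag2=0):
        if col == n:
            return 1                            # all columns filled: one solution
        total = 0
        for row in range(n):
            r, d1, d2 = 1 << row, 1 << (row + col), 1 << (row - col + n)
            if not (cols & r or diag1 & d1 or diag2 & d2):
                total += count_nqueens(n, col + 1, cols | r, diag1 | d1, diag2 | d2)
        return total

    print(count_nqueens(8))   # 92 solutions on an 8x8 board
    ```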

  4. A Benchmark Suite for Distributed Publish/Subscribe Systems

    National Research Council Canada - National Science Library

    Carzaniga, Antonio; Wolf, Alexander L

    2002-01-01

    .... The service model should provide a value-added service for a wide variety of applications, while the implementation should gracefully scale up to handle an intense traffic of publications and subscriptions...

  5. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 2. Generic material: Codes, standards, criteria. Working material

    International Nuclear Information System (INIS)

    1995-01-01

    The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated subsequent to the request from representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants held in Tokyo in 1991 called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consultants' Meeting which followed resulted in producing a working document for the CRP. It was decided that a benchmark study is the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested on a full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for WWER-1000 and Paks for WWER-440/213 nuclear power plants. This volume of Working material contains reports related to generic material, namely codes, standards and criteria for benchmark analysis

  6. EMU Suit Performance Simulation

    Science.gov (United States)

    Cowley, Matthew S.; Benson, Elizabeth; Harvill, Lauren; Rajulu, Sudhakar

    2014-01-01

    Introduction: Designing a planetary suit is very complex and often requires difficult trade-offs between performance, cost, mass, and system complexity. To verify that new suit designs meet requirements, full prototypes must be built and tested with human subjects. However, numerous design iterations will occur before the hardware meets those requirements. Traditional draw-prototype-test paradigms for research and development are prohibitively expensive with today's shrinking Government budgets. Personnel at NASA are developing modern simulation techniques that focus on a human-centric design paradigm. These new techniques make use of virtual prototype simulations and fully adjustable physical prototypes of suit hardware. This is extremely advantageous and enables comprehensive design down-selections to be made early in the design process. Objectives: The primary objective was to test modern simulation techniques for evaluating the human performance component of two EMU suit concepts, pivoted and planar style hard upper torso (HUT). Methods: This project simulated variations in EVA suit shoulder joint design and subject anthropometry and then measured the differences in shoulder mobility caused by the modifications. These estimations were compared to human-in-the-loop test data gathered during past suited testing using four subjects (two large males, two small females). Results: Results demonstrated that EVA suit modeling and simulation are feasible design tools for evaluating and optimizing suit design based on simulated performance. The suit simulation model was found to be advantageous in its ability to visually represent complex motions and volumetric reach zones in three dimensions, giving designers a faster and deeper comprehension of suit component performance vs. human performance. Suit models were able to discern differing movement capabilities between EMU HUT configurations, generic suit fit concerns, and specific suit fit concerns for crewmembers based

  7. A proposal for benchmarking learning objects

    OpenAIRE

    Rita Falcão; Alfredo Soeiro

    2007-01-01

    This article proposes a methodology for benchmarking learning objects. It aims to deal with two problems related to e-learning: the validation of learning using this method, and the return on investment of the process of development and use: effectiveness and efficiency. This paper describes a proposal for evaluating learning objects (LOs) through benchmarking, based on the Learning Object Metadata Standard and on an adaptation of the main tools of the BENVIC project. The Benchmarking of Learning O...

  8. Operational benchmarking of Japanese and Danish hospitals

    DEFF Research Database (Denmark)

    Traberg, Andreas; Itoh, Kenji; Jacobsen, Peter

    2010-01-01

    This benchmarking model is designed as an integration of three organizational dimensions suited for the healthcare sector. The model incorporates posterior operational indicators and evaluates performance upon aggregation. The model is tested upon seven cases from Japan and Denmark. Japanese...

  9. Comparison of Standard Light Water Reactor Cross-Section Libraries using the United States Nuclear Regulatory Commission Boiling Water Reactor Benchmark Problem

    Directory of Open Access Journals (Sweden)

    Kulesza Joel A.

    2016-01-01

    Full Text Available This paper describes a comparison of contemporary and historical light water reactor shielding and pressure vessel dosimetry cross-section libraries for a boiling water reactor calculational benchmark problem. The calculational benchmark problem was developed at Brookhaven National Laboratory at the request of the U.S. Nuclear Regulatory Commission. The benchmark problem was originally evaluated by Brookhaven National Laboratory using the Oak Ridge National Laboratory discrete ordinates code DORT and the BUGLE-93 cross-section library. In this paper, the Westinghouse RAPTOR-M3G three-dimensional discrete ordinates code was used. A variety of cross-section libraries were used with RAPTOR-M3G, including the BUGLE-93, BUGLE-96, and BUGLE-B7 cross-section libraries developed at Oak Ridge National Laboratory and ALPAN-VII.0 developed at Westinghouse. In comparing the fast reaction rates calculated with the four aforementioned cross-section libraries in the pressure vessel capsule, for six dosimetry reaction rates, a maximum relative difference of 8% was observed. As such, it is concluded that the results calculated by RAPTOR-M3G are consistent with the benchmark and, further, that the different vintage BUGLE cross-section libraries investigated are largely self-consistent.

  10. Instant Spring Tool Suite

    CERN Document Server

    Chiang, Geoff

    2013-01-01

    Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. A tutorial guide that walks you through how to use the features of Spring Tool Suite using well defined sections for the different parts of Spring.Instant Spring Tool Suite is for novice to intermediate Java developers looking to get a head-start in enterprise application development using Spring Tool Suite and the Spring framework. If you are looking for a guide for effective application development using Spring Tool Suite, then this book is for you.

  11. Pharmacy settles suit.

    Science.gov (United States)

    1998-10-02

    A suit was filed by an HIV-positive man against a pharmacy that inadvertently disclosed his HIV status to his ex-wife and children. His ex-wife tried to use the information in a custody battle for their two children. The suit against the pharmacy was settled, but the terms of the settlement remain confidential.

  12. Computational shielding benchmarks

    International Nuclear Information System (INIS)

    The American Nuclear Society Standards Committee 6.2.1 is engaged in the documentation of radiation transport problems and their solutions. The primary objective of this effort is to test computational methods used within the international shielding community. Dissemination of benchmarks will, it is hoped, accomplish several goals: (1) Focus attention on problems whose solutions represent state-of-the-art methodology for representative transport problems of generic interest; (2) Specification of standard problems makes comparisons of alternate computational methods, including use of approximate vs. ''exact'' computer codes, more meaningful; (3) Comparison with experimental data may suggest improvements in computer codes and/or associated data sets; (4) Test reliability of new methods as they are introduced for the solution of specific problems; (5) Verify user ability to apply a given computational method; and (6) Verify status of a computer program being converted for use on a different computer (e.g., CDC vs IBM) or facility

  13. Analytical three-dimensional neutron transport benchmarks for verification of nuclear engineering codes. Final report

    International Nuclear Information System (INIS)

    Ganapol, B.D.; Kornreich, D.E.

    1997-01-01

    Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation as modified in the second year renewal application includes the following three primary tasks. Task 1 on two dimensional neutron transport is divided into (a) single medium searchlight problem (SLP) and (b) two-adjacent half-space SLP. Task 2 on three-dimensional neutron transport covers (a) point source in arbitrary geometry, (b) single medium SLP, and (c) two-adjacent half-space SLP. Task 3 on code verification, includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade

  14. Analytical three-dimensional neutron transport benchmarks for verification of nuclear engineering codes. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Ganapol, B.D.; Kornreich, D.E. [Univ. of Arizona, Tucson, AZ (United States). Dept. of Nuclear Engineering

    1997-07-01

    Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation as modified in the second year renewal application includes the following three primary tasks. Task 1 on two dimensional neutron transport is divided into (a) single medium searchlight problem (SLP) and (b) two-adjacent half-space SLP. Task 2 on three-dimensional neutron transport covers (a) point source in arbitrary geometry, (b) single medium SLP, and (c) two-adjacent half-space SLP. Task 3 on code verification, includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.

  15. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  16. A benchmarking study

    Directory of Open Access Journals (Sweden)

    H. Groessing

    2015-02-01

    Full Text Available A benchmark study for permeability measurement is presented. In the past, studies of other research groups which focused on the reproducibility of 1D-permeability measurements showed high standard deviations of the gained permeability values (25%), even though a defined test rig with required specifications was used. Within this study, the reproducibility of capacitive in-plane permeability testing system measurements was benchmarked by comparing results of two research sites using this technology. The reproducibility was compared by using a glass fibre woven textile and a carbon fibre non-crimp fabric (NCF). These two material types were taken into consideration due to the different electrical properties of glass and carbon with respect to the dielectric capacitive sensors of the permeability measurement systems. In order to determine the unsaturated permeability characteristics as a function of fibre volume content, the measurements were executed at three different fibre volume contents, including five repetitions. It was found that the stability and reproducibility of the presented in-plane permeability measurement system is very good in the case of the glass fibre woven textiles. This is true for the comparison of the repetition measurements as well as for the comparison between the two different permeameters. These positive results were confirmed by a comparison to permeability values of the same textile gained with an older-generation permeameter applying the same measurement technology. Also, it was shown that a correct determination of the grammage and the material density are crucial for correct correlation of measured permeability values and fibre volume contents.
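
    For 1D unsaturated measurements of the kind benchmarked here, permeability is commonly recovered from flow-front tracking under constant injection pressure via Darcy's law, x_f² = 2KΔPt/(φμ). A sketch of that reduction step with invented readings (the benchmarked systems are capacitive, but the data reduction is the same in spirit):

    ```python
    # 1D unsaturated permeability from flow-front tracking at constant injection
    # pressure (Darcy): x_f^2 = 2 K dP t / (phi mu) => K = slope * phi * mu / (2 dP).
    # Numbers below are illustrative, not data from the benchmarked permeameters.
    import numpy as np

    phi = 0.5                     # porosity (1 - fibre volume content)
    mu = 0.1                      # test-fluid viscosity, Pa*s
    dP = 1.0e5                    # injection pressure difference, Pa
    t = np.array([5.0, 10.0, 20.0, 40.0])          # time, s
    x_f = np.array([0.071, 0.100, 0.141, 0.200])   # flow-front position, m

    # Fit x_f^2 vs t through the origin; slope = 2 K dP / (phi mu)
    slope = np.sum(x_f**2 * t) / np.sum(t**2)
    K = slope * phi * mu / (2 * dP)
    print(f"K = {K:.2e} m^2")
    ```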

  17. The ZPIC educational code suite

    Science.gov (United States)

    Calado, R.; Pardal, M.; Ninhos, P.; Helm, A.; Mori, W. B.; Decyk, V. K.; Vieira, J.; Silva, L. O.; Fonseca, R. A.

    2017-10-01

    Particle-in-Cell (PIC) codes are used in almost all areas of plasma physics, such as fusion energy research, plasma accelerators, space physics, ion propulsion, and plasma processing, among many others. In this work, we present the ZPIC educational code suite, a new initiative to foster training in plasma physics using computer simulations. Leveraging our expertise and experience from the development and use of the OSIRIS PIC code, we have developed a suite of 1D/2D fully relativistic electromagnetic PIC codes, as well as a 1D electrostatic code. These codes are self-contained and require only a standard laptop/desktop computer with a C compiler to be run. The output files are written in a new file format called ZDF that can be easily read using the supplied routines in a number of languages, such as Python and IDL. The code suite also includes a number of example problems that can be used to illustrate several textbook and advanced plasma mechanisms, including instructions for parameter space exploration. We also invite contributions to this repository of test problems that will be made freely available to the community provided the input files comply with the format defined by the ZPIC team. The code suite is freely available and hosted on GitHub at https://github.com/zambzamb/zpic. Work partially supported by PICKSC.
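
    As a flavor of what such educational kernels implement, here is a minimal 1D electrostatic PIC loop (nearest-grid-point deposit, FFT field solve, two-stream setup) in Python. This is a generic illustration under normalized units, not ZPIC's API or file format:

    ```python
    # Minimal 1D electrostatic PIC (two-stream): NGP charge deposit, FFT Poisson
    # solve, periodic boundaries. Generic illustration only; not ZPIC's API.
    import numpy as np

    ng, npart, L, dt, steps = 64, 4096, 2 * np.pi, 0.1, 100
    dx, w = L / ng, L / npart                 # cell size; weight for unit density
    rng = np.random.default_rng(1)
    x = rng.uniform(0, L, npart)              # electron positions
    v = np.where(np.arange(npart) % 2, 1.0, -1.0)   # two counter-streaming beams
    v += 0.01 * np.sin(x)                     # seed the instability

    k = 2 * np.pi * np.fft.rfftfreq(ng, d=dx) # wavenumbers for the field solve
    for _ in range(steps):
        cell = (x / dx).astype(int) % ng
        ne = np.bincount(cell, minlength=ng) * w / dx   # electron density
        rho = 1.0 - ne                        # fixed ion background minus electrons
        rho_k = np.fft.rfft(rho)
        E_k = np.zeros_like(rho_k)
        E_k[1:] = -1j * rho_k[1:] / k[1:]     # ik E_k = rho_k, drop the k = 0 mode
        E = np.fft.irfft(E_k, n=ng)
        v += -E[cell] * dt                    # electron push, q/m = -1
        x = (x + v * dt) % L                  # advance and wrap positions

    print(f"field energy ~ {0.5 * np.sum(E**2) * dx:.3e}")
    ```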

  18. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 2B. General material: codes, standards, criteria. Working material

    International Nuclear Information System (INIS)

    1999-01-01

    In August 1991, following the SMiRT-11 Conference in Tokyo, a Technical Committee Meeting was held on the 'Seismic safety issues relating to existing NPPs'. The Proceedings of this TCM were subsequently compiled in an IAEA Working Material. One of the main recommendations of this TCM called for the harmonization of criteria and methods used in Member States in seismic reassessment and upgrading of existing NPPs. Twenty-four institutions from thirteen countries participated in the CRP named 'Benchmark study for the seismic analysis and testing of WWER type NPPs'. Two types of WWER reactors (WWER-1000 and WWER-440/213) were selected for benchmarking, with Kozloduy NPP Units 5/6 and Paks NPP serving as their respective prototypes. Consistent with the recommendations of the TCM and the working paper prepared by the subsequent Consultants' Meeting, the focal activity of the CRP was the benchmarking exercises. A similar methodology was followed both for Paks NPP and Kozloduy NPP Unit 5. Firstly, the NPP (mainly the reactor building) was tested using blast loading generated by a series of explosions from buried TNT charges. Records from this test were obtained at several free field locations (both downhole and surface), the foundation mat, various elevations of structures, as well as some tanks and the stack. Then the benchmark participants were provided with structural drawings, soil data and the free field record of the blast experiment. Their task was to make a blind prediction of the response at preselected locations. The analytical results from these participants were then compared with the results from the test. Although the benchmarking exercises constituted the focus of the CRP, there were many other interesting problems related to the seismic safety of WWER type NPPs which were addressed by the participants. These involved generic studies, i.e. codes and standards used in original WWER designs and their comparison with current international practice; seismic analysis

  19. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web technologies: first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  20. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    Full Text Available In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution's competitive position and learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking application in HEIs worldwide. The study indicates the premises for using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled the development of a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects, relating them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in a higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature, and the author's experience of active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  1. RAJA Performance Suite

    Energy Technology Data Exchange (ETDEWEB)

    2017-09-01

    The RAJA Performance Suite is designed to evaluate performance of the RAJA performance portability library on a wide variety of important high performance computing (HPC) algorithmic kernels. These kernels assess compiler optimizations and various parallel programming model backends accessible through RAJA, such as OpenMP, CUDA, etc. The initial version of the suite contains 25 computational kernels, each of which appears in 6 variants: Baseline Sequential, RAJA Sequential, Baseline OpenMP, RAJA OpenMP, Baseline CUDA, RAJA CUDA. All variants of each kernel perform essentially the same mathematical operations and the loop body code for each kernel is identical across all variants. There are a few kernels, such as those that contain reduction operations, that require CUDA-specific coding for their CUDA variants. Actual computer instructions executed and how they run in parallel differ depending on the parallel programming model backend used and which optimizations are performed by the compiler used to build the Performance Suite executable. The Suite will be used primarily by RAJA developers to perform regular assessments of RAJA performance across a range of hardware platforms and compilers as RAJA features are being developed. It will also be used by LLNL hardware and software vendor partners for defining requirements for future computing platform procurements and acceptance testing. In particular, the RAJA Performance Suite will be used for compiler acceptance testing of the upcoming CORAL/Sierra machine (initial LLNL delivery expected in late-2017/early 2018) and the CORAL-2 procurement. The Suite will also be used to generate concise source code reproducers of compiler and runtime issues we uncover so that we may provide them to relevant vendors to be fixed.

  2. SSI and structural benchmarks

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Miller, C.A.; Costantino, C.J.; Graves, H.

    1987-01-01

    This paper presents the latest results of the ongoing program entitled, Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focussed on three tasks, namely, (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it has been concluded that the pore water can influence significantly the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was further extended to include cases associated with variable water table depths. In this paper, results related to cut-off depths beyond which the pore water effects can be ignored in seismic calculations, are addressed. Comprehensive numerical data are given for soil configurations typical to those encountered in nuclear plant sites. These data were generated by using a modified version of the SLAM code which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054) which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on the structural benchmarks are described

  3. MRI of the temporo-mandibular joint: which sequence is best suited to assess the cortical bone of the mandibular condyle? A cadaveric study using micro-CT as the standard of reference

    Energy Technology Data Exchange (ETDEWEB)

    Karlo, Christoph A. [University Hospital Zurich, Department of Diagnostic and Interventional Radiology, Zurich (Switzerland); University Children's Hospital Zurich, Department of Diagnostic Imaging, Zurich (Switzerland); Patcas, Raphael; Signorelli, Luca; Mueller, Lukas [University of Zurich, Clinic for Orthodontics and Pediatric Dentistry, Center of Dental Medicine, Zurich (Switzerland); Kau, Thomas; Watzal, Helmut; Kellenberger, Christian J. [University Children's Hospital Zurich, Department of Diagnostic Imaging, Zurich (Switzerland); Ullrich, Oliver [University of Zurich, Institute of Anatomy, Faculty of Medicine, Zurich (Switzerland); Luder, Hans-Ulrich [University of Zurich, Section of Orofacial Structures and Development, Center of Dental Medicine, Zurich (Switzerland)

    2012-07-15

    To determine the best suited sagittal MRI sequence out of a standard temporo-mandibular joint (TMJ) imaging protocol for the assessment of the cortical bone of the mandibular condyles of cadaveric specimens using micro-CT as the standard of reference. Sixteen TMJs in 8 human cadaveric heads (mean age, 81 years) were examined by MRI. Upon all sagittal sequences, two observers measured the cortical bone thickness (CBT) of the anterior, superior and posterior portions of the mandibular condyles (i.e. objective analysis), and assessed for the presence of cortical bone thinning, erosions or surface irregularities as well as subcortical bone cysts and anterior osteophytes (i.e. subjective analysis). Micro-CT of the condyles was performed to serve as the standard of reference for statistical analysis. Inter-observer agreements for objective (r = 0.83-0.99, P < 0.01) and subjective (κ = 0.67-0.88) analyses were very good. Mean CBT measurements were most accurate, and cortical bone thinning, erosions, surface irregularities and subcortical bone cysts were best depicted on the 3D fast spoiled gradient echo recalled sequence (3D FSPGR). The most reliable MRI sequence to assess the cortical bone of the mandibular condyles on sagittal imaging planes is the 3D FSPGR sequence. (orig.)

  4. MRI of the temporo-mandibular joint: which sequence is best suited to assess the cortical bone of the mandibular condyle? A cadaveric study using micro-CT as the standard of reference

    International Nuclear Information System (INIS)

    Karlo, Christoph A.; Patcas, Raphael; Signorelli, Luca; Mueller, Lukas; Kau, Thomas; Watzal, Helmut; Kellenberger, Christian J.; Ullrich, Oliver; Luder, Hans-Ulrich

    2012-01-01

    To determine the best suited sagittal MRI sequence out of a standard temporo-mandibular joint (TMJ) imaging protocol for the assessment of the cortical bone of the mandibular condyles of cadaveric specimens using micro-CT as the standard of reference. Sixteen TMJs in 8 human cadaveric heads (mean age, 81 years) were examined by MRI. Upon all sagittal sequences, two observers measured the cortical bone thickness (CBT) of the anterior, superior and posterior portions of the mandibular condyles (i.e. objective analysis), and assessed for the presence of cortical bone thinning, erosions or surface irregularities as well as subcortical bone cysts and anterior osteophytes (i.e. subjective analysis). Micro-CT of the condyles was performed to serve as the standard of reference for statistical analysis. Inter-observer agreements for objective (r = 0.83-0.99, P < 0.01) and subjective (κ = 0.67-0.88) analyses were very good. Mean CBT measurements were most accurate, and cortical bone thinning, erosions, surface irregularities and subcortical bone cysts were best depicted on the 3D fast spoiled gradient echo recalled sequence (3D FSPGR). The most reliable MRI sequence to assess the cortical bone of the mandibular condyles on sagittal imaging planes is the 3D FSPGR sequence. (orig.)

  5. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.

    1991-01-01

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems

  6. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as important...

  7. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  8. Benchmark risk analysis models

    NARCIS (Netherlands)

    Ale BJM; Golbach GAM; Goos D; Ham K; Janssen LAM; Shield SR; LSO

    2002-01-01

    A so-called benchmark exercise was initiated in which the results of five sets of tools available in the Netherlands would be compared. In the benchmark exercise a quantified risk analysis was performed on a -hypothetical- non-existing hazardous establishment located on a randomly chosen location in

  9. The Drill Down Benchmark

    NARCIS (Netherlands)

    P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel

    1998-01-01

    Data Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark - defined here - provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications. It

  10. Internet Based Benchmarking

    OpenAIRE

    Bogetoft, Peter; Nielsen, Kurt

    2002-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as non-parametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore alternative improvement strategies. An implementation of both a parametric and a non parametric model are presented.

  11. VVER-1000 MOX core computational benchmark

    International Nuclear Information System (INIS)

    2006-01-01

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities includes benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  12. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  13. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  14. Benchmarking and Evaluating Unified Memory for OpenMP GPU Offloading

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Alok [Stony Brook Univ., Stony Brook, NY (United States); Li, Lingda [Brookhaven National Lab. (BNL), Upton, NY (United States); Kong, Martin [Brookhaven National Lab. (BNL), Upton, NY (United States); Finkel, Hal [Argonne National Lab. (ANL), Argonne, IL (United States); Chapman, Barbara [Stony Brook Univ., Stony Brook, NY (United States); Brookhaven National Lab. (BNL), Upton, NY (United States)

    2017-01-01

    Here, the latest OpenMP standard offers automatic device offloading capabilities which facilitate GPU programming. Despite this, there remain many challenges. One of these is the unified memory feature introduced in recent GPUs. GPUs in current and future HPC systems have enhanced support for a unified memory space. In such systems, the CPU and GPU can access each other's memory transparently; that is, the data movement is managed automatically by the underlying system software and hardware. Memory oversubscription is also possible in these systems. However, there is a significant lack of knowledge about how this mechanism will perform, and how programmers should use it. We have modified several benchmark codes in the Rodinia benchmark suite to study the behavior of OpenMP accelerator extensions and have used them to explore the impact of unified memory in an OpenMP context. We moreover modified the open source LLVM compiler to allow OpenMP programs to exploit unified memory. The results of our evaluation reveal that, while the performance of unified memory is comparable with that of normal GPU offloading for benchmarks with little data reuse, it suffers from significant overhead when GPU memory is oversubscribed for benchmarks with a large amount of data reuse. Based on these results, we provide several guidelines for programmers to achieve better performance with unified memory.
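    As an illustration of the programming model under study, the following minimal C sketch offloads a saxpy-style loop with OpenMP target directives. The unified_shared_memory requirement shown here was only standardized later, in OpenMP 5.0; the authors prototyped equivalent behavior by modifying LLVM. The problem size and kernel are illustrative and are not taken from the Rodinia codes.

        #include <stdio.h>
        #include <stdlib.h>

        /* OpenMP 5.0: request unified shared memory, so host allocations are
           directly dereferenceable inside target regions without map clauses.
           (Illustrative; the paper predates 5.0 and patched LLVM instead.) */
        #pragma omp requires unified_shared_memory

        int main(void)
        {
            const long n = 1L << 24;              /* illustrative problem size */
            float *x = malloc(n * sizeof *x);
            float *y = malloc(n * sizeof *y);
            for (long i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

            /* Offload a saxpy loop; with unified memory the runtime migrates
               pages on demand instead of copying whole buffers up front. */
            #pragma omp target teams distribute parallel for
            for (long i = 0; i < n; i++)
                y[i] = 2.0f * x[i] + y[i];

            printf("y[0] = %f\n", y[0]);
            free(x);
            free(y);
            return 0;
        }

    With this style of code, whether pages migrate on demand or whole buffers are copied is decided by the system software, which is exactly the behavior the study measures under varying degrees of data reuse.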

  15. Learning DHTMLX suite UI

    CERN Document Server

    Geske, Eli

    2013-01-01

    A fast-paced, example-based guide to learning DHTMLX.""Learning DHTMLX Suite UI"" is for web designers who have a basic knowledge of JavaScript and who are looking for powerful tools that will give them an extra edge in their own application development. This book is also useful for experienced developers who wish to get started with DHTMLX without going through the trouble of learning its quirks through trial and error. Readers are expected to have some knowledge of JavaScript, HTML, Document Object Model, and the ability to install a local web server.

  16. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.

    1980-02-01

    Shielding benchmark problems prepared by Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by Shielding Laboratory in Japan Atomic Energy Research Institute. Fourteen shielding benchmark problems are presented newly in addition to twenty-one problems proposed already, for evaluating the calculational algorithm and accuracy of computer codes based on discrete ordinates method and Monte Carlo method and for evaluating the nuclear data used in codes. The present benchmark problems are principally for investigating the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  17. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, NOAEL-based toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red

  18. Present status of International Criticality Safety Benchmark Evaluation Project (ICSBEP)

    International Nuclear Information System (INIS)

    Miyoshi, Yoshinori

    2000-01-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was designed to identify and evaluate a comprehensive set of critical experiment benchmark data. Compilation of the data into a standardized format is made by reviewing original and subsequently revised documentation for calculating each experiment with standard criticality safety codes. Five handbooks of evaluated criticality safety benchmark experiments have been published since 1995. (author)

  19. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  20. On Big Data Benchmarking

    OpenAIRE

    Han, Rui; Lu, Xiaoyi

    2014-01-01

    Big data systems address the challenges of capturing, storing, managing, analyzing, and visualizing big data. Within this context, developing benchmarks to evaluate and compare big data systems has become an active topic for both research and industry communities. To date, most of the state-of-the-art big data benchmarks are designed for specific types of systems. Based on our experience, however, we argue that considering the complexity, diversity, and rapid evolution of big data systems, fo...

  1. [Signal Processing Suite Design

    Science.gov (United States)

    Sahr, John D.; Mir, Hasan; Morabito, Andrew; Grossman, Matthew

    2003-01-01

    Our role in this project was to participate in the design of the signal processing suite to analyze plasma density measurements on board a small constellation (3 or 4 satellites) in Low Earth Orbit. As we are new to spacecraft experiments, one of the challenges was simply to gain an understanding of the quantity of data which would flow from the satellites, and possibly to interact with the design teams in generating optimal sampling patterns. For example, as the fleet of satellites was intended to fly through the same volume of space (displaced slightly in time and space), the bulk plasma structure should be common among the spacecraft. Therefore, an optimal, limited bandwidth data downlink would take advantage of this commonality. Also, motivated by techniques in ionospheric radar, we hoped to investigate the possibility of employing aperiodic sampling in order to gain access to a wider spatial spectrum without suffering aliasing in k-space.

  2. Clementine sensor suite

    Energy Technology Data Exchange (ETDEWEB)

    Ledebuhr, A.G. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    LLNL designed and built the suite of six miniaturized light-weight space-qualified sensors utilized in the Clementine mission. A major goal of the Clementine program was to demonstrate technologies originally developed for Ballistic Missile Defense Organization Programs. These sensors were modified to gather data from the moon. This overview presents each of these sensors and some preliminary on-orbit performance estimates. The basic subsystems of these sensors include optical baffles to reject off-axis stray light, light-weight ruggedized optical systems, filter wheel assemblies, radiation tolerant focal plane arrays, radiation hardened control and readout electronics and low mass and power mechanical cryogenic coolers for the infrared sensors. Descriptions of each sensor type are given along with design specifications, photographs and on-orbit data collected.

  3. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth. Throughout

  4. A small evaluation suite for Ada compilers

    Science.gov (United States)

    Wilke, Randy; Roy, Daniel M.

    1986-01-01

    After completing a small Ada pilot project (OCC simulator) for the Multi Satellite Operations Control Center (MSOCC) at Goddard last year, the use of Ada to develop OCCs was recommended. To help MSOCC transition toward Ada, a suite of about 100 evaluation programs was developed which can be used to assess Ada compilers. These programs compare the overall quality of the compilation system, compare the relative efficiencies of the compilers and the environments in which they work, and compare the size and execution speed of generated machine code. Another goal of the benchmark software was to provide MSOCC system developers with rough timing estimates for the purpose of predicting performance of future systems written in Ada.

  5. PNNL Information Technology Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  6. Benchmarking infrastructure for mutation text mining.

    Science.gov (United States)

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
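    For orientation, the performance metrics computed over the RDF annotations reduce to the familiar precision, recall and F-measure definitions. The C sketch below evaluates them from hypothetical true-positive, false-positive and false-negative counts; in the infrastructure itself these are derived with SPARQL queries over the annotation graphs, not with C code.

        #include <stdio.h>

        /* Hypothetical counts; the infrastructure described above derives such
           metrics with SPARQL queries over RDF annotations, not with C code. */
        struct counts { long tp, fp, fn; };

        static double precision(struct counts c) { return (double)c.tp / (c.tp + c.fp); }
        static double recall(struct counts c)    { return (double)c.tp / (c.tp + c.fn); }

        int main(void)
        {
            struct counts c = { 85, 10, 20 };     /* invented example numbers */
            double p = precision(c), r = recall(c);
            double f1 = 2.0 * p * r / (p + r);    /* harmonic mean of P and R */
            printf("P = %.3f  R = %.3f  F1 = %.3f\n", p, r, f1);
            return 0;
        }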

  7. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600

  8. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
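    As a concrete, hypothetical illustration of such a metric, the C sketch below scores model output against a benchmark data set with a root-mean-square error and maps it to a bounded skill score. The exponential scoring form and the 0.5 scale are our own assumptions for illustration; the paper deliberately leaves metric design open as a research challenge.

        #include <math.h>
        #include <stdio.h>

        /* Root-mean-square error between model output and benchmark data. */
        static double rmse(const double *model, const double *obs, int n)
        {
            double ss = 0.0;
            for (int i = 0; i < n; i++) {
                double d = model[i] - obs[i];
                ss += d * d;
            }
            return sqrt(ss / n);
        }

        int main(void)
        {
            double model[] = { 2.1, 2.9, 4.2, 5.1 };   /* hypothetical fluxes */
            double obs[]   = { 2.0, 3.0, 4.0, 5.0 };   /* hypothetical data   */
            double e = rmse(model, obs, 4);
            /* Map the mismatch to a bounded score in (0,1]; the exponential
               form and the 0.5 scale are our illustrative assumptions. */
            double score = exp(-e / 0.5);
            printf("RMSE = %.3f, score = %.3f\n", e, score);
            return 0;
        }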

  9. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...

  10. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from the benchmark can be substantial, are persistent over time, and hinder the survival of firms.

  11. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. .It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l

  12. HPCG Benchmark Technical Specification

    Energy Technology Data Exchange (ETDEWEB)

    Heroux, Michael Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States)

    2013-10-01

    The High Performance Conjugate Gradient (HPCG) benchmark [cite SNL, UTK reports] is a tool for ranking computer systems based on a simple additive Schwarz, symmetric Gauss-Seidel preconditioned conjugate gradient solver. HPCG is similar to the High Performance Linpack (HPL), or Top 500, benchmark [1] in its purpose, but HPCG is intended to better represent how today’s applications perform. In this paper we describe the technical details of HPCG: how it is designed and implemented, what code transformations are permitted and how to interpret and report results.
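    To make the algorithm underlying the benchmark concrete, the following C sketch runs an unpreconditioned conjugate gradient iteration on a toy dense symmetric positive definite system. This is only a sketch under simplifying assumptions: HPCG itself applies a symmetric Gauss-Seidel/additive Schwarz preconditioner to a sparse operator arising from a 3D problem, and measures the whole solver at scale.

        #include <math.h>
        #include <stdio.h>

        #define N 4   /* toy size; HPCG runs a sparse 3D problem at scale */

        /* Dense matrix-vector product standing in for HPCG's sparse operator. */
        static void matvec(const double A[N][N], const double *x, double *y)
        {
            for (int i = 0; i < N; i++) {
                y[i] = 0.0;
                for (int j = 0; j < N; j++)
                    y[i] += A[i][j] * x[j];
            }
        }

        static double dot(const double *a, const double *b)
        {
            double s = 0.0;
            for (int i = 0; i < N; i++)
                s += a[i] * b[i];
            return s;
        }

        int main(void)
        {
            /* Small symmetric positive definite system. */
            double A[N][N] = {{4,1,0,0},{1,4,1,0},{0,1,4,1},{0,0,1,4}};
            double b[N] = {1,2,3,4}, x[N] = {0}, r[N], p[N], Ap[N];

            matvec(A, x, Ap);                       /* r = b - A x, p = r */
            for (int i = 0; i < N; i++) {
                r[i] = b[i] - Ap[i];
                p[i] = r[i];
            }
            double rr = dot(r, r);

            for (int k = 0; k < 100 && sqrt(rr) > 1e-12; k++) {
                matvec(A, p, Ap);
                double alpha = rr / dot(p, Ap);     /* step length */
                for (int i = 0; i < N; i++) {
                    x[i] += alpha * p[i];
                    r[i] -= alpha * Ap[i];
                }
                double rr_new = dot(r, r);
                for (int i = 0; i < N; i++)         /* new search direction */
                    p[i] = r[i] + (rr_new / rr) * p[i];
                rr = rr_new;
            }
            for (int i = 0; i < N; i++)
                printf("x[%d] = %f\n", i, x[i]);
            return 0;
        }

    The point of ranking systems with this kernel rather than HPL is that its performance is dominated by sparse matrix-vector products and reductions, i.e. memory bandwidth and communication, which better reflects many real applications.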

  13. Apparatus for storing protective suits

    International Nuclear Information System (INIS)

    Englemann, H.J.; Koller, J.; Schrader, H.R.; Schade, G.; Pedrerol, J.

    1975-01-01

    Arrangements are described for storing one or more protective suits when contaminated on the outside. In order to permit a person wearing a contaminated suit to leave a contaminated area safely, and without contaminating the environment, it has hitherto been the practice for the suit to be passed through a 'lock' and cleansed under decontaminating showers whilst still being worn. This procedure is time-wasting and not always completely effective, and it may be necessary to provide a second suit for use whilst the first suit is being decontaminated. Repeated decontamination may also result in undue wear and tear. The arrangements described provide a 'lock' chamber in which a contaminated suit may be stowed away without its interior becoming contaminated, thus allowing repeated use by persons donning and shedding it. (U.K.)

  14. Why and How to Benchmark XML Databases

    NARCIS (Netherlands)

    A.R. Schmidt; F. Waas; M.L. Kersten (Martin); D. Florescu; M.J. Carey; I. Manolescu; R. Busse

    2001-01-01

    Benchmarks belong to the very standard repertory of tools deployed in database development. Assessing the capabilities of a system, analyzing actual and potential bottlenecks, and, naturally, comparing the pros and cons of different systems architectures have become indispensable tasks

  15. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    takes into account that the available positions of the moving objects are inaccurate, an aspect largely ignored in previous indexing research. The concepts of data and query enlargement are introduced for addressing inaccuracy. As proof of concepts of the benchmark, the paper covers the application...

  16. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means the revealed performance, i.e. how well the firm performs in the actual market environment given the basic characteristics of the firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality and work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for managers to continuously improve their firm's efficiency and effectiveness, and to know the success factors and competitiveness determinants, determines what performance measures are most critical to their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm level performance are critical interdependent activities. Firm level variables, used to infer performance, are often interdependent due to operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm level performance using financial ratios and other types of profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.

  17. Benchmarking of workplace performance

    NARCIS (Netherlands)

    van der Voordt, Theo; Jensen, Per Anker

    2017-01-01

    This paper aims to present a process model of value adding corporate real estate and facilities management and to discuss which indicators can be used to measure and benchmark workplace performance.

    In order to add value to the organisation, the work environment has to provide value for

  18. How to Advance TPC Benchmarks with Dependability Aspects

    Science.gov (United States)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact both on the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring their performance. While TPC-E measures the recovery time of some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that systems should nowadays be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  19. Standardized benchmarking in the quest for orthologs

    DEFF Research Database (Denmark)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador

    2016-01-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall...

  20. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rate systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking...

  1. Manikin Testing on LASA Suit

    Science.gov (United States)

    2006-03-16

    environmental sensors were either attached to a hanging frame or the flotation frame to provide the environmental temperature. A wave generator consisting...results of the ACE extreme cold weather garments, LASA immersion suit, and modified current flyer's coverall immersion suit in still air and 40 cm

  2. Algebraic Multigrid Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    2017-08-01

    AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the BoomerAMG solver in the hypre library, a large linear solvers library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL, and is very similar to the AMG2013 benchmark with additional optimizations. The driver provided in the benchmark can build various test problems. The default problem is a Laplace-type problem with a 27-point stencil, which can be scaled up and is designed to solve a very large problem. A second problem simulates a time-dependent problem, in which various successively smaller systems are solved.
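    For reference, the operator of the default test problem is a 27-point Laplace-type stencil on a structured grid. The C sketch below applies such a stencil; the coefficients (26 on the diagonal, -1 for each of the 26 neighbours), the grid sizes, and the zero Dirichlet boundary are illustrative assumptions rather than the exact problem the AMG driver builds.

        #include <stdio.h>
        #include <stdlib.h>

        #define NX 32  /* illustrative grid dimensions */
        #define NY 32
        #define NZ 32
        #define IDX(i,j,k) ((((k) * NY) + (j)) * NX + (i))

        /* Apply a 27-point Laplace-type stencil: 26 at the centre, -1 for
           each of the 26 neighbours (zero Dirichlet boundary assumed). */
        static void apply_stencil(const double *u, double *v)
        {
            for (int k = 1; k < NZ - 1; k++)
                for (int j = 1; j < NY - 1; j++)
                    for (int i = 1; i < NX - 1; i++) {
                        double s = 0.0;
                        for (int dk = -1; dk <= 1; dk++)
                            for (int dj = -1; dj <= 1; dj++)
                                for (int di = -1; di <= 1; di++)
                                    if (di || dj || dk)
                                        s += u[IDX(i + di, j + dj, k + dk)];
                        v[IDX(i, j, k)] = 26.0 * u[IDX(i, j, k)] - s;
                    }
        }

        int main(void)
        {
            double *u = calloc(NX * NY * NZ, sizeof *u);
            double *v = calloc(NX * NY * NZ, sizeof *v);
            u[IDX(NX / 2, NY / 2, NZ / 2)] = 1.0;   /* point source */
            apply_stencil(u, v);
            printf("v at source = %f\n", v[IDX(NX / 2, NY / 2, NZ / 2)]);
            free(u);
            free(v);
            return 0;
        }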

  3. Evaluation of CHO Benchmarks on the Arria 10 FPGA using Intel FPGA SDK for OpenCL

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Zheming [Argonne National Lab. (ANL), Argonne, IL (United States); Yoshii, Kazutomo [Argonne National Lab. (ANL), Argonne, IL (United States); Finkel, Hal [Argonne National Lab. (ANL), Argonne, IL (United States); Cappello, Franck [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-05-23

    The OpenCL standard is an open programming model for accelerating algorithms on heterogeneous computing systems. OpenCL extends the C-based programming language for developing portable codes on different platforms such as CPUs, graphics processing units (GPUs), digital signal processors (DSPs) and field programmable gate arrays (FPGAs). The Intel FPGA SDK for OpenCL is a suite of tools that allows developers to abstract away the complex FPGA-based development flow for a high-level software development flow. Users can focus on the design of hardware-accelerated kernel functions in OpenCL and then direct the tools to generate the low-level FPGA implementations. The approach makes FPGA-based development more accessible to software users as the need for hybrid computing using CPUs and FPGAs increases. It can also significantly reduce the hardware development time as users can evaluate different ideas with a high-level language without deep FPGA domain knowledge. Benchmarking an OpenCL-based framework is an effective way to analyze system performance by studying the execution of the benchmark applications. CHO is a suite of benchmark applications that provides support for OpenCL [1]. The authors presented CHO as an OpenCL port of the CHStone benchmark. Using the Altera OpenCL (AOCL) compiler to synthesize the benchmark applications, they listed the resource usage and performance of each kernel that can be successfully synthesized by the compiler. In this report, we evaluate the resource usage and performance of the CHO benchmark applications using the Intel FPGA SDK for OpenCL and a Nallatech 385A FPGA board that features an Arria 10 FPGA device. The focus of the report is to gain a better understanding of the resource usage and performance of the kernel implementations on Arria 10 FPGA devices compared to Stratix V FPGA devices. In addition, we also gain knowledge about the limitations of the current compiler when it fails to synthesize a benchmark
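    To show the flavour of the high-level flow, a deliberately simple kernel in OpenCL C (the standard's C-based kernel language) follows. With the Intel FPGA SDK for OpenCL such a file is compiled offline, e.g. with the aoc compiler, into an FPGA bitstream that the host program later loads; this vector addition is only a stand-in for the more complex CHStone-derived CHO kernels.

        /* vadd.cl -- a deliberately simple stand-in for the CHO kernels.
           With the Intel FPGA SDK for OpenCL this file is compiled offline,
           e.g. "aoc vadd.cl", producing a bitstream the host program loads
           instead of JIT-compiling kernel source at run time. */
        __kernel void vadd(__global const float *a,
                           __global const float *b,
                           __global float *c,
                           const unsigned n)
        {
            size_t i = get_global_id(0);   /* one work-item per element */
            if (i < n)
                c[i] = a[i] + b[i];
        }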

  4. Mask Waves Benchmark

    Science.gov (United States)

    2007-10-01

    ... See Figure 22 for a comparison of measured waves, linear waves, and non-linear Stokes waves. Looking at the selected 16 runs from the trough-to-peak ... Figure 23 for the benchmark data set, the relation of obtained frequency versus desired frequency is almost completely linear. The slight variation at

  5. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.
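    The tunable balance between computation and communication can be mimicked with a toy MPI pattern in C, sketched below under our own assumptions: WORK floating-point updates per element followed by a global reduction. BSMBench itself varies this balance through its lattice field theory parameters rather than an explicit knob.

        #include <mpi.h>
        #include <stdio.h>

        /* Toy analogue of a tunable compute/communication balance: WORK
           floating-point updates per element, then one global reduction.
           Raising WORK makes the run compute-bound; lowering it lets the
           MPI_Allreduce dominate. (BSMBench varies this balance through
           its lattice field theory parameters, not an explicit knob.) */
        #define N    (1 << 20)
        #define WORK 16

        static double a[N];

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            for (int i = 0; i < N; i++)
                a[i] = 1.0;

            for (int iter = 0; iter < 10; iter++) {
                for (int i = 0; i < N; i++)           /* computation phase */
                    for (int w = 0; w < WORK; w++)
                        a[i] = 0.5 * a[i] + 0.5;

                double local = 0.0, global = 0.0;     /* communication phase */
                for (int i = 0; i < N; i++)
                    local += a[i];
                MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                              MPI_COMM_WORLD);
            }
            MPI_Finalize();
            return 0;
        }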

  6. Benchmarking Cloud Resources for HEP

    Science.gov (United States)

    Alef, M.; Cordeiro, C.; De Salvo, A.; Di Girolamo, A.; Field, L.; Giordano, D.; Guerri, M.; Schiavi, F. C.; Wiebalck, A.

    2017-10-01

    In a commercial cloud environment, exhaustive resource profiling is beneficial to cope with the intrinsic variability of the virtualised environment, allowing to promptly identify performance degradation. In the context of its commercial cloud initiatives, CERN has acquired extensive experience in benchmarking commercial cloud resources. Ultimately, this activity provides information on the actual delivered performance of invoiced resources. In this report we discuss the experience acquired and the results collected using several fast benchmark applications adopted by the HEP community. These benchmarks span from open-source benchmarks to specific user applications and synthetic benchmarks. The workflow put in place to collect and analyse performance metrics is also described.

  7. Adobe Creative Suite 4 Bible

    CERN Document Server

    Padova, Ted

    2009-01-01

    As one of the few books to cover integration and workflow issues between Photoshop, Illustrator, InDesign, GoLive, Acrobat, and Version Cue, this comprehensive reference is the one book that Creative Suite users need; Two well-known and respected authors cover topics such as developing consistent color-managed workflows, moving files among the Creative Suite applications, preparing files for print or the Web, repurposing documents, and using the Creative Suite with Microsoft Office documents; More than 1,200 pages are packed with valuable advice and techniques for tackling common everyday issu

  8. EDL Sensor Suite, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Optical Air Data Systems (OADS) L.L.C. proposes a LIDAR based remote measurement sensor suite capable of satisfying a significant number of the desired sensing...

  9. EVA Suit Microbial Leakage Investigation

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this project is to collect microbial samples from various EVA suits to determine how much microbial contamination is typically released during...

  10. Satellite Ocean Heat Content Suite

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This collection contains an operational Satellite Ocean Heat Content Suite (SOHCS) product generated by NOAA National Environmental Satellite, Data, and Information...

  11. Benchmarking Cloud Storage Systems

    OpenAIRE

    Wang, Xing

    2014-01-01

    With the rise of cloud computing, many cloud storage systems like Dropbox, Google Drive and Mega have been built to provide decentralized and reliable file storage. It is thus of prime importance to know their features, performance, and the best way to make use of them. In this context, we introduce BenchCloud, a tool designed as part of this thesis to conveniently and efficiently benchmark any cloud storage system. First, we provide a study of six commonly-used cloud storage systems to ident...

  12. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    This report is based on the survey 'Industrial Companies in Denmark - Today and Tomorrow', section IV: Supply Chain Management - Practices and Performance, question number 4.9 on performance assessment. To our knowledge, this survey is unique, as we have not been able to find results from any compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...

  13. Benchmarking Evaluation Results for Prototype Extravehicular Activity Gloves

    Science.gov (United States)

    Aitchison, Lindsay; McFarland, Shane

    2012-01-01

    The Space Suit Assembly (SSA) Development Team at NASA Johnson Space Center has invested heavily in the advancement of rear-entry planetary exploration suit design but largely deferred development of extravehicular activity (EVA) glove designs, and accepted the risk of using the current flight gloves, Phase VI, for unique mission scenarios outside the Space Shuttle and International Space Station (ISS) Program realm of experience. However, as design reference missions mature, the risks of using heritage hardware have highlighted the need for developing robust new glove technologies. To address the technology gap, the NASA Game-Changing Technology group provided start-up funding for the High Performance EVA Glove (HPEG) Project in the spring of 2012. The overarching goal of the HPEG Project is to develop a robust glove design that increases human performance during EVA and creates a pathway for future implementation of emergent technologies, with specific aims of increasing pressurized mobility to 60% of barehanded capability, increasing durability by 100%, and decreasing the potential of gloves to cause injury during use. The HPEG Project focused initial efforts on identifying potential new technologies and benchmarking the performance of current state-of-the-art gloves to identify trends in design and fit, leading to standards and metrics against which emerging technologies can be assessed at both the component and assembly levels. The first of the benchmarking tests evaluated the quantitative mobility performance and subjective fit of four prototype gloves developed by Flagsuit LLC, Final Frontier Designs, LLC Dover, and David Clark Company as compared to the Phase VI. All of the companies were asked to design and fabricate gloves to the same set of NASA-provided hand measurements (which corresponded to a single size of Phase VI glove) and to focus their efforts on improving mobility in the metacarpal phalangeal and carpometacarpal joints. Four test

  14. Benchmarking the UAF Tsunami Code

    Science.gov (United States)

    Nicolsky, D.; Suleimani, E.; West, D.; Hansen, R.

    2008-12-01

    We have developed a robust numerical model to simulate propagation and run-up of tsunami waves in the framework of non-linear shallow water theory. A temporal position of the shoreline is calculated using the free-surface moving boundary condition. The numerical code adopts a staggered leapfrog finite-difference scheme to solve the shallow water equations formulated for depth-averaged water fluxes in spherical coordinates. To increase spatial resolution, we construct a series of telescoping embedded grids that focus on areas of interest. For large scale problems, a parallel version of the algorithm is developed by employing a domain decomposition technique. The developed numerical model is benchmarked in an exhaustive series of tests suggested by NOAA. We conducted analytical and laboratory benchmarking for the cases of solitary wave runup on simple and composite beaches, run-up of a solitary wave on a conical island, and the extreme runup in the Monai Valley, Okushiri Island, Japan, during the 1993 Hokkaido-Nansei-Oki tsunami. Additionally, we field-tested the developed model to simulate the November 15, 2006 Kuril Islands tsunami, and compared the simulated water height to observations at several DART buoys. In all conducted tests we calculated a numerical solution with an accuracy recommended by NOAA standards. In this work we summarize results of numerical benchmarking of the code, its strengths and limits with regards to reproduction of fundamental features of coastal inundation, and also illustrate some possible improvements. We applied the developed model to simulate potential inundation of the city of Seward located in Resurrection Bay, Alaska. To calculate the areal extent of potential inundation, we take into account available near-shore bathymetry and inland topography on a grid of 15 meter resolution. By choosing several scenarios of potential earthquakes, we calculated the maximal areal extent of Seward inundation. As a test to validate our model, we
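    The essence of the staggered leapfrog discretization can be shown in one dimension. The C sketch below advances the linear shallow water equations over a flat bottom with reflective ends; it is only an analogue of the code described above, which solves the nonlinear equations for depth-averaged fluxes in spherical coordinates with moving-boundary run-up and nested grids. All parameter values are illustrative.

        #include <math.h>
        #include <stdio.h>

        #define NX 200
        #define NT 500

        /* 1-D staggered leapfrog for the linear shallow water equations:
           eta (surface elevation) lives at cell centres, the volume flux q
           at cell faces; q[0] = q[NX] = 0 gives reflective walls. */
        int main(void)
        {
            const double g = 9.81, h = 100.0;         /* depth (m), flat bottom */
            const double dx = 1000.0;                 /* grid spacing (m) */
            const double dt = 0.5 * dx / sqrt(g * h); /* CFL-limited time step */
            double eta[NX] = {0}, q[NX + 1] = {0};

            for (int i = 0; i < NX; i++)              /* initial Gaussian hump */
                eta[i] = exp(-pow((i - NX / 2) * dx / 5000.0, 2));

            for (int n = 0; n < NT; n++) {
                for (int i = 1; i < NX; i++)          /* momentum: q at faces */
                    q[i] -= g * h * dt / dx * (eta[i] - eta[i - 1]);
                for (int i = 0; i < NX; i++)          /* continuity: eta at centres */
                    eta[i] -= dt / dx * (q[i + 1] - q[i]);
            }
            printf("eta at centre after %d steps: %f\n", NT, eta[NX / 2]);
            return 0;
        }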

  15. Development of Power Assisting Suit

    Science.gov (United States)

    Yamamoto, Keijiro; Ishii, Mineo; Hyodo, Kazuhito; Yoshimitsu, Toshihiro; Matsuo, Takashi

    In order to realize a wearable power assisting suit for assisting a nurse to carry a patient in her arms, the power supply and control systems of the suit have to be miniaturized, and it has to be wireless and pipeline-less. The new wearable suit consists of shoulder, arm, back, waist and leg units to be fitted on the nurse's body. The arms, waist and legs have new pneumatic rotary actuators driven directly by micro air pumps supplied by portable Ni-Cd batteries. The muscle forces are sensed by a new muscle hardness sensor utilizing a sensing tip mounted on a force sensing film device. An embedded microcomputer is used for the calculations of control signals. The new wearable suit was applied practically to a human body, and a series of movement experiments was performed in which weights held in the arms were raised and lowered. Each unit of the suit could transmit assisting torque directly to each joint, verifying its practicability.

  16. Benchmarking high performance computing architectures with CMS’ skeleton framework

    Science.gov (United States)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel's Threading Building Blocks library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures; machines such as Cori Phase 1&2, Theta, Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  17. Benchmark problems for numerical implementations of phase field models

    International Nuclear Information System (INIS)

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; Warren, J.; Heinonen, O. G.

    2016-01-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
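    As a minimal illustration of the first problem class, the C sketch below takes explicit finite-difference Cahn-Hilliard steps on a small periodic grid with the double-well free energy c^2(1-c)^2, the standard model of spinodal decomposition. The discretization and all parameter values are our own illustrative assumptions and do not reproduce the CHiMaD/NIST benchmark specifications.

        #include <math.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define N 64   /* small periodic grid, illustrative only */
        #define IDX(i,j) (((((i) + N) % N) * N) + (((j) + N) % N))

        /* Periodic 5-point Laplacian. */
        static void lap(const double *f, double *out, double dx)
        {
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++)
                    out[IDX(i,j)] = (f[IDX(i+1,j)] + f[IDX(i-1,j)]
                                   + f[IDX(i,j+1)] + f[IDX(i,j-1)]
                                   - 4.0 * f[IDX(i,j)]) / (dx * dx);
        }

        int main(void)
        {
            const double dx = 1.0, dt = 0.01, M = 1.0, kappa = 1.0;
            double *c  = malloc(N * N * sizeof *c);
            double *mu = malloc(N * N * sizeof *mu);
            double *l  = malloc(N * N * sizeof *l);

            srand(42);                               /* noisy initial condition */
            for (int k = 0; k < N * N; k++)
                c[k] = 0.5 + 0.02 * ((double)rand() / RAND_MAX - 0.5);

            for (int step = 0; step < 1000; step++) {
                lap(c, l, dx);
                for (int k = 0; k < N * N; k++) {    /* chemical potential:
                                                        mu = W'(c) - kappa lap(c),
                                                        W  = c^2 (1-c)^2 */
                    double w = 2.0 * c[k] * (1.0 - c[k]) * (1.0 - 2.0 * c[k]);
                    mu[k] = w - kappa * l[k];
                }
                lap(mu, l, dx);                      /* dc/dt = M lap(mu) */
                for (int k = 0; k < N * N; k++)
                    c[k] += dt * M * l[k];
            }
            printf("c[0] after decomposition: %f\n", c[0]);
            free(c); free(mu); free(l);
            return 0;
        }

    Comparing two such implementations, for instance with different time stepping strategies as the paper does, is precisely the kind of exercise the proposed benchmark problems are meant to standardize.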

  18. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    was its software assurance practices, which seemed to rate well in comparison to the other organizational groups and also seemed to include a larger scope of activities. An unexpected benefit of the software benchmarking study was the identification of many opportunities for collaboration in areas including metrics, training, sharing of CMMI experiences and resources such as instructors and CMMI Lead Appraisers, and even sharing of assets such as documented processes. A further unexpected benefit of the study was the feedback on NASA practices that was received from some of the organizations interviewed. From that feedback, other potential areas where NASA could improve were highlighted, such as accuracy of software cost estimation and budgetary practices. The detailed report contains discussion of the practices noted in each of the topic areas, as well as a summary of observations and recommendations from each of the topic areas. The resulting 24 recommendations from the topic areas were then consolidated to eliminate duplication and culled into a set of 14 suggested actionable recommendations. This final set of actionable recommendations, listed below, are items that can be implemented to improve NASA's software engineering practices and to help address many of the items that were listed in the NASA top software engineering issues. 1. Develop and implement standard contract language for software procurements. 2. Advance accurate and trusted software cost estimates for both procured and in-house software and improve the capture of actual cost data to facilitate further improvements. 3. Establish a consistent set of objectives and expectations, specifically types of metrics at the Agency level, so key trends and models can be identified and used to continuously improve software processes and each software development effort. 4. Maintain the CMMI Maturity Level requirement for critical NASA projects and use CMMI to measure organizations developing software for NASA. 5

  19. Present Status and Extensions of the Monte Carlo Performance Benchmark

    Science.gov (United States)

    Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.

    2014-06-01

    The NEA Monte Carlo Performance benchmark started in 2011 aiming to monitor over the years the abilities to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the contributed results thus far. It shows that reaching a statistical accuracy of 1 % for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common type computer nodes. However, using true supercomputers the speedup of parallel calculations is increasing up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict if the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited for further investigations of full-core Monte Carlo calculations and a need is felt for testing other issues than its computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities and to study the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed.
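
    The "100 billion histories" figure follows from the 1/√n convergence of Monte Carlo tallies. A back-of-the-envelope check (the per-zone score fraction below is an assumed, illustrative value):

      # Relative error of a Monte Carlo tally falls off as 1/sqrt(n_zone).
      def histories_for_accuracy(rel_err, zone_fraction):
          # Total histories needed so that a zone receiving `zone_fraction`
          # of all scores reaches relative error `rel_err`.
          n_zone = 1.0 / rel_err**2          # scores needed inside the zone
          return n_zone / zone_fraction

      # ~1% accuracy in a small axial pin zone collecting ~1e-7 of the scores:
      print(f"{histories_for_accuracy(0.01, 1e-7):.1e}")   # -> 1.0e+11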

  20. Shielding benchmark test

    International Nuclear Information System (INIS)

    Kawai, Masayoshi

    1984-01-01

    Iron data in JENDL-2 have been tested by analyzing shielding benchmark experiments on neutron transmission through an iron block, performed at KFK using a Cf-252 neutron source and at ORNL using a collimated neutron beam from a reactor. The analyses were made with RADHEAT-V4, a shielding analysis code system developed at JAERI, and the calculated results are compared with the measured data. For the KFK experiments, the C/E values are about 1.1. For the ORNL experiments, the calculated values agree with the measured data within 33% for the off-center geometry. The d-t neutron transmission measurements through a carbon sphere performed at LLNL are also analyzed in a preliminary way using the revised JENDL data for fusion neutronics calculations. (author)

  1. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center`s (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project`s Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley`s Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program`s Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  2. Suited Contingency Ops Food - 2

    Science.gov (United States)

    Glass, J. W.; Leong, M. L.; Douglas, G. L.

    2014-01-01

    The contingency scenario for an emergency cabin depressurization event may require crewmembers to subsist in a pressurized suit for up to 144 hours. This scenario requires the capability for safe nutrition delivery through a helmet feed port against a 4 psi pressure differential to enable crewmembers to maintain strength and cognition to perform critical tasks. Two nutritional delivery prototypes were developed and analyzed for compatibility with the helmet feed port interface and for operational effectiveness against the pressure differential. The bag-in-bag (BiB) prototype, designed to equalize the suit pressure with the beverage pouch and enable a crewmember to drink normally, delivered water successfully to three different subjects in suits pressurized to 4 psi. The Boa restrainer pouch, designed to provide mechanical leverage to overcome the pressure differential, did not operate sufficiently. Guidelines were developed and compiled for contingency beverages that provide macro-nutritional requirements, a minimum one-year shelf life, and compatibility with the delivery hardware. Evaluation results and food product parameters have the potential to be used to improve future prototype designs and develop complete nutritional beverages for contingency events. These feeding capabilities would have additional use on extended surface mission EVAs, where the current in-suit drinking device may be insufficient.

  3. Author's Rights, Tout de Suite

    OpenAIRE

    Bailey, Jr., Charles W.

    2008-01-01

    Author's Rights, Tout de Suite is designed to give journal article authors a quick introduction to key aspects of author's rights and to foster further exploration of this topic through liberal use of relevant references to online documents and links to pertinent Web sites.

  4. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  5. EPA's Benchmark Dose Modeling Software

    Science.gov (United States)

    The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors apply benchmark dose (BMD) methods in EPA's human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...
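
    For orientation, the BMD approach fits a dose-response model and inverts it at a benchmark response. A sketch under simplified assumptions (a one-parameter quantal-linear model and invented data; this is not the BMDS implementation):

      import numpy as np

      doses     = np.array([0.0, 10.0, 30.0, 100.0])
      responded = np.array([1, 4, 9, 18])      # animals responding (invented)
      n         = np.array([20, 20, 20, 20])   # animals per dose group

      def nll(beta):
          # negative log-likelihood; background fixed at the dose-0 rate
          p0 = responded[0] / n[0]
          p = p0 + (1.0 - p0) * (1.0 - np.exp(-beta * doses))
          p = np.clip(p, 1e-9, 1.0 - 1e-9)
          return -np.sum(responded * np.log(p) + (n - responded) * np.log(1.0 - p))

      betas = np.linspace(1e-4, 0.1, 2000)          # crude grid search
      beta_hat = betas[np.argmin([nll(b) for b in betas])]
      bmd10 = -np.log(1.0 - 0.10) / beta_hat        # dose giving 10% extra risk
      print(round(bmd10, 1))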

  6. A Field-Based Aquatic Life Benchmark for Conductivity in ...

    Science.gov (United States)

    This report adapts the standard U.S. EPA methodology for deriving ambient water quality criteria. Rather than use toxicity test results, the adaptation uses field data to determine the loss of 5% of genera from streams. The method is applied to derive effect benchmarks for dissolved salts as measured by conductivity in Central Appalachian streams using data from West Virginia and Kentucky. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.

  7. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    In order to learn from the best, in 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for 'sustainable transport'. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all, it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly, 'sustainable transport' ... is generally not advised. Several other ways in which benchmarking and policy can support one another are identified in the analysis. This leads to a range of recommended initiatives to exploit the benefits of benchmarking in transport while avoiding some of the lurking pitfalls and dead ends.

  8. A ChIP-Seq benchmark shows that sequence conservation mainly improves detection of strong transcription factor binding sites.

    Directory of Open Access Journals (Sweden)

    Tony Håndstad

    BACKGROUND: Transcription factors are important controllers of gene expression, and mapping transcription factor binding sites (TFBS) is key to inferring transcription factor regulatory networks. Several methods for predicting TFBS exist, but there are no standard genome-wide datasets on which to assess the performance of these prediction methods. Also, it is believed that information about sequence conservation across different genomes can generally improve the accuracy of motif-based predictors, but it is not clear under what circumstances use of conservation is most beneficial. RESULTS: Here we use published ChIP-seq data and an improved peak detection method to create comprehensive benchmark datasets for prediction methods which use known descriptors or binding motifs to detect TFBS in genomic sequences. We use this benchmark to assess the performance of five different prediction methods and find that the methods that use information about sequence conservation generally perform better than simpler motif-scanning methods. The difference is greater on high-affinity peaks and when using short and information-poor motifs. However, if the motifs are specific and information-rich, we find that simple motif-scanning methods can perform better than conservation-based methods. CONCLUSIONS: Our benchmark provides a comprehensive test that can be used to rank the relative performance of transcription factor binding site prediction methods. Moreover, our results show that, contrary to previous reports, sequence conservation is better suited for predicting strong than weak transcription factor binding sites.
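
    As background, the motif-scanning baseline referred to above slides a position weight matrix (PWM) over a sequence and reports the best-scoring window. A toy sketch (the PWM is made up, not a real transcription factor motif):

      import numpy as np

      BASES = {"A": 0, "C": 1, "G": 2, "T": 3}
      pwm = np.log2(np.array([        # log-odds scores; columns = positions
          [0.80, 0.10, 0.10, 0.70],   # A
          [0.10, 0.10, 0.70, 0.10],   # C
          [0.05, 0.70, 0.10, 0.10],   # G
          [0.05, 0.10, 0.10, 0.10],   # T
      ]) / 0.25)                      # uniform background

      def best_hit(seq):
          w = pwm.shape[1]
          scores = [sum(pwm[BASES[b], j] for j, b in enumerate(seq[i:i + w]))
                    for i in range(len(seq) - w + 1)]
          i = int(np.argmax(scores))
          return i, scores[i]          # position and score of best window

      print(best_hit("TTAGCATTTACGA"))  # finds the planted "AGCA" site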

  9. DYNA3D/ParaDyn Regression Test Suite Inventory

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Jerry I. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-09-01

    The following table constitutes an initial assessment of feature coverage across the regression test suite used for DYNA3D and ParaDyn. It documents the regression test suite at the time of preliminary release 16.1 in September 2016. The columns of the table represent groupings of functionalities, e.g., material models. Each problem in the test suite is represented by a row in the table. All features exercised by the problem are denoted by a check mark (√) in the corresponding column. The definition of “feature” has not been subdivided to its smallest unit of user input, e.g., algorithmic parameters specific to a particular type of contact surface. This represents a judgment to provide code developers and users a reasonable impression of feature coverage without expanding the width of the table by several multiples. All regression testing is run in parallel, typically with eight processors, except for problems involving features only available in serial mode. Many are strictly regression tests acting as a check that the codes continue to produce adequately repeatable results as development unfolds, compilers change, and platforms are replaced. A subset of the tests represents true verification problems that have been checked against analytical or other benchmark solutions. Users are welcome to submit documented problems for inclusion in the test suite, especially if they are heavily exercising, and dependent upon, features that are currently underrepresented.

  10. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption [Dutch original: Greenpeace Netherlands asked CE Delft to design a sustainability benchmark for transport biofuels and to score the various biofuels against it. For comparison, driving on electricity, hydrogen, and petrol or diesel were also included. Research and growing insight increasingly show that biomass-based transport fuels sometimes cause as many or even more greenhouse gas emissions than fossil fuels such as petrol and diesel. For Greenpeace Netherlands, CE Delft has summarised the current insights into the sustainability of fossil fuels, biofuels and electric driving, looking at the fuels' effects on three sustainability criteria, with greenhouse gas emissions weighing heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient use.]

  11. Virtual machine performance benchmarking.

    Science.gov (United States)

    Langer, Steve G; French, Todd

    2011-10-01

    The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever tightened reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High performance requirements need to be carefully considered if they are to be executed in an environment where the running software has to execute through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising.
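
    A minimal flavour of such micro-benchmarks is shown below: effective memory bandwidth timed by a streaming kernel, which could be run identically on bare metal and inside a VM and compared (an editorial sketch, not the authors' instrument):

      import time
      import numpy as np

      buf = np.zeros(64 * 1024 * 1024 // 8)   # 64 MiB of float64

      t0 = time.perf_counter()
      for _ in range(10):
          buf += 1.0                           # streaming read-modify-write
      dt = time.perf_counter() - t0
      # each pass reads and writes the buffer once
      print(f"~{10 * buf.nbytes * 2 / dt / 1e9:.1f} GB/s effective")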

  12. Benchmarking of energy time series

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, M.A.

    1990-04-01

    Benchmarking consists of the adjustment of time series data from one source in order to achieve agreement with similar data from a second source. The data from the latter source are referred to as the benchmark(s), and often differ in that they are observed at a lower frequency, represent a higher level of temporal aggregation, and/or are considered to be of greater accuracy. This report provides an extensive survey of benchmarking procedures which have appeared in the statistical literature, and reviews specific benchmarking procedures currently used by the Energy Information Administration (EIA). The literature survey includes a technical summary of the major benchmarking methods and their statistical properties. Factors influencing the choice and application of particular techniques are described and the impact of benchmark accuracy is discussed. EIA applications and procedures are reviewed and evaluated for residential natural gas deliveries series and coal production series. It is found that the current method of adjusting the natural gas series is consistent with the behavior of the series and the methods used in obtaining the initial data. As a result, no change is recommended. For the coal production series, a staged approach based on a first differencing technique is recommended over the current procedure. A comparison of the adjustments produced by the two methods is made for the 1987 Indiana coal production series. 32 refs., 5 figs., 1 tab.
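
    To make "benchmarking" in this statistical sense concrete, the toy example below adjusts a monthly series so its annual total agrees with a trusted annual benchmark. Pro-rata scaling is the simplest method; the first-difference (Denton-type) approach mentioned above spreads the adjustment more smoothly instead. All numbers are invented:

      import numpy as np

      monthly = np.array([9., 11., 10., 12., 10., 8.,
                          9., 11., 12., 10., 9., 9.])   # indicator series
      annual_benchmark = 130.0                          # trusted annual total

      adjusted = monthly * annual_benchmark / monthly.sum()
      print(adjusted.sum())                             # 130.0, now agrees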

  13. Correction factors for assessing immersion suits under harsh conditions.

    Science.gov (United States)

    Power, Jonathan; Tikuisis, Peter; Ré, António Simões; Barwood, Martin; Tipton, Michael

    2016-03-01

    Many immersion suit standards require testing of thermal protective properties in calm, circulating water, while these suits are typically used in harsher environments where they often underperform. Yet it can be expensive and logistically challenging to test immersion suits in realistic conditions. The goal of this work was to develop a set of correction factors that would allow suits to be tested in calm water yet ensure they will offer sufficient protection in harsher conditions. Two immersion studies, one dry and the other with 500 mL of water within the suit, were conducted in wind and waves to measure the change in suit insulation. In both studies, wind and waves resulted in a significantly lower immersed insulation value compared to calm water. The minimum required thermal insulation for maintaining heat balance can be calculated for a given mean skin temperature, metabolic heat production, and water temperature. Combining the physiological limits of sustainable cold water immersion and actual suit insulation, correction factors can be deduced for harsh conditions compared to calm. The minimum in-situ suit insulation to maintain thermal balance is 1.553 − 0.0624·TW + 0.00018·TW² for a dry calm condition, where TW is the water temperature. Multiplicative correction factors to the above equation are 1.37, 1.25, and 1.72 for wind + waves, 500 mL suit wetness, and both combined, respectively. Calm water certification tests of suit insulation should meet or exceed the minimum in-situ requirements to maintain thermal balance, and correction factors should be applied for a more realistic determination of minimum insulation for harsh conditions.
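
    The regression and correction factors quoted above translate directly into code (variable and key names are ours; TW is water temperature in °C, insulation in the units of the source):

      def min_insulation(tw, condition="calm_dry"):
          # minimum in-situ suit insulation to maintain thermal balance
          base = 1.553 - 0.0624 * tw + 0.00018 * tw**2   # dry, calm water
          factor = {"calm_dry": 1.0,
                    "wind_waves": 1.37,
                    "wet_500ml": 1.25,
                    "wind_waves_wet": 1.72}[condition]
          return base * factor

      print(round(min_insulation(5.0, "wind_waves_wet"), 3))  # harshest case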

  14. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1, Web-based Benchmarking, was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the

  15. Benchmarking & european sustainable transport policies

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik

    2003-01-01

    The paper provides an independent ... to support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency.

  16. A Heterogeneous Medium Analytical Benchmark

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1999-01-01

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels to coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy).

  17. Detection of Weak Spots in Benchmarks Memory Space by using PCA and CA

    Directory of Open Access Journals (Sweden)

    Abdul Kareem PARCHUR

    2010-12-01

    This paper identifies weak spots in the SPEC CPU INT 2006 benchmark memory space by using Principal Component Analysis (PCA) and Cluster Analysis (CA). We used recently published SPEC CPU INT 2006 benchmark scores of AMD Opteron 2000+ and AMD Opteron 8000+ series processors. The four most significant PCs retain 72.6% of the variance; PC2, PC3, PC4, and PC5 cover 26.5%, 2.9%, 0.91%, and 0.019% of the variance, respectively. The dendrogram is useful for identifying similarities and dissimilarities between the benchmarks in workload space. These results and analyses can be used by performance engineers, scientists and developers to better understand benchmark behavior in workload space and to design a benchmark suite that covers the complete workload space.
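
    The analysis pipeline described, PCA for the variance structure and hierarchical clustering for the dendrogram, can be sketched as follows (synthetic scores, not the SPEC data used in the paper):

      import numpy as np
      from scipy.cluster.hierarchy import dendrogram, linkage
      from sklearn.decomposition import PCA

      # rows = processors, columns = benchmark scores (synthetic values)
      scores = np.array([[12.1, 14.0,  9.8, 11.5],
                         [11.8, 13.5, 10.1, 11.9],
                         [17.2, 19.4, 13.0, 15.8],
                         [16.9, 19.9, 13.4, 15.2]])

      pca = PCA().fit(scores)
      print(pca.explained_variance_ratio_)   # variance retained per PC

      # cluster benchmarks (columns) to expose similarity in workload space
      Z = linkage(scores.T, method="average")
      dendrogram(Z, no_plot=True)            # structure only; plot if desired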

  18. Vehicle-network defensive aids suite

    Science.gov (United States)

    Rapanotti, John

    2005-05-01

    Defensive Aids Suites (DAS) developed for vehicles can be extended to the vehicle network level. The vehicle network, typically comprising four platoon vehicles, will benefit from improved communications and automation based on low latency response to threats from a flexible, dynamic, self-healing network environment. Improved DAS performance and reliability relies on four complementary sensor technologies including: acoustics, visible and infrared optics, laser detection and radar. Long-range passive threat detection and avoidance is based on dual-purpose optics, primarily designed for manoeuvring, targeting and surveillance, combined with dazzling, obscuration and countermanoeuvres. Short-range active armour is based on search and track radar and intercepting grenades to defeat the threat. Acoustic threat detection increases the overall robustness of the DAS and extends the detection range to include small calibers. Finally, detection of active targeting systems is carried out with laser and radar warning receivers. Synthetic scene generation will provide the integrated environment needed to investigate, develop and validate these new capabilities. Computer generated imagery, based on validated models and an acceptable set of benchmark vignettes, can be used to investigate and develop fieldable sensors driven by real-time algorithms and countermeasure strategies. The synthetic scene environment will be suitable for sensor and countermeasure development in hardware-in-the-loop simulation. The research effort focuses on two key technical areas: a) computing aspects of the synthetic scene generation and b) and development of adapted models and databases. OneSAF is being developed for research and development, in addition to the original requirement of Simulation and Modelling for Acquisition, Rehearsal, Requirements and Training (SMARRT), and is becoming useful as a means for transferring technology to other users, researchers and contractors. This procedure

  19. Water Level Superseded Benchmark Sheets

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) tidal benchmarks which have been superseded by new markers or locations....

  20. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work...

  1. Analysis of Benchmark 2 results

    International Nuclear Information System (INIS)

    Bacha, F.; Lefievre, B.; Maillard, J.; Silva, J.

    1994-01-01

    The code GEANT315 has been compared to different codes in two benchmarks. We analyze its performance through our results, especially in the thick-target case. In spite of gaps in nucleus-nucleus interaction theories at intermediate energies, benchmarks make it possible to improve the physical models used in our codes. Thereafter, a scheme for a radioactive-waste burning system is studied. (authors). 4 refs., 7 figs., 1 tab

  2. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
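
    The merged machine/program characterization amounts to combining per-operation timings measured on a machine with operation counts measured from a program; a sketch with invented numbers (the actual methodology is phrased in terms of a Fortran abstract machine):

      # nanoseconds per abstract operation, measured on a target machine
      machine_ns_per_op = {"fadd": 1.2, "fmul": 1.5, "load": 2.0, "branch": 0.8}
      # operation counts measured from a program trace
      program_op_counts = {"fadd": 4.0e9, "fmul": 3.2e9,
                           "load": 7.5e9, "branch": 1.1e9}

      est_seconds = sum(machine_ns_per_op[op] * count
                        for op, count in program_op_counts.items()) * 1e-9
      print(f"estimated runtime: {est_seconds:.1f} s")   # ~25.5 s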

  3. Features of energy efficiency benchmarking implementation as tools of DSTU ISO 50001: 2014 for Ukrainian industrial enterprises

    Directory of Open Access Journals (Sweden)

    Анастасія Юріївна Данілкова

    2015-12-01

    The essence, types, and stages of energy-efficiency benchmarking in industrial enterprises are considered. Features, advantages, disadvantages, and limitations of its use are defined, and the underlying problems that could affect the successful conduct of energy-efficiency benchmarking at Ukrainian industrial enterprises are specified. Energy-efficiency benchmarking is proposed as a tool for implementing the national standard DSTU ISO 50001:2014.

  4. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    Science.gov (United States)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the

  5. A Base Integer Programming Model and Benchmark Suite for Liner-Shipping Network Design

    DEFF Research Database (Denmark)

    Brouer, Berit Dangaard; Alvarez, Fernando; Plum, Christian Edinger Munk

    2014-01-01

    The potential for making cost-effective and energy-efficient liner-shipping networks using operations research (OR) is huge and neglected. The implementation of logistic planning tools based upon OR has enhanced performance of airlines, railways, and general transportation companies, but within the field

  6. An integer programming model and benchmark suite for liner shipping network design

    DEFF Research Database (Denmark)

    Løfstedt, Berit; Alvarez, Jose Fernando; Plum, Christian Edinger Munk

    The potential for making cost-effective and energy efficient liner shipping networks using operations research is huge and neglected. The implementation of logistic planning tools based upon operations research has enhanced performance of both airlines, railways and general transportation companies, but within the field of liner
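
    To give a flavour of integer programming in this domain, here is a deliberately tiny service-selection model written with the PuLP library (data and structure invented; the paper's base model is far richer, with vessel classes, transhipment and capacities):

      from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

      demands = ["AS-EU", "AS-US", "EU-US"]
      rotations = {"R1": (900, {"AS-EU"}),            # weekly cost, coverage
                   "R2": (1200, {"AS-US", "EU-US"}),
                   "R3": (1500, {"AS-EU", "EU-US"})}

      m = LpProblem("liner_toy", LpMinimize)
      x = {r: LpVariable(r, cat=LpBinary) for r in rotations}
      m += lpSum(rotations[r][0] * x[r] for r in rotations)   # total cost
      for d in demands:                                       # cover demand
          m += lpSum(x[r] for r in rotations if d in rotations[r][1]) >= 1
      m.solve()
      print({r: int(x[r].value()) for r in rotations})        # picks R1, R2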

  7. European Organisation for Research and Treatment of Cancer (EORTC) Pathobiology Group standard operating procedure for the preparation of human tumour tissue extracts suited for the quantitative analysis of tissue-associated biomarkers.

    Science.gov (United States)

    Schmitt, Manfred; Mengele, Karin; Schueren, Elisabeth; Sweep, Fred C G J; Foekens, John A; Brünner, Nils; Laabs, Juliane; Malik, Abha; Harbeck, Nadia

    2007-03-01

    With the new concept of 'individualized treatment and targeted therapies', tumour tissue-associated biomarkers have been given a new role in selection of cancer patients for treatment and in cancer patient management. Tumour biomarkers can give support to cancer patient stratification and risk assessment, treatment response identification, or to identifying those patients who are expected to respond to certain anticancer drugs. As the field of tumour-associated biomarkers has expanded rapidly over the last years, it has become increasingly apparent that a strong need exists to establish guidelines on how to easily disintegrate the tumour tissue for assessment of the presence of tumour tissue-associated biomarkers. Several mechanical tissue (cell) disruption techniques exist, ranging from bead mill homogenisation and freeze-fracturing through to blade or pestle-type homogenisation, to grinding and ultrasonics. Still, only a few directives have been given on how fresh-frozen tumour tissues should be processed for the extraction and determination of tumour biomarkers. The PathoBiology Group of the European Organisation for Research and Treatment of Cancer therefore has devised a standard operating procedure for the standardised preparation of human tumour tissue extracts which is designed for the quantitative analysis of tumour tissue-associated biomarkers. The easy to follow technical steps involved require 50-300 mg of deep-frozen cancer tissue placed into small size (1.2 ml) cryogenic tubes. These are placed into the shaking flask of a Mikro-Dismembrator S machine (bead mill) to pulverise the tumour tissue in the capped tubes in the deep-frozen state by use of a stainless steel ball, all within 30 s of exposure. RNA is isolated from the pulverised tissue following standard procedures. Proteins are extracted from the still frozen pulverised tissue by addition of Tris-buffered saline to obtain the cytosol fraction of the tumour or by the Tris buffer supplemented with

  8. Safety in the use of pressurized suits

    International Nuclear Information System (INIS)

    1984-01-01

    This Code of Practice describes the procedures relating to the safe operation of Pressurized Suit Areas and their supporting services. It is directed at personnel responsible for the design and/or operation of Pressurized Suit Areas. (author)

  9. Inertial motion capture system for biomechanical analysis in pressure suits

    Science.gov (United States)

    Di Capua, Massimiliano

    A non-invasive system has been developed at the University of Maryland Space System Laboratory with the goal of providing a new capability for quantifying the motion of the human inside a space suit. Based on an array of six microprocessors and eighteen microelectromechanical (MEMS) inertial measurement units (IMUs), the Body Pose Measurement System (BPMS) allows the monitoring of the kinematics of the suit occupant in an unobtrusive, self-contained, lightweight and compact fashion, without requiring any external equipment such as that necessary with modern optical motion capture systems. BPMS measures and stores the accelerations, angular rates and magnetic fields acting upon the IMUs, which are mounted on the head, torso, and each segment of each limb. In order to convert the raw data into a more useful form, such as a set of body segment angles quantifying pose and motion, a series of geometrical models and a non-linear complementary filter were implemented. The first portion of this work focuses on assessing system performance, which was measured by comparing the BPMS filtered data against rigid body angles measured through an external VICON optical motion capture system. This type of system is the industry standard, and is used here for independent measurement of body pose angles. By comparing the two sets of data, performance metrics such as BPMS operational conditions, accuracy, and drift were evaluated and correlated against the VICON data. After the system and models were verified and their capabilities and limitations assessed, a series of pressure suit evaluations were conducted. Three different pressure suits were used to identify the relationship between usable range of motion and internal suit pressure. In addition to addressing range of motion, a series of exploration tasks were also performed, recorded, and analysed in order to identify different motion patterns and trajectories as suit pressure is increased and overall suit mobility is reduced
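
    The core of the sensor-fusion step can be illustrated with a one-axis complementary filter, which blends the drift-free but noisy accelerometer tilt with the smooth but drifting gyro integral (an editorial sketch; the BPMS filter is a non-linear, full 3-D formulation):

      import numpy as np

      def complementary_filter(gyro_rate, accel_angle, dt=0.01, alpha=0.98):
          angle = accel_angle[0]
          out = []
          for w, a in zip(gyro_rate, accel_angle):
              # trust the integrated gyro short-term, the accelerometer long-term
              angle = alpha * (angle + w * dt) + (1.0 - alpha) * a
              out.append(angle)
          return np.array(out)

      t = np.arange(0.0, 5.0, 0.01)
      true = 30.0 * np.sin(0.5 * t)                    # true tilt, degrees
      gyro = np.gradient(true, 0.01) + 0.5             # rate with constant bias
      accel = true + np.random.default_rng(1).normal(0.0, 2.0, t.size)
      est = complementary_filter(gyro, accel)          # tracks `true` closely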

  10. A Conformance Test Suite for Arden Syntax Compilers and Interpreters.

    Science.gov (United States)

    Wolf, Klaus-Hendrik; Klimek, Mike

    2016-01-01

    The Arden Syntax for Medical Logic Modules is a standardized and well-established programming language for representing medical knowledge. No public test suite exists for testing the compliance level of existing compilers and interpreters. This paper presents research that transforms the specification into a set of unit tests, represented in JUnit. It further reports on the use of the test suite to test four different Arden Syntax processors. The presented and compared results reveal the conformance status of the tested processors. Two examples describe how test-driven development of Arden Syntax processors can help increase compliance with the standard. Finally, some considerations on how an open-source test suite can improve the development and distribution of the Arden Syntax are presented.
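
    The specification-to-unit-test idea can be sketched as follows (in Python's unittest rather than JUnit; evaluate_mlm is a hypothetical processor entry point, and null propagation is one of the Arden Syntax behaviours such tests pin down):

      import unittest

      def evaluate_mlm(expression):
          # stand-in for a real Arden Syntax processor; toy cases only
          if "null" in expression:
              return None              # Arden: arithmetic with null is null
          return eval(expression)      # toy arithmetic

      class ArdenArithmeticSpec(unittest.TestCase):
          def test_addition_of_numbers(self):
              self.assertEqual(evaluate_mlm("1 + 2"), 3)

          def test_addition_with_null_yields_null(self):
              self.assertIsNone(evaluate_mlm("1 + null"))

      if __name__ == "__main__":
          unittest.main()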

  11. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-11-01

    This paper reviews the role of human resource management (HRM) which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  12. Benchmarking motion planning algorithms for bin-picking applications

    DEFF Research Database (Denmark)

    Iversen, Thomas Fridolin; Ellekilde, Lars-Peter

    2017-01-01

    planning algorithms to identify which are most suited in the given context. Design/methodology/approach: The paper presents a selection of motion planning algorithms and defines benchmarks based on three different bin-picking scenarios. The evaluation is done based on a fixed set of tasks, which are planned and executed on a real and a simulated robot. Findings: The benchmarking shows a clear difference between the planners and generally indicates that algorithms integrating optimization, despite longer planning time, perform better due to a faster execution. Originality/value: The originality of this work lies in the selected set of planners and the specific choice of application. Most new planners are only compared to existing methods for specific applications chosen to demonstrate the advantages. However, with the specifics of another application, such as bin picking, it is not obvious which planner to choose.

  13. Building America Research Benchmark Definition, Updated December 29, 2004

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, R.

    2005-02-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, the U.S. Department of Energy (DOE) Residential Buildings Program and the National Renewable Energy Laboratory (NREL) developed the Building America Research Benchmark in consultation with the Building America industry teams. The Benchmark is generally consistent with mid-1990s standard practice, as reflected in the Home Energy Rating System (HERS) Technical Guidelines (RESNET 2002), with additional definitions that allow the analyst to evaluate all residential end-uses, an extension of the traditional HERS rating approach that focuses on space conditioning and hot water. A series of user profiles, intended to represent the behavior of a "standard" set of occupants, was created for use in conjunction with the Benchmark.

  14. FLOWTRAN-TF code benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G.P. (ed.)

    1990-12-01

    FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss Of Coolant Accident (LOCA). A description of the code is given by Flach et al. (1990). This report provides benchmarking results for the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit (Smith et al., 1990a; 1990b). Individual constitutive relations are benchmarked in Sections 2 through 5 while in Sections 6 and 7 integral code benchmarking results are presented. An overall assessment of FLOWTRAN-TF for its intended use in computing the ECS power limit completes the document.

  15. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  16. Benchmark of a Cubieboard cluster

    Science.gov (United States)

    Schnepf, M. J.; Gudu, D.; Rische, B.; Fischer, M.; Jung, C.; Hardt, M.

    2015-12-01

    We built a cluster of ARM-based Cubieboard2 single-board computers, each of which has a SATA interface to connect a hard drive. This cluster was set up as a storage system using Ceph and as a compute cluster for high energy physics analyses. To study the performance in these applications, we ran two benchmarks on this cluster. We also checked the energy efficiency of the cluster using the presented benchmarks. Performance and energy efficiency of our cluster were compared with a network-attached storage (NAS), and with a desktop PC.

  17. An XML format for benchmarks in High School Timetabling

    NARCIS (Netherlands)

    Post, Gerhard F.; Ahmadi, Samad; Daskalaki, Sophia; Kingston, Jeffrey H.; Kyngas, Jari; Nurmi, Cimmo; Ranson, David

    2012-01-01

    The High School Timetabling Problem is amongst the most widely used timetabling problems. This problem has varying structures in different high schools even within the same country or educational system. Due to lack of standard benchmarks and data formats this problem has been studied less than

  18. Parameter Curation for Benchmark Queries

    NARCIS (Netherlands)

    Gubichev, Andrey; Boncz, Peter

    2014-01-01

    In this paper we consider the problem of generating parameters for benchmark queries so these have stable behavior despite being executed on datasets (real-world or synthetic) with skewed data distributions and value correlations. We show that uniform random sampling of the substitution parameters

  19. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes a comparison of these websites against a list of criteria and presents a list of services that are most commonly deployed by the selected websites. In addition to that, the investigators proposed a list of services that could be provided via the KAUST library website.

  20. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    Energy Technology Data Exchange (ETDEWEB)

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  1. Estimating the Need for Palliative Radiation Therapy: A Benchmarking Approach

    Energy Technology Data Exchange (ETDEWEB)

    Mackillop, William J., E-mail: william.mackillop@krcc.on.ca [Cancer Care and Epidemiology, Queen's Cancer Research Institute, Queen's University, Kingston, Ontario (Canada); Department of Public Health Sciences, Queen's University, Kingston, Ontario (Canada); Department of Oncology, Queen's University, Kingston, Ontario (Canada); Kong, Weidong [Cancer Care and Epidemiology, Queen's Cancer Research Institute, Queen's University, Kingston, Ontario (Canada)

    2016-01-01

    Purpose: Palliative radiation therapy (PRT) benefits many patients with incurable cancer, but the overall need for PRT is unknown. Our primary objective was to estimate the appropriate rate of use of PRT in Ontario. Methods and Materials: The Ontario Cancer Registry identified patients who died of cancer in Ontario between 2006 and 2010. Comprehensive RT records were linked to the registry. Multivariate analysis identified social and health system-related factors affecting the use of PRT, enabling us to define a benchmark population of patients with unimpeded access to PRT. The proportion of cases treated at any time (PRT_lifetime), the proportion of cases treated in the last 2 years of life (PRT_2y), and the number of courses of PRT per thousand cancer deaths were measured in the benchmark population. These benchmarks were standardized to the characteristics of the overall population, and province-wide PRT rates were then compared to benchmarks. Results: Cases diagnosed at hospitals with no RT on-site, residents of poorer communities, and those who lived farther from an RT center were significantly less likely than others to receive PRT. However, availability of RT at the diagnosing hospital was the dominant factor. Neither socioeconomic status nor distance from home to nearest RT center had a significant effect on the use of PRT in patients diagnosed at a hospital with RT facilities. The benchmark population therefore consisted of patients diagnosed at a hospital with RT facilities. The standardized benchmark for PRT_lifetime was 33.9%, and the corresponding province-wide rate was 28.5%. The standardized benchmark for PRT_2y was 32.4%, and the corresponding province-wide rate was 27.0%. The standardized benchmark for the number of courses of PRT per thousand cancer deaths was 652, and the corresponding province-wide rate was 542. Conclusions: Approximately one-third of patients who die of cancer in Ontario need PRT, but many of them are never
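
    The standardization step described above is ordinary direct standardization: a rate observed in the benchmark population is re-weighted by the case mix of the overall population before comparison. An illustration with invented strata:

      strata = ["lung", "breast", "prostate", "other"]
      benchmark_rate = {"lung": 0.45, "breast": 0.30,
                        "prostate": 0.25, "other": 0.28}   # invented rates
      overall_mix    = {"lung": 0.25, "breast": 0.15,
                        "prostate": 0.10, "other": 0.50}   # invented case mix

      standardized = sum(benchmark_rate[s] * overall_mix[s] for s in strata)
      print(f"standardized benchmark PRT rate: {standardized:.3f}")  # 0.322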

  2. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

    Tourism development has an irreplaceable role in the regional policy of almost all countries. This is due to its undeniable benefits for the local population with regard to the economic, social and environmental spheres. Tourist destinations compete for visitors in the tourism market and subsequently enter a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and final strategies is a key factor of competitiveness. Even though the tourism sector is not the typical field where benchmarking methods are widely used, such approaches can be successfully applied. The paper focuses on a key phase of the benchmarking process, which lies in the search for suitable reference partners. The partners are consequently selected to meet general requirements that ensure the quality of strategies. Following from this, some specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies of regions in the Czech Republic, Slovakia and Great Britain. In this way it validates the selected criteria in an international context. Hence, it makes it possible to find strengths and weaknesses of the selected strategies and at the same time facilitates the discovery of suitable benchmarking partners.

  3. The BRITNeY Suite Animation Tool

    DEFF Research Database (Denmark)

    Westergaard, Michael; Lassen, Kristian Bisgaard

    2006-01-01

    This paper describes the BRITNeY suite, a tool which enables users to create visualizations of formal models. The BRITNeY suite is integrated with CPN Tools, and we give an example of how to extend a simple stop-and-wait protocol with a visualization in the form of message sequence charts. We also show examples of animations created during industrial projects to give an impression of what is possible with the BRITNeY suite.

  4. BENCHMARKING LEARNER EDUCATION USING ONLINE BUSINESS SIMULATION

    Directory of Open Access Journals (Sweden)

    Alfred H. Miller

    2016-06-01

    For programmatic accreditation by the Accreditation Council of Business Schools and Programs (ACBSP), business programs are required to meet STANDARD #4, Measurement and Analysis of Student Learning and Performance. Business units must demonstrate that outcome assessment systems are in place, using documented evidence that shows how the results are being used to further develop or improve the academic business program. The Higher Colleges of Technology, a 17-campus federal university in the United Arab Emirates, differentiates its applied degree programs through a 'learning by doing' ethos, which permeates the entire curriculum. This paper documents the benchmarking of education for managing innovation. Using a business simulation in a Year 3 business strategy class for Bachelor of Business learners, student teams explored, in a simulated environment, the following functional areas: research and development, production, and marketing of a technology product. Teams were required to use finite resources and compete against other student teams in the same universe. The study employed an instrument developed in a 60-sample pilot study of business simulation learners, against which subsequent learners participating in the online business simulation could be benchmarked. The results showed incremental improvement in the program due to changes made in assessment strategies, including the oral defense.

  5. Building America Research Benchmark Definition: Updated December 20, 2007

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, R.

    2008-01-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams. The Benchmark is generally consistent with mid-1990s standard practice, as reflected in the Home Energy Rating System (HERS) Technical Guidelines (RESNET 2002), with additional definitions that allow the analyst to evaluate all residential end-uses, an extension of the traditional HERS rating approach that focuses on space conditioning and hot water. Unlike the reference homes used for HERS, EnergyStar, and most energy codes, the Benchmark represents typical construction at a fixed point in time so it can be used as the basis for Building America's multi-year energy savings goals without the complication of chasing a 'moving target'.

  6. Building America Research Benchmark Definition, Updated December 15, 2006

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, R.

    2007-01-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams. The Benchmark is generally consistent with mid-1990s standard practice, as reflected in the Home Energy Rating System (HERS) Technical Guidelines (RESNET 2002), with additional definitions that allow the analyst to evaluate all residential end-uses, an extension of the traditional HERS rating approach that focuses on space conditioning and hot water. Unlike the reference homes used for HERS, EnergyStar, and most energy codes, the Benchmark represents typical construction at a fixed point in time so it can be used as the basis for Building America's multi-year energy savings goals without the complication of chasing a 'moving target'.

  7. Building America Research Benchmark Definition: Updated August 15, 2007

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, R.

    2007-09-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams. The Benchmark is generally consistent with mid-1990s standard practice, as reflected in the Home Energy Rating System (HERS) Technical Guidelines (RESNET 2002), with additional definitions that allow the analyst to evaluate all residential end-uses, an extension of the traditional HERS rating approach that focuses on space conditioning and hot water. Unlike the reference homes used for HERS, EnergyStar, and most energy codes, the Benchmark represents typical construction at a fixed point in time so it can be used as the basis for Building America's multi-year energy savings goals without the complication of chasing a 'moving target'.

  8. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used in services across the National Health Service (NHS) through various benchmarking programs. Clinical photography services do not have such a program in place and have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. It highlighted valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  9. Testing of Space Suit Materials for Mars

    Science.gov (United States)

    Larson, Kristine

    2016-01-01

    Human missions to Mars may require radical changes in our approach to EVA suit design. A major challenge is the balance of building a suit robust enough to complete 50 EVAs in the dirt under intense UV exposure without losing mechanical strength or compromising its mobility. We conducted ground testing on both current and new space suit materials to determine performance degradation after exposure to 2500 hours of Mars mission equivalent UV. This testing will help mature the material technologies and provide performance data that can be used by not only the space suit development teams but for all Mars inflatable and soft goods derived structures from airlocks to habitats.

  10. WWER-1000 Burnup Credit Benchmark (CB5)

    International Nuclear Information System (INIS)

    Manolova, M.A.

    2002-01-01

    In the paper the specification of the first phase (depletion calculations) of the WWER-1000 Burnup Credit Benchmark is given. The second phase - criticality calculations for the WWER-1000 fuel pin cell - will be given after the evaluation of the results obtained in the first phase. The proposed benchmark is a continuation of the WWER benchmark activities in this field (Author)

  11. A Framework for Urban Transport Benchmarking

    OpenAIRE

    Theuns Henning; Mohammed Dalil Essakali; Jung Eun Oh

    2011-01-01

    This report summarizes the findings of a study aimed at exploring key elements of a benchmarking framework for urban transport. Unlike many industries where benchmarking has proven to be successful and straightforward, the multitude of actors and interactions involved in urban transport systems may make benchmarking a complex endeavor. It was therefore important to analyze what has been...

  12. Surgical Space Suits Increase Particle and Microbiological Emission Rates in a Simulated Surgical Environment.

    Science.gov (United States)

    Vijaysegaran, Praveen; Knibbs, Luke D; Morawska, Lidia; Crawford, Ross W

    2018-05-01

    The role of space suits in the prevention of orthopedic prosthetic joint infection remains unclear. Recent evidence suggests that space suits may in fact contribute to increased infection rates, with bioaerosol emissions from space suits identified as a potential cause. This study aimed to compare the particle and microbiological emission rates (PER and MER) of space suits and standard surgical clothing. A comparison of emission rates between space suits and standard surgical clothing was performed in a simulated surgical environment during 5 separate experiments. Particle counts were analyzed with 2 separate particle counters capable of detecting particles between 0.1 and 20 μm. An Andersen impactor was used to sample bacteria, with culture counts performed at 24 and 48 hours. Four experiments consistently showed statistically significant increases in both PER and MER when space suits are used compared with standard surgical clothing. One experiment showed inconsistent results, with a trend toward increases in both PER and MER when space suits are used compared with standard surgical clothing. Space suits cause increased PER and MER compared with standard surgical clothing. This finding provides mechanistic evidence to support the increased prosthetic joint infection rates observed in clinical studies. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. The development of code benchmarks

    International Nuclear Information System (INIS)

    Glass, R.E.

    1986-01-01

    Sandia National Laboratories has undertaken a code benchmarking effort to define a series of cask-like problems having both numerical solutions and experimental data. The development of the benchmarks includes: (1) model problem definition, (2) code intercomparison, and (3) experimental verification. The first two steps are complete and a series of experiments are planned. The experiments will examine the elastic/plastic behavior of cylinders for both the end and side impacts resulting from a nine meter drop. The cylinders will be made from stainless steel and aluminum to give a range of plastic deformations. This paper presents the results of analyses simulating the model's behavior using materials properties for stainless steel and aluminum

  14. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  15. Enhanced Verification Test Suite for Physics Simulation Codes

    Energy Technology Data Exchange (ETDEWEB)

    Kamm, J R; Brock, J S; Brandon, S T; Cotrell, D L; Johnson, B; Knupp, P; Rider, W; Trucano, T; Weirs, V G

    2008-10-10

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations. The key points of this document are: (1) Verification deals with mathematical correctness of the numerical algorithms in a code, while validation deals with physical correctness of a simulation in a regime of interest. This document is about verification. (2) The current seven-problem Tri-Laboratory Verification Test Suite, which has been used for approximately five years at the DOE WP laboratories, is limited. (3) Both the methodology for and technology used in verification analysis have evolved and been improved since the original test suite was proposed. (4) The proposed test problems are in three basic areas: (a) Hydrodynamics; (b) Transport processes; and (c) Dynamic strength-of-materials. (5) For several of the proposed problems we provide a 'strong sense verification benchmark', consisting of (i) a clear mathematical statement of the problem with sufficient information to run a computer simulation, (ii) an explanation of how the code result and benchmark solution are to be evaluated, and (iii) a description of the acceptance criterion for simulation code results. (6) It is proposed that the set of verification test problems with which any particular code be evaluated include some of the problems described in this document. Analysis of the proposed verification test problems constitutes part of a necessary--but not sufficient--step that builds confidence in physics and engineering simulation codes. More complicated test cases, including physics models of

  16. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  17. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the associated dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  18. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the associated dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models

  19. Closed-Loop Neuromorphic Benchmarks

    Directory of Open Access Journals (Sweden)

    Terrence C Stewart

    2015-12-01

    Full Text Available Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware’s future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of minimal simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled.
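
    As a purely illustrative sketch of the closed-loop idea described above (not the authors' framework), the following Python fragment shows the general shape of such a benchmark: the controller under test drives a toy plant whose state feeds back into the controller, and the accumulated tracking error serves as the score. All names and constants here are hypothetical.

      import random

      def run_closed_loop(controller, steps=1000, dt=0.01):
          # The controller under test drives a toy first-order plant whose
          # state is fed back as the controller's next input (closed loop).
          state, target, total_error = 0.0, 1.0, 0.0
          for _ in range(steps):
              error = target - state
              command = controller(error)                 # device/model under test
              disturbance = random.uniform(-0.05, 0.05)   # unknown external force
              state += dt * (command + disturbance)
              total_error += abs(error) * dt
          return total_error                              # lower score is better

      # Benchmark a simple proportional controller (hypothetical example)
      score = run_closed_loop(lambda error: 2.0 * error)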

  20. Analytical Radiation Transport Benchmarks for The Next Century

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    2005-01-01

    Verification of large-scale computational algorithms used in nuclear engineering and radiological applications is an essential element of reliable code performance. For this reason, the development of a suite of multidimensional semi-analytical benchmarks has been undertaken to provide independent verification of proper operation of codes dealing with the transport of neutral particles. The benchmarks considered cover several one-dimensional, multidimensional, monoenergetic and multigroup, fixed source and critical transport scenarios. The first approach is based on the Green's function. In slab geometry, the Green's function is incorporated into a set of integral equations for the boundary fluxes. Through a numerical Fourier transform inversion and subsequent matrix inversion for the boundary fluxes, a semi-analytical benchmark emerges. Multidimensional solutions in a variety of infinite media are also based on the slab Green's function. In a second approach, a new converged SN method is developed. In this method, the SN solution is "mined" to bring out hidden high-quality solutions. For this case multigroup fixed source and criticality transport problems are considered. Remarkably accurate solutions can be obtained with this new method, called the Multigroup Converged SN (MGCSN) method, as will be demonstrated
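
    For orientation, the monoenergetic, isotropically scattering slab-geometry transport equation that such semi-analytical benchmarks address can be written as (a standard textbook form, not quoted from the paper):

      \[ \mu\,\frac{\partial\psi}{\partial x}(x,\mu) + \sigma_t\,\psi(x,\mu) = \frac{\sigma_s}{2}\int_{-1}^{1}\psi(x,\mu')\,d\mu' + \frac{Q(x)}{2} \]

    where ψ is the angular flux, σ_t and σ_s are the total and scattering cross sections, and Q is an isotropic source; the slab Green's function mentioned above is the solution of this equation for a localized source.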

  1. Sibelius. Karelia Suite, Op. 11 / Robert Layton

    Index Scriptorium Estoniae

    Layton, Robert

    1996-01-01

    On the new recording "Sibelius. Karelia Suite, Op. 11. Luonnotar, Op. 70a. Andante festivo. The Oceanides, Op. 73. King Christian II, Op. 27-Suite. Finlandia, Op. 26a. Gothenburg Symphony Orchestra, Neeme Järvi" DG 447 760-2GH (72 minutes: DDD)

  2. Adapting benchmarking to project management : an analysis of project management processes, metrics, and benchmarking process models

    OpenAIRE

    Emhjellen, Kjetil

    1997-01-01

    Doctoral thesis (dr.ing.) - Høgskolen i Telemark / Norges teknisk-naturvitenskapelige universitet. Since the first publication on benchmarking in 1989 by Robert C. Camp, “Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance”, the improvement technique benchmarking has been established as an important tool in the process-focused manufacturing or production environment. The use of benchmarking has expanded to other types of industry. Benchmarking has passed t...

  3. Guideline for benchmarking thermal treatment systems for low-level mixed waste

    International Nuclear Information System (INIS)

    Hoffman, D.P.; Gibson, L.V. Jr.; Hermes, W.H.; Bastian, R.E.; Davis, W.T.

    1994-01-01

    A process for benchmarking low-level mixed waste (LLMW) treatment technologies has been developed. When used in conjunction with the identification and preparation of surrogate waste mixtures, and with defined quality assurance and quality control procedures, the benchmarking process will effectively streamline the selection of treatment technologies being considered by the US Department of Energy (DOE) for LLMW cleanup and management. Following the quantitative template provided in the benchmarking process will greatly increase the technical information available for the decision-making process. The additional technical information will remove a large part of the uncertainty in the selection of treatment technologies. It is anticipated that the use of the benchmarking process will minimize technology development costs and overall treatment costs. In addition, the benchmarking process will enhance development of the most promising LLMW treatment processes and aid in transferring the technology to the private sector. To instill inherent quality, the benchmarking process is based on defined criteria and a structured evaluation format, which are independent of any specific conventional treatment or emerging process technology. Five categories of benchmarking criteria have been developed for the evaluation: operation/design; personnel health and safety; economics; product quality; and environmental quality. This benchmarking document gives specific guidance on what information should be included and how it should be presented. A standard format for reporting is included in Appendices A and B of this document. Special considerations for LLMW are presented and included in each of the benchmarking categories

  4. Evaporation-Cooled Protective Suits for Firefighters

    Science.gov (United States)

    Weinstein, Leonard Murray

    2007-01-01

    Suits cooled by evaporation of water have been proposed as improved means of temporary protection against high temperatures near fires. When air temperature exceeds 600 F (316 C) or in the presence of radiative heating from nearby sources at temperatures of 1,200 F (649 C) or more, outer suits now used by firefighters afford protection for only a few seconds. The proposed suits would exploit the high latent heat of vaporization of water to satisfy a need to protect against higher air temperatures and against radiant heating for significantly longer times. These suits would be fabricated and operated in conjunction with breathing and cooling systems like those with which firefighting suits are now equipped

  5. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement of the original benchmark book, which was first published in February, 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December, 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February, 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below followed by the contributors to the earlier editions of the benchmark book.

  6. Variable Vector Countermeasure Suit (V2Suit) for Space Habitation and Exploration Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The Variable Vector Countermeasure Suit (V2Suit) is a specialized spacesuit designed to keep astronauts healthy during long-duration space exploration missions and...

  7. Toxicological benchmarks for screening potential contaminants of concern for effects on terrestrial plants. Environmental Restoration Program

    Energy Technology Data Exchange (ETDEWEB)

    Suter, G.W. II; Will, M.E.; Evans, C.

    1993-09-01

    One of the initial stages in ecological risk assessment for hazardous waste sites is the screening of contaminants to determine which of them are worthy of further consideration as "contaminants of potential concern." This process is termed "contaminant screening." It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 34 chemicals potentially associated with US Department of Energy (DOE) sites. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern. The purpose of this report is to present plant toxicity data and discuss their utility as benchmarks for determining the hazard to terrestrial plants caused by contaminants in soil. Benchmarks are provided for soils and solutions.
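
    The screening rule stated above (flag a chemical only if its measured concentration exceeds both the phytotoxicity benchmark and the background concentration) is simple enough to express directly. The following Python sketch is illustrative only; the chemical names and concentrations are hypothetical, not taken from the report.

      def contaminants_of_potential_concern(measured, benchmark, background):
          # Flag a chemical only if its measured soil concentration exceeds
          # BOTH the phytotoxicity benchmark AND the background concentration.
          return [chem for chem, conc in measured.items()
                  if conc > benchmark.get(chem, float("inf"))
                  and conc > background.get(chem, 0.0)]

      # Hypothetical concentrations (mg/kg), for illustration only
      measured   = {"zinc": 300.0, "arsenic": 5.0}
      benchmark  = {"zinc":  50.0, "arsenic": 10.0}
      background = {"zinc": 100.0, "arsenic": 8.0}
      print(contaminants_of_potential_concern(measured, benchmark, background))
      # -> ['zinc']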

  8. Toxicological benchmarks for screening potential contaminants of concern for effects on terrestrial plants

    International Nuclear Information System (INIS)

    Suter, G.W. II; Will, M.E.; Evans, C.

    1993-09-01

    One of the initial stages in ecological risk assessment for hazardous waste sites is the screening of contaminants to determine which of them are worthy of further consideration as "contaminants of potential concern." This process is termed "contaminant screening." It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 34 chemicals potentially associated with US Department of Energy (DOE) sites. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern. The purpose of this report is to present plant toxicity data and discuss their utility as benchmarks for determining the hazard to terrestrial plants caused by contaminants in soil. Benchmarks are provided for soils and solutions

  9. Z-1 Prototype Space Suit Testing Summary

    Science.gov (United States)

    Ross, Amy

    2013-01-01

    The Advanced Space Suit team of the NASA-Johnson Space Center performed a series of tests with the Z-1 prototype space suit in 2012. This paper discusses, at a summary level, the tests performed and results from those tests. The purpose of the tests was two-fold: 1) characterize the suit performance so that the data could be used in the downselection of components for the Z-2 Space Suit and 2) develop interfaces with the suitport and exploration vehicles through pressurized suit evaluations. Tests performed included isolated and functional range of motion data capture, Z-1 waist and hip testing, joint torque testing, CO2 washout testing, fit checks and subject familiarizations, an exploration vehicle aft deck and suitport controls interface evaluation, delta pressure suitport tests including pressurized suit don and doff, and gross mobility and suitport ingress and egress demonstrations in reduced gravity. Lessons learned specific to the Z-1 prototype and to suit testing techniques will be presented.

  10. NEACRP thermal fission product benchmark

    International Nuclear Information System (INIS)

    Halsall, M.J.; Taubman, C.J.

    1989-09-01

    The objective of the thermal fission product benchmark was to compare the range of fission product data in use at the present time. A simple homogeneous problem was set with 200 atoms H/1 atom U235, to be burnt up to 1000 days and then decay for 1000 days. The problem was repeated with 200 atoms H/1 atom Pu239, 20 atoms H/1 atom U235 and 20 atoms H/1 atom Pu239. There were ten participants and the submissions received are detailed in this report. (author)

  11. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we did see potential solutions to some of our "top 10" issues, and (2) we now have an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes, and (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) desires to participate in our training and provide feedback on procedures, and (2) a welcomed opportunity to provide feedback on working with NASA.

  12. Oracle SOA Suite 11g performance cookbook

    CERN Document Server

    Brasier, Matthew; Wright, Nicholas

    2013-01-01

    This is a Cookbook with interesting, hands-on recipes, giving detailed descriptions and lots of practical walkthroughs for boosting the performance of your Oracle SOA Suite.This book is for Oracle SOA Suite 11g administrators, developers, and architects who want to understand how they can maximise the performance of their SOA Suite infrastructure. The recipes contain easy to follow step-by-step instructions and include many helpful and practical tips. It is suitable for anyone with basic operating system and application server administration experience.

  13. Deploy Nalu/Kokkos algorithmic infrastructure with performance benchmarking.

    Energy Technology Data Exchange (ETDEWEB)

    Domino, Stefan P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ananthan, Shreyas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Knaus, Robert C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williams, Alan B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-29

    assembly timings faster than those observed on the Haswell architecture. The computational workload of higher-order meshes, therefore, seems ideally suited for the many-core architecture and justifies further exploration of higher-order on NGP platforms. A Trilinos/Tpetra-based multi-threaded GMRES preconditioned by symmetric Gauss Seidel (SGS) represents the core solver infrastructure for the low-Mach advection/diffusion implicit solves. The threaded solver stack has been tested on small problems on NREL's Peregrine system using the newly developed and deployed Kokkos-view/SIMD kernels. Efforts are underway to deploy the Tpetra-based solver stack on the NERSC Cori system to benchmark its performance at scale on KNL machines.

  14. The ESA's Space Trajectory Analysis software suite

    Science.gov (United States)

    Ortega, Guillermo

    The European Space Agency (ESA) initiated in 2005 an internal activity to develop an open source software suite involving university science departments and research institutions all over the world. This project is called "Space Trajectory Analysis", or STA. This article describes the birth of STA and its present configuration. One of the STA aims is to promote the exchange of technical ideas and raise knowledge and competence in the areas of applied mathematics, space engineering, and informatics at the university level. Conceived as a research and education tool to support the analysis phase of a space mission, STA is able to visualize a wide range of space trajectories. These include, among others, ascent, re-entry, descent and landing trajectories, orbits around planets and moons, interplanetary trajectories, rendezvous trajectories, etc. The article explains that the STA project is an original idea of the Technical Directorate of ESA. It was born in August 2005 to provide a framework for astrodynamics research at the university level. As research and education software applicable to academia, a number of universities support this development by joining ESA in leading it. The partnership between ESA and the universities is expressed in the STA Steering Board: together with ESA, each university holds a chair on the board, whose tasks are to develop, control, promote, maintain, and expand the software suite. The article describes that STA provides calculations in the fields of spacecraft tracking, attitude analysis, coverage and visibility analysis, orbit determination, position and velocity of solar system bodies, etc. STA implements the concept of a "space scenario" composed of solar system bodies, spacecraft, ground stations, pads, etc. It is able to propagate the orbit of a spacecraft where orbital propagators are included. STA is able to compute communication links between objects of a scenario (coverage, line of sight), and to represent the trajectory computations and

  15. Reevaluation of the Jezebel Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-03-10

    Every nuclear engineering student is familiar with Jezebel, the homogeneous bare sphere of plutonium first assembled at Los Alamos in 1954-1955. The actual Jezebel assembly was neither homogeneous, nor bare, nor spherical; nor was it singular – there were hundreds of Jezebel configurations assembled. The Jezebel benchmark has been reevaluated for the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Logbooks, original drawings, mass accountability statements, internal reports, and published reports have been used to model four actual three-dimensional Jezebel assemblies with high fidelity. Because the documentation available today is often inconsistent, three major assumptions were made regarding plutonium part masses and dimensions. The first was that the assembly masses given in Los Alamos report LA-4208 (1969) were correct, and the second was that the original drawing dimension for the polar height of a certain major part was correct. The third assumption was that a change notice indicated on the original drawing was not actually implemented. This talk will describe these assumptions, the alternatives, and the implications. Since the publication of the 2013 ICSBEP Handbook, the actual masses of the major components have turned up. Our assumption regarding the assembly masses was proven correct, but we had the mass distribution incorrect. Work to incorporate the new information is ongoing, and this talk will describe the latest assessment.

  16. Strauss: Der Rosenkavalier - Suite / Michael Kennedy

    Index Scriptorium Estoniae

    Kennedy, Michael

    1990-01-01

    On the new recording "Strauss: Der Rosenkavalier - Suite, Salome-Dance of the seven veils, Capriccio-Prelude, Intermezzo, Morgen Mittag um elf! Felicity Lott, Scottish National Orchestra, Neeme Järvi" Chandos ABRD 1397. ABTD 1397. CHAN 8758

  17. Sensor Suits for Human Motion Detection

    National Research Council Canada - National Science Library

    Feng, Maria Q

    2006-01-01

    An innovative sensor suit is developed, which can be conveniently put on by an operator to detect his or her motion intention by non-invasively monitoring his or her muscle conditions such as the shape...

  18. Sensor Suits for Human Motion Detection

    National Research Council Canada - National Science Library

    Feng, Maria Q

    2006-01-01

    ... shape, the stiffness and the density. This sensor suit is made of soft and elastic fabrics embedded with arrays of MEMS sensors such as muscle stiffness sensor, ultrasonic sensors, accelerometers and optical fiber sensors, to measure...

  19. Coupled Human-Space Suit Mobility Studies

    Data.gov (United States)

    National Aeronautics and Space Administration — Current EVA mobility studies only allow for comparisons of how the suit moves when actuated by a human and how the human moves when unsuited. There are now new...

  20. OpenMP 4.5 Validation and Verification Suite

    Energy Technology Data Exchange (ETDEWEB)

    2017-12-15

    OpenMP, a directive-based programming API, introduces directives for accelerator devices that programmers are starting to use more frequently in production codes. To make sure OpenMP directives work correctly across architectures, it is critical to have a mechanism that tests an implementation's conformance to the OpenMP standard. This testing process can uncover ambiguities in the OpenMP specification, which helps compiler developers and users make better use of the standard. We fill this gap through our validation and verification test suite, which focuses on the offload directives available in OpenMP 4.5.

  1. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  2. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision...

  3. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  4. Science Driven Supercomputing Architectures: AnalyzingArchitectural Bottlenecks with Applications and Benchmark Probes

    Energy Technology Data Exchange (ETDEWEB)

    Kamil, S.; Yelick, K.; Kramer, W.T.; Oliker, L.; Shalf, J.; Shan,H.; Strohmaier, E.

    2005-09-26

    There is a growing gap between the peak speed of parallel computing systems and the actual delivered performance for scientific applications. In general this gap is caused by inadequate architectural support for the requirements of modern scientific applications, as commercial applications, and the much larger market they represent, have driven the evolution of computer architectures. This gap has raised the importance of developing better benchmarking methodologies to characterize and to understand the performance requirements of scientific applications, to communicate them efficiently to influence the design of future computer architectures. This improved understanding of the performance behavior of scientific applications will allow improved performance predictions, development of adequate benchmarks for identification of hardware and application features that work well or poorly together, and a more systematic performance evaluation in procurement situations. The Berkeley Institute for Performance Studies has developed a three-level approach to evaluating the design of high end machines and the software that runs on them: 1) A suite of representative applications; 2) A set of application kernels; and 3) Benchmarks to measure key system parameters. The three levels yield different types of information, all of which are useful in evaluating systems, and enable NSF and DOE centers to select computer architectures more suited for scientific applications. The analysis will further allow the centers to engage vendors in discussion of strategies to alleviate the present architectural bottlenecks using quantitative information. These may include small hardware changes or larger ones that may be of interest to non-scientific workloads. Providing quantitative models to the vendors allows them to assess the benefits of technology alternatives using their own internal cost-models in the broader marketplace, ideally facilitating the development of future computer architectures more suited for scientific

  5. Science Driven Supercomputing Architectures: AnalyzingArchitectural Bottlenecks with Applications and Benchmark Probes

    Energy Technology Data Exchange (ETDEWEB)

    Kamil, S.; Yelick, K.; Kramer, W.T.; Oliker, L.; Shalf, J.; Shan,H.; Strohmaier, E.

    2005-09-26

    There is a growing gap between the peak speed of parallel computing systems and the actual delivered performance for scientific applications. In general this gap is caused by inadequate architectural support for the requirements of modern scientific applications, as commercial applications, and the much larger market they represent, have driven the evolution of computer architectures. This gap has raised the importance of developing better benchmarking methodologies to characterize and to understand the performance requirements of scientific applications, to communicate them efficiently to influence the design of future computer architectures. This improved understanding of the performance behavior of scientific applications will allow improved performance predictions, development of adequate benchmarks for identification of hardware and application features that work well or poorly together, and a more systematic performance evaluation in procurement situations. The Berkeley Institute for Performance Studies has developed a three-level approach to evaluating the design of high end machines and the software that runs on them: (1) A suite of representative applications; (2) A set of application kernels; and (3) Benchmarks to measure key system parameters. The three levels yield different types of information, all of which are useful in evaluating systems, and enable NSF and DOE centers to select computer architectures more suited for scientific applications. The analysis will further allow the centers to engage vendors in discussion of strategies to alleviate the present architectural bottlenecks using quantitative information. These may include small hardware changes or larger ones that may be of interest to non-scientific workloads. Providing quantitative models to the vendors allows them to assess the benefits of technology alternatives using their own internal cost-models in the broader marketplace, ideally facilitating the development of future computer

  6. The Inverted Pendulum Benchmark in Nonlinear Control Theory: A Survey

    Directory of Open Access Journals (Sweden)

    Olfa Boubaker

    2013-05-01

    Full Text Available Abstract For at least fifty years, the inverted pendulum has been the most popular benchmark, among others, in nonlinear control theory. The fundamental focus of this work is to enhance the wealth of this robotic benchmark and provide an overall picture of historical and current trend developments in nonlinear control theory, based on its simple structure and its rich nonlinear model. In this review, we will try to explain the high popularity of such a robotic benchmark, which is frequently used to realize experimental models, validate the efficiency of emerging control techniques and verify their implementation. We also attempt to provide details on how many standard techniques in control theory fail when tested on such a benchmark. More than 100 references in the open literature, dating back to 1960, are compiled to provide a survey of emerging ideas and challenging problems in nonlinear control theory accomplished and verified using this robotic system. Possible future trends that we can envision based on the review of this area are also presented.
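
    For reference, the rich nonlinear model alluded to above is, in its most common frictionless cart-pole form (one standard textbook variant, not necessarily the exact model used in every surveyed paper):

      \[ (M+m)\,\ddot{x} + m l\,\ddot{\theta}\cos\theta - m l\,\dot{\theta}^{2}\sin\theta = F, \qquad m l\,\ddot{x}\cos\theta + m l^{2}\,\ddot{\theta} - m g l\sin\theta = 0 \]

    where M is the cart mass, m the pendulum mass, l the pendulum length, θ the angle from the upright position, and F the horizontal force applied to the cart.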

  7. Generating Shifting Workloads to Benchmark Adaptability in Relational Database Systems

    Science.gov (United States)

    Rabl, Tilmann; Lang, Andreas; Hackl, Thomas; Sick, Bernhard; Kosch, Harald

    A large body of research concerns the adaptability of database systems. Many commercial systems already contain autonomic processes that adapt configurations as well as data structures and data organization. Yet there is virtually no possibility for a fair measurement of the quality of such optimizations. While standard benchmarks have been developed that simulate real-world database applications very precisely, none of them considers variations in workloads produced by human factors. Today’s benchmarks test the performance of database systems by measuring peak performance on homogeneous request streams. Nevertheless, in systems with user interaction access patterns are constantly shifting. We present a benchmark that simulates a web information system with interaction of large user groups. It is based on the analysis of a real online eLearning management system with 15,000 users. The benchmark considers the temporal dependency of user interaction. The main focus is to measure the adaptability of a database management system according to shifting workloads. We will give details on our design approach, which uses sophisticated pattern analysis and data mining techniques.
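
    A minimal sketch of the kind of shifting workload the benchmark describes can be generated by drawing request types from a mixture whose weights change over time. The Python fragment below is illustrative only; the phase structure and request types are hypothetical, not the benchmark's actual specification.

      import random

      def shifting_workload(n_requests, phases):
          # 'phases' is a list of (fraction_of_run, {request_type: weight})
          # pairs; the mixture weights shift from phase to phase, emulating
          # drifting user behaviour rather than a homogeneous request stream.
          for fraction, weights in phases:
              types, w = zip(*weights.items())
              for _ in range(int(n_requests * fraction)):
                  yield random.choices(types, weights=w)[0]

      # Hypothetical two-phase day: read-heavy morning, write-heavy evening
      phases = [(0.5, {"read": 0.8, "write": 0.2}),
                (0.5, {"read": 0.3, "write": 0.7})]
      requests = list(shifting_workload(1000, phases))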

  8. A biosegmentation benchmark for evaluation of bioimage analysis methods

    Directory of Open Access Journals (Sweden)

    Kvilekval Kristian

    2009-11-01

    Full Text Available Abstract Background We present a biosegmentation benchmark that includes infrastructure, datasets with associated ground truth, and validation methods for biological image analysis. The primary motivation for creating this resource comes from the fact that it is very difficult, if not impossible, for an end-user to choose from the wide range of segmentation methods available in the literature for a particular bioimaging problem. No single algorithm is likely to be equally effective on a diverse set of images, and each method has its own strengths and limitations. We hope that our benchmark resource will be of considerable help to both the bioimaging researchers looking for novel image processing methods and the image processing researchers exploring application of their methods to biology. Results Our benchmark consists of different classes of images and ground truth data, ranging in scale from subcellular, cellular to tissue level, each of which pose their own set of challenges to image analysis. The associated ground truth data can be used to evaluate the effectiveness of different methods, to improve methods and to compare results. Standard evaluation methods and some analysis tools are integrated into a database framework that is available online at http://bioimage.ucsb.edu/biosegmentation/. Conclusion This online benchmark will facilitate integration and comparison of image analysis methods for bioimages. While the primary focus is on biological images, we believe that the dataset and infrastructure will be of interest to researchers and developers working with biological image analysis, image segmentation and object tracking in general.
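
    As an example of the kind of evaluation such ground truth enables, one common overlap measure for comparing a predicted segmentation mask against a reference mask is the Dice coefficient (the benchmark's own evaluation methods may differ). A minimal Python implementation:

      import numpy as np

      def dice_coefficient(predicted, reference):
          # Overlap between two binary masks: 1.0 = perfect agreement,
          # 0.0 = no overlap at all.
          predicted = np.asarray(predicted, dtype=bool)
          reference = np.asarray(reference, dtype=bool)
          intersection = np.logical_and(predicted, reference).sum()
          denominator = predicted.sum() + reference.sum()
          return 2.0 * intersection / denominator if denominator else 1.0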

  9. MoleculeNet: a benchmark for molecular machine learning.

    Science.gov (United States)

    Wu, Zhenqin; Ramsundar, Bharath; Feinberg, Evan N; Gomes, Joseph; Geniesse, Caleb; Pappu, Aneesh S; Leswing, Karl; Pande, Vijay

    2018-01-14

    Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than choice of particular learning algorithm.
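
    As a rough sketch of how such a curated benchmark is typically consumed, the fragment below loads a MoleculeNet dataset through the DeepChem library mentioned above and evaluates a graph-convolution model. Exact function names and signatures vary between DeepChem releases, so treat this as an assumption-laden illustration rather than canonical usage.

      # Assumes DeepChem is installed; API details (featurizer names,
      # model classes) differ across DeepChem versions.
      import deepchem as dc

      tasks, datasets, transformers = dc.molnet.load_tox21(featurizer="GraphConv")
      train, valid, test = datasets

      model = dc.models.GraphConvModel(len(tasks), mode="classification")
      model.fit(train, nb_epoch=10)

      metric = dc.metrics.Metric(dc.metrics.roc_auc_score)
      print(model.evaluate(test, [metric], transformers))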

  10. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks, R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem

  11. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision...

  12. Analysis of VENUS-3 benchmark experiment

    International Nuclear Information System (INIS)

    Kodeli, I.; Sartori, E.

    1998-01-01

    The paper presents the revision and the analysis of the VENUS-3 benchmark experiment performed at CEN/SCK, Mol (Belgium). This benchmark was found to be particularly suitable for the validation of current calculation tools like 3-D neutron transport codes, and in particular of the 3D sensitivity and uncertainty analysis code developed within the EFF project. The compilation of the integral experiment was integrated into the SINBAD electronic database for storing and retrieving information about shielding experiments for nuclear systems. SINBAD now includes 33 reviewed benchmark descriptions and several compilations awaiting review, among them many benchmarks relevant for pressure vessel dosimetry system validation. (author)

  13. Synthetic graph generation for data-intensive HPC benchmarking: Scalability, analysis and real-world application

    Energy Technology Data Exchange (ETDEWEB)

    Powers, Sarah S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lothian, Joshua [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2014-12-01

    The benchmarking effort within the Extreme Scale Systems Center at Oak Ridge National Laboratory seeks to provide High Performance Computing benchmarks and test suites of interest to the DoD sponsor. The work described in this report is a part of the effort focusing on graph generation. A previously developed benchmark, SystemBurn, allows the emulation of a broad spectrum of application behavior profiles within a single framework. To complement this effort, similar capabilities are desired for graph-centric problems. This report describes the in-depth analysis of the generated synthetic graphs' properties at a variety of scales using different generator implementations and examines their applicability to replicating real-world datasets.

  14. Benchmark thermal-hydraulic analysis with the Agathe Hex 37-rod bundle

    International Nuclear Information System (INIS)

    Barroyer, P.; Hudina, M.; Huggenberger, M.

    1981-09-01

    Different computer codes are compared in prediction performance, based on the AGATHE HEX 37-rod bundle experimental results. The compilation of all available calculation results allows a critical assessment of the codes. For the time being, it is concluded which codes are best suited for gas-cooled fuel element design purposes. Based on the positive aspects of these cooperative benchmark exercises, an attempt is made to define a computer code verification procedure. (Auth.)

  15. Benchmarking of nuclear economics tools

    International Nuclear Information System (INIS)

    Moore, Megan; Korinny, Andriy; Shropshire, David; Sadhankar, Ramesh

    2017-01-01

    Highlights: • INPRO and GIF economic tools exhibited good alignment in total capital cost estimation. • Subtle discrepancies in the cost result from differences in financing and the fuel cycle assumptions. • A common set of assumptions was found to reduce the discrepancies to 1% or less. • Opportunities for harmonisation of economic tools exist. - Abstract: Benchmarking of the economics methodologies developed by the Generation IV International Forum (GIF) and the International Atomic Energy Agency’s International Project on Innovative Nuclear Reactors and Fuel Cycles (INPRO) was performed for three Generation IV nuclear energy systems. The Economic Modeling Working Group of GIF developed an Excel-based spreadsheet package, G4ECONS (Generation 4 Excel-based Calculation Of Nuclear Systems), to calculate the total capital investment cost (TCIC) and the levelised unit energy cost (LUEC). G4ECONS is sufficiently generic in the sense that it can accept the types of projected input, performance and cost data that are expected to become available for Generation IV systems through various development phases and that it can model both open and closed fuel cycles. The Nuclear Energy System Assessment (NESA) Economic Support Tool (NEST) was developed to enable an economic analysis using the INPRO methodology to easily calculate outputs including the TCIC, LUEC and other financial figures of merit including internal rate of return, return on investment and net present value. NEST is also Excel-based and can be used to evaluate nuclear reactor systems using the open fuel cycle, MOX (mixed oxide) fuel recycling and closed cycles. A Super Critical Water-cooled Reactor system with an open fuel cycle and two Fast Reactor systems, one with a break-even fuel cycle and another with a burner fuel cycle, were selected for the benchmarking exercise. Published data on capital and operating costs were used for economics analyses using G4ECONS and NEST tools. Both G4ECONS and
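
    Although the tools' exact formulations differ, levelised unit energy cost calculations of this kind generally reduce to the standard levelised-cost form (the symbols below are illustrative, not the tools' own notation):

      \[ \mathrm{LUEC} = \frac{\sum_{t}\,(C_t + O_t + F_t)\,(1+r)^{-t}}{\sum_{t}\,E_t\,(1+r)^{-t}} \]

    where C_t, O_t and F_t are the capital, operation and fuel expenditures in year t, E_t is the electricity generated in that year, and r is the discount rate.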

  16. Updates to the Integrated Protein-Protein Interaction Benchmarks : Docking Benchmark Version 5 and Affinity Benchmark Version 2

    NARCIS (Netherlands)

    Vreven, Thom; Moal, Iain H.; Vangone, Anna; Pierce, Brian G.; Kastritis, Panagiotis L.; Torchala, Mieczyslaw; Chaleil, Raphael; Jiménez-García, Brian; Bates, Paul A.; Fernandez-Recio, Juan; Bonvin, Alexandre M J J; Weng, Zhiping

    2015-01-01

    We present an updated and integrated version of our widely used protein-protein docking and binding affinity benchmarks. The benchmarks consist of non-redundant, high-quality structures of protein-protein complexes along with the unbound structures of their components. Fifty-five new complexes were

  17. Human factors reliability Benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-06-01

    The Joint Research Centre of the European Commission has organized a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organized around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report contains the final summary reports produced by the participants in the exercise.

  18. Experimental and computational benchmark tests

    International Nuclear Information System (INIS)

    Gilliam, D.M.; Briesmeister, J.F.

    1994-01-01

    A program involving principally NIST, LANL, and ORNL has been in progress for about four years now to establish a series of benchmark measurements and calculations related to the moderation and leakage of 252 Cf neutrons from a source surrounded by spherical aqueous moderators of various thicknesses and compositions. The motivation for these studies comes from problems in criticality calculations concerning arrays of multiplying components, where the leakage from one component acts as a source for the other components. This talk compares experimental and calculated values for the fission rates of four nuclides - 235 U, 239 Pu, 238 U, and 237 Np - in the leakage spectrum from moderator spheres of diameters 76.2 mm, 101.6 mm, and 127.0 mm, with either pure water or enriched B-10 solutions as the moderator. Very detailed Monte Carlo calculations were done with the MCNP code, using a "light water" S(α,β) scattering kernel.

  19. Toxicological benchmarks for screening potential contaminants of concern for effects on terrestrial plants: 1994 revision

    Energy Technology Data Exchange (ETDEWEB)

    Will, M.E.; Suter, G.W. II

    1994-09-01

    One of the initial stages in ecological risk assessment for hazardous waste sites is screening contaminants to determine which of them are worthy of further consideration as contaminants of potential concern. This process is termed contaminant screening. It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 38 chemicals potentially associated with United States Department of Energy (DOE) sites. In addition, background information on the phytotoxicity and occurrence of the chemicals in soils is presented, and literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern.

  20. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Terrestrial Plants

    Energy Technology Data Exchange (ETDEWEB)

    Suter, G.W. II

    1993-01-01

    One of the initial stages in ecological risk assessment for hazardous waste sites is screening contaminants to determine which of them are worthy of further consideration as contaminants of potential concern. This process is termed contaminant screening. It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 38 chemicals potentially associated with United States Department of Energy (DOE) sites. In addition, background information on the phytotoxicity and occurrence of the chemicals in soils is presented, and literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern.

  2. Suites of dwarfs around Nearby giant galaxies

    International Nuclear Information System (INIS)

    Karachentsev, Igor D.; Kaisina, Elena I.; Makarov, Dmitry I.

    2014-01-01

    The Updated Nearby Galaxy Catalog (UNGC) contains the most comprehensive summary of distances, radial velocities, and luminosities for 800 galaxies located within 11 Mpc from us. The high density of observables in the UNGC makes this sample indispensable for checking results of N-body simulations of cosmic structures on a ∼1 Mpc scale. The environment of each galaxy in the UNGC was characterized by a tidal index Θ₁, depending on the separation and mass of the galaxy's main disturber (MD). We grouped UNGC galaxies with a common MD in suites, and ranked suite members according to their Θ₁. All suite members with positive Θ₁ are assumed to be physical companions of the MD. About 58% of the sample are members of physical groups. The distribution of suites by the number of members, n, follows a relation N(n) ∼ n⁻². The 20 most populated suites contain 468 galaxies, i.e., 59% of the UNGC sample. The fraction of MDs among the brightest galaxies is almost 100% and drops to 50% at M_B = –18ᵐ. We discuss various properties of MDs, as well as galaxies belonging to their suites. The suite abundance practically does not depend on the morphological type, linear diameter, or hydrogen mass of the MD, the tightest correlation being with the MD dynamical mass. Dwarf galaxies around MDs exhibit well-known segregation effects: the population of the outskirts has later morphological types, richer H I contents, and higher rates of star formation activity. Nevertheless, there are some intriguing cases where dwarf spheroidal galaxies occur at the far periphery of the suites, as well as some late-type dwarfs residing close to MDs. Comparing simulation results with galaxy groups, most studies assume the Local Group is fairly typical. However, we recognize that the nearby groups significantly differ from each other and there is considerable variation in their properties. The suites of companions around the Milky Way and M31, consisting of the Local Group, do not

  3. Using Participatory Action Research to Study the Implementation of Career Development Benchmarks at a New Zealand University

    Science.gov (United States)

    Furbish, Dale S.; Bailey, Robyn; Trought, David

    2016-01-01

    Benchmarks for career development services at tertiary institutions have been developed by Careers New Zealand. The benchmarks are intended to provide standards derived from international best practices to guide career development services. A new career development service was initiated at a large New Zealand university just after the benchmarks…

  4. Benchmarking Ortec ISOTOPIC measurements and calculations

    International Nuclear Information System (INIS)

    This paper describes eight compiled benchmark tests conducted to probe and demonstrate the extensive utility of the Ortec ISOTOPIC gamma-ray analysis software program. The paper describes tests of the program's capability to apply finite-geometry correction factors and sample-matrix-container photon absorption correction factors. Favorable results are obtained in all benchmark tests. (author)

  5. Benchmarking set for domestic smart grid management

    NARCIS (Netherlands)

    Bosman, M.G.C.; Bakker, Vincent; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    In this paper we propose a benchmark for domestic smart grid management. It consists of an in-depth description of a domestic smart grid, in which local energy consumers, producers and buffers can be controlled. First, from this description a general benchmark framework is derived, which can be used

  6. Benchmarking nutrient use efficiency of dairy farms

    NARCIS (Netherlands)

    Mu, W.; Groen, E.A.; Middelaar, van C.E.; Bokkers, E.A.M.; Hennart, S.; Stilmant, D.; Boer, de I.J.M.

    2017-01-01

    The nutrient use efficiency (NUE) of a system, generally computed as the amount of nutrients in valuable outputs over the amount of nutrients in all inputs, is commonly used to benchmark the environmental performance of dairy farms. Benchmarking the NUE of farms, however, may lead to biased

  7. Benchmark analysis of railway networks and undertakings

    NARCIS (Netherlands)

    Hansen, I.A.; Wiggenraad, P.B.L.; Wolff, J.W.

    2013-01-01

    Benchmark analysis of railway networks and companies has been stimulated by the European policy of deregulation of transport markets, the opening of national railway networks and markets to new entrants and separation of infrastructure and train operation. Recent international railway benchmarking

  8. The Linked Data Benchmark Council Project

    NARCIS (Netherlands)

    P.A. Boncz (Peter); I. Fundulaki; A. Gubichev (Andrey); J. Larriba-Pey (Josep); T. Neumann (Thomas)

    2013-01-01

    Despite the fast growth and increasing popularity, the broad field of RDF and Graph database systems lacks an independent authority for developing benchmarks, and for neutrally assessing benchmark results through industry-strength auditing which would allow to quantify and compare the

  9. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective for the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...

  10. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own price

  11. Z-2 Prototype Space Suit Development

    Science.gov (United States)

    Ross, Amy; Rhodes, Richard; Graziosi, David; Jones, Bobby; Lee, Ryan; Haque, Bazle Z.; Gillespie, John W., Jr.

    2014-01-01

    NASA's Z-2 prototype space suit is the highest fidelity pressure garment from both hardware and systems design perspectives since the Space Shuttle Extravehicular Mobility Unit (EMU) was developed in the late 1970s. Upon completion, the Z-2 will be tested in the 11-foot human-rated vacuum chamber and the Neutral Buoyancy Laboratory (NBL) at the NASA Johnson Space Center to assess the design and to determine applicability of the configuration to micro-, low- (asteroid), and planetary- (surface) gravity missions. This paper discusses the 'firsts' that the Z-2 represents. For example, the Z-2 sizes to the smallest suit scye bearing plane distance of any suit in at least the last 25 years, and it is being designed with the most intensive use of human models together with the suit model.

  12. Advanced EVA Suit Camera System Development Project

    Science.gov (United States)

    Mock, Kyla

    2016-01-01

    The National Aeronautics and Space Administration (NASA) at the Johnson Space Center (JSC) is developing a new extra-vehicular activity (EVA) suit known as the Advanced EVA Z2 Suit. All of the improvements to the EVA Suit provide the opportunity to update the technology of the video imagery. My summer internship project involved improving the video streaming capabilities of the cameras that will be used on the Z2 Suit for data acquisition. To accomplish this, I familiarized myself with the architecture of the camera currently being tested so that I could make improvements on the design. Because there is a lot of benefit to saving space, power, and weight on the EVA suit, my job was to use Altium Designer to start designing a much smaller and simplified interface board for the camera's microprocessor and external components. This involved checking datasheets of various components and checking signal connections to ensure that this architecture could be used for both the Z2 suit and potentially other future projects. The Orion spacecraft is a specific project that may benefit from this condensed camera interface design. The camera's physical placement on the suit also needed to be determined and tested so that image resolution could be maximized. Many of the camera placement options may be tested along with other future suit testing. There are multiple teams that work on different parts of the suit, so the camera's placement could directly affect their research or design. For this reason, a big part of my project was initiating contact with other branches and setting up multiple meetings to learn more about the pros and cons of the potential camera placements we are analyzing. Collaboration with the multiple teams working on the Advanced EVA Z2 Suit is absolutely necessary, and these comparisons will be used as further progress is made on the overall suit design. This prototype will not be finished in time for the scheduled Z2 Suit testing, so my time was

  13. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  14. Benchmarking of refinery emissions performance : Executive summary

    International Nuclear Information System (INIS)

    2003-07-01

    This study was undertaken to collect emissions performance data for Canadian and comparable American refineries. The objective was to examine parameters that affect refinery air emissions performance and develop methods or correlations to normalize emissions performance. Another objective was to correlate and compare the performance of Canadian refineries to comparable American refineries. For the purpose of this study, benchmarking involved the determination of levels of emission performance that are being achieved for generic groups of facilities. A total of 20 facilities were included in the benchmarking analysis, and 74 American refinery emission correlations were developed. The recommended benchmarks, and the application of those correlations for comparison between Canadian and American refinery performance, were discussed. The benchmarks were: sulfur oxides, nitrogen oxides, carbon monoxide, particulate, volatile organic compounds, ammonia and benzene. For each refinery in Canada, benchmark emissions were developed. Several factors can explain differences in Canadian and American refinery emission performance. 4 tabs., 7 figs

  15. A SysML Test Model and Test Suite for the ETCS Ceiling Speed Monitor

    DEFF Research Database (Denmark)

    Braunstein, Cécile; Peleska, Jan; Schulze, Uwe

    2014-01-01

    dedicated to the publication of models that are of interest for the model-based testing (MBT) community, and may serve as benchmarks for comparing MBT tool capabilities. The model described here is of particular interest for analysing the capabilities of equivalence class testing strategies. The CSM application inputs velocity values from a domain which could not be completely enumerated for test purposes with reasonable effort. We describe a novel method for equivalence class testing that – despite the conceptually infinite cardinality of the input domains – is capable of producing finite test suites that are exhaustive under certain hypotheses about the internal structure of the system under test.
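
    The paper's construction is considerably more sophisticated, but the underlying idea — replace an input domain too large to enumerate with one representative (plus boundary values) per behavioural class — can be sketched as follows. The speed thresholds and state names below are invented for illustration and are not taken from the ETCS specification.

      # Minimal equivalence-class sketch: partition a continuous velocity
      # input into classes a ceiling-speed monitor should treat alike, then
      # test one representative (and boundary values) per class.
      CEILING = 160.0        # permitted speed (km/h), assumed
      WARN_MARGIN = 4.0      # warning band above ceiling, assumed
      BRAKE_MARGIN = 9.0     # service-brake band above ceiling, assumed

      def monitor_state(v):
          if v <= CEILING:
              return "NORMAL"
          if v <= CEILING + WARN_MARGIN:
              return "WARNING"
          if v <= CEILING + BRAKE_MARGIN:
              return "SERVICE_BRAKE"
          return "EMERGENCY_BRAKE"

      # Finitely many representatives stand in for the infinite input domain.
      classes = {
          "NORMAL": [0.0, 80.0, CEILING],
          "WARNING": [CEILING + 0.1, CEILING + WARN_MARGIN],
          "SERVICE_BRAKE": [CEILING + WARN_MARGIN + 0.1, CEILING + BRAKE_MARGIN],
          "EMERGENCY_BRAKE": [CEILING + BRAKE_MARGIN + 0.1, 350.0],
      }
      for expected, reps in classes.items():
          for v in reps:
              assert monitor_state(v) == expected, (v, expected)
      print("all equivalence-class representatives pass")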

  16. EVA Suit Microbial Leakage Investigation Project

    Science.gov (United States)

    Falker, Jay; Baker, Christopher; Clayton, Ronald; Rucker, Michelle

    2016-01-01

    The objective of this project is to collect microbial samples from various EVA suits to determine how much microbial contamination is typically released during simulated planetary exploration activities. Data will be released to the planetary protection and science communities, and advanced EVA system designers. In the best-case scenario, we will discover that very little microbial contamination leaks from our current or prototype suit designs; in the worst-case scenario, we will identify leak paths and learn more about what affects leakage. Either way, we'll have a new, flight-certified swab tool for our EVA toolbox.

  17. General squark flavour mixing: constraints, phenomenology and benchmarks

    CERN Document Server

    De Causmaecker, Karen; Herrmann, Bjoern; Mahmoudi, Farvah; O'Leary, Ben; Porod, Werner; Sekmen, Sezen; Strobbe, Nadja

    2015-11-19

    We present an extensive study of non-minimal flavour violation in the squark sector in the framework of the Minimal Supersymmetric Standard Model. We investigate the effects of multiple non-vanishing flavour-violating elements in the squark mass matrices by means of a Markov Chain Monte Carlo scanning technique and identify parameter combinations that are favoured by both current data and theoretical constraints. We then detail the resulting distributions of the flavour-conserving and flavour-violating model parameters. Based on this analysis, we propose a set of benchmark scenarios relevant for future studies of non-minimal flavour violation in the Minimal Supersymmetric Standard Model.

  18. Benchmarking and performance enhancement framework for multi-staging object-oriented languages

    Directory of Open Access Journals (Sweden)

    Ahmed H. Yousef

    2013-06-01

    This paper focuses on verifying the readiness, feasibility, generality and usefulness of multi-staging programming in software applications. We present a benchmark designed to evaluate the performance gain of different multi-staging programming (MSP) language implementations of object-oriented languages. The benchmarks in this suite cover different tests that range from classic simple examples (like matrix algebra) to advanced examples (like encryption and image processing). The benchmark is applied to compare the performance gain of two different MSP implementations (Mint and Metaphor) that are built on object-oriented languages (Java and C#, respectively). The results concerning the application of this benchmark to these languages are presented and analysed. The measurement technique used in benchmarking leads to the development of a language-independent performance enhancement framework that allows the programmer to select which code segments need staging. The framework also enables the programmer to verify the effectiveness of staging on the application performance. The framework is applied to a real case study. The case study results showed the effectiveness of the framework in achieving significant performance enhancement.
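
    Mint and Metaphor are not shown here; as a rough illustration of the performance idea being benchmarked, the sketch below stages a power function in plain Python by generating straight-line code for a fixed exponent at run time — the classic introductory multi-staging example.

      def make_power(n):
          """Stage pow(x, n) for a known n: generate straight-line source,
          compile it once, and return the specialized function."""
          body = "1" if n == 0 else " * ".join(["x"] * n)
          namespace = {}
          exec(f"def power(x):\n    return {body}\n", namespace)
          return namespace["power"]

      power5 = make_power(5)    # staging cost paid once...
      print(power5(2))          # ...then each call runs x*x*x*x*x: prints 32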

  19. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Xuhui [National Renewable Energy Laboratory (NREL), Golden, CO (United States). Transportation and Hydrogen Systems Center

    2017-10-19

    In FY16, the thermal performance of the 2014 Honda Accord Hybrid power electronics thermal management system was benchmarked. Both experiments and numerical simulation were utilized to thoroughly study the thermal resistances and temperature distribution in the power module. Experimental results obtained from the water-ethylene glycol tests provided the junction-to-liquid thermal resistance. The finite element analysis (FEA) and computational fluid dynamics (CFD) models were found to yield a good match with experimental results. Both experimental and modeling results demonstrate that the passive stack is the dominant thermal resistance for both the motor and power electronics systems. The 2014 Accord power electronics systems yield steady-state thermal resistance values around 42-50 mm²·K/W, depending on the flow rates. At a typical flow rate of 10 liters per minute, the thermal resistance of the Accord system was found to be about 44 percent lower than that of the 2012 Nissan LEAF system that was benchmarked in FY15. The main reason for the difference is that the Accord power module used a metalized-ceramic substrate and eliminated the thermal interface material layers. FEA models were developed to study the transient performance of the 2012 Nissan LEAF, 2014 Accord, and two other systems that feature conventional power module designs. The simulation results indicate that the 2012 LEAF power module has the lowest thermal impedance at time scales less than one second. This is probably due to moving low-thermal-conductivity materials further away from the heat source and enhancing the heat spreading effect from the copper-molybdenum plate close to the insulated gate bipolar transistors. When approaching steady state, the Honda system shows lower thermal impedance. Measurement results of the thermal resistance of the 2015 BMW i3 power electronic system indicate that the i3 insulated gate bipolar transistor module has significantly lower junction

  20. Advanced Sensor Platform to Evaluate Manloads for Exploration Suit Architectures

    Data.gov (United States)

    National Aeronautics and Space Administration — Space suit manloads are defined as the outer bounds of force that the human occupant of a suit is able to exert onto the suit during motion. They are defined on a...

  1. Prokofiev. "Romeo and Juliet" - Suites / Iran March

    Index Scriptorium Estoniae

    March, Iran

    1991-01-01

    On the new recording "Prokofiev. "Romeo and Juliet" - Suites: No. 1, Op. 64 bis; No. 2, Op. 64 ter; No. 3, Op. 101. Royal Scottish National Orchestra / Neeme Järvi". Chandos cassette ABTD 1536; CD CHAN 8940 (78 minutes), etc.

  2. Clean room technology in surgery suites

    Science.gov (United States)

    1971-01-01

    The principles of clean room technology and the criteria for their application to surgery are discussed. The basic types of surgical clean rooms are presented along with their advantages and disadvantages. Topics discussed include: microbiology of surgery suites; principles of laminar airflow systems, and their use in surgery; and asepsis and the operating room.

  3. Cave Biosignature Suites: Microbes, Minerals, and Mars

    Science.gov (United States)

    Boston, P. J.; Spilde, M. N.; Northup, D. E.; Melim, L. A.; Soroka, D. S.; Kleina, L. G.; Lavoie, K. H.; Hose, L. D.; Mallory, L. M.; Dahm, C. N.; Crossey, L. J.; Schelble, R. T.

    2001-03-01

    Earth's subsurface offers one of the best possible sites to search for microbial life and the characteristic lithologies that life leaves behind. The subterrain may be equally valuable for astrobiology. Where surface conditions are particularly hostile, like on Mars, the subsurface may offer the only habitat for extant lifeforms and access to recognizable biosignatures. We have identified numerous unequivocally biogenic macroscopic, microscopic, and chemical/geochemical cave biosignatures. However, to be especially useful for astrobiology, we are looking for suites of characteristics. Ideally, "biosignature suites" should be both macroscopically and microscopically detectable, independently verifiable by nonmorphological means, and as independent as possible of specific details of life chemistries - demanding (and sometimes conflicting) criteria. Working in fragile, legally protected environments, we developed noninvasive and minimal impact techniques for life and biosignature detection/characterization analogous to Planetary Protection Protocols. Our difficult field conditions have shared limitations common to extraterrestrial robotic and human missions. Thus, the cave/subsurface astrobiology model addresses the most important goals from both scientific and operational points of view. We present details of cave biosignature suites involving manganese and iron oxides, calcite, and sulfur minerals. Suites include morphological fossils, mineral-coated filaments, living microbial mats and preserved biofabrics, 13C and 34S values consistent with microbial metabolism, genetic data, unusual elemental abundances and ratios, and crystallographic mineral forms.

  4. August Weizenbergi rahumõtted / Gustav Suits

    Index Scriptorium Estoniae

    Suits, Gustav, 1883-1956

    2002-01-01

    First published in: Isamaa, 28 and 30 June 1906, nos. 49-50. Prompted by A. Weizenberg's article "Kihutused" in the newspaper Isamaa, 1906, nos. 44-47. See also: August Weizenberg: a reply to Mr. Spectator, in Suits, G., Vabaduse väraval, pp. 403-409.

  5. 28 CFR 36.501 - Private suits.

    Science.gov (United States)

    2010-07-01

    ... ACCOMMODATIONS AND IN COMMERCIAL FACILITIES Enforcement § 36.501 Private suits. (a) General. Any person who is... order. Upon timely application, the court may, in its discretion, permit the Attorney General to... general public importance. Upon application by the complainant and in such circumstances as the court may...

  6. Hydrologic information server for benchmark precipitation dataset

    Science.gov (United States)

    McEnery, John A.; McKee, Paul W.; Shelton, Gregory P.; Ramsey, Ryan W.

    2013-01-01

    This paper will present the methodology and overall system development by which a benchmark dataset of precipitation information has been made available. Rainfall is the primary driver of the hydrologic cycle. High quality precipitation data is vital for hydrologic models, hydrometeorologic studies and climate analysis, and hydrologic time series observations are important to many water resources applications. Over the past two decades, with the advent of NEXRAD radar, science to measure and record rainfall has improved dramatically. However, much existing data has not been readily available for public access or transferable among the agricultural, engineering and scientific communities. This project takes advantage of the existing CUAHSI Hydrologic Information System ODM model and tools to bridge the gap between data storage and data access, providing an accepted standard interface for internet access to the largest time-series dataset of NEXRAD precipitation data ever assembled. This research effort has produced an operational data system to ingest, transform, load and then serve one of most important hydrologic variable sets.
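
    The operational system described above is built on the CUAHSI ODM schema and tools; purely as a hedged sketch of the ingest-and-serve pattern, the snippet below loads precipitation observations into a minimal time-series table and answers a time-range query using Python's standard sqlite3 module. Table and column names are simplified stand-ins, not the ODM schema.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("""CREATE TABLE datavalues (
          site_id TEXT, ts TEXT, precip_mm REAL)""")

      # Ingest: load (site, timestamp, value) observations
      rows = [("SITE001", "2010-06-01T00:00", 1.2),
              ("SITE001", "2010-06-01T01:00", 4.7),
              ("SITE001", "2010-06-01T02:00", 0.0)]
      conn.executemany("INSERT INTO datavalues VALUES (?, ?, ?)", rows)

      # Serve: a time-range query, the basic operation behind any
      # values-for-site request from a client application
      for row in conn.execute(
              "SELECT ts, precip_mm FROM datavalues "
              "WHERE site_id = ? AND ts BETWEEN ? AND ? ORDER BY ts",
              ("SITE001", "2010-06-01T00:00", "2010-06-01T01:30")):
          print(row)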

  7. Spectral Relative Standard Deviation: A Practical Benchmark in Metabolomics

    Science.gov (United States)

    Metabolomics datasets, by definition, comprise measurements of large numbers of metabolites. Both technical (analytical) and biological factors will induce variation within these measurements that is not consistent across all metabolites. Consequently, criteria are required to...

  8. Bioelectrochemical Systems Workshop:Standardized Analyses, Design Benchmarks, and Reporting

    Science.gov (United States)

    2012-01-01

  9. Benchmarking Commercial Conformer Ensemble Generators.

    Science.gov (United States)

    Friedrich, Nils-Ole; de Bruyn Kops, Christina; Flachsenberg, Florian; Sommer, Kai; Rarey, Matthias; Kirchmair, Johannes

    2017-11-27

    We assess and compare the performance of eight commercial conformer ensemble generators (ConfGen, ConfGenX, cxcalc, iCon, MOE LowModeMD, MOE Stochastic, MOE Conformation Import, and OMEGA) and one leading free algorithm, the distance geometry algorithm implemented in RDKit. The comparative study is based on a new version of the Platinum Diverse Dataset, a high-quality benchmarking dataset of 2859 protein-bound ligand conformations extracted from the PDB. Differences in the performance of commercial algorithms are much smaller than those observed for free algorithms in our previous study (J. Chem. Inf. 2017, 57, 529-539). For commercial algorithms, the median minimum root-mean-square deviations measured between protein-bound ligand conformations and ensembles of a maximum of 250 conformers are between 0.46 and 0.61 Å. Commercial conformer ensemble generators are characterized by their high robustness, with at least 99% of all input molecules successfully processed and few or even no substantial geometrical errors detectable in their output conformations. The RDKit distance geometry algorithm (with minimization enabled) appears to be a good free alternative since its performance is comparable to that of the midranked commercial algorithms. Based on a statistical analysis, we elaborate on which algorithms to use and how to parametrize them for best performance in different application scenarios.
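
    The study's headline metric — the minimum RMSD between the protein-bound conformation and any member of a generated ensemble — can be sketched with the free RDKit generator discussed above. This is a hedged sketch assuming an input file carrying the bound-state coordinates; the file name and parameter choices are illustrative, and the paper's exact protocol may differ.

      from rdkit import Chem
      from rdkit.Chem import AllChem, rdMolAlign

      # Reference molecule carrying the protein-bound conformation
      # ("bound_ligand.sdf" is a hypothetical input file)
      ref = Chem.MolFromMolFile("bound_ligand.sdf")

      # Generate an ensemble of up to 250 conformers with distance geometry,
      # with minimization enabled as in the study
      probe = Chem.AddHs(Chem.Mol(ref))
      cids = AllChem.EmbedMultipleConfs(probe, numConfs=250)
      for cid in cids:
          AllChem.MMFFOptimizeMolecule(probe, confId=cid)
      probe = Chem.RemoveHs(probe)

      # The benchmark metric: minimum RMSD of any conformer to the bound pose
      min_rmsd = min(rdMolAlign.GetBestRMS(probe, ref, prbId=cid)
                     for cid in cids)
      print("minimum RMSD: %.2f A" % min_rmsd)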

  10. What Randomized Benchmarking Actually Measures

    Science.gov (United States)

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; Sarovar, Mohan; Blume-Kohout, Robin

    2017-09-01

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
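
    In practice the quantity r is extracted by fitting the survival-probability decay described above. A minimal sketch of that fit follows, with synthetic single-qubit data standing in for measured survival probabilities and the standard conversion r = (1 − p)(d − 1)/d with d = 2.

      import numpy as np
      from scipy.optimize import curve_fit

      def rb_decay(m, A, B, p):
          """Standard RB model: survival probability A * p**m + B."""
          return A * p**m + B

      # Synthetic single-qubit data standing in for measured probabilities
      lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256])
      rng = np.random.default_rng(0)
      survival = 0.5 * 0.995**lengths + 0.5 + rng.normal(0, 0.003, lengths.size)

      (A, B, p), _ = curve_fit(rb_decay, lengths, survival, p0=[0.5, 0.5, 0.99])
      r = (1 - p) * (2 - 1) / 2   # r = (1-p)(d-1)/d, d = 2 for one qubit
      print("fitted p = %.4f, RB error rate r = %.2e" % (p, r))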

  11. Human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-08-01

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used, to obtain insight into the causes and magnitude of the variability observed in the results, to try to identify preferred human reliability assessment approaches, and to get an understanding of the current state of the art in the field, identifying the limitations that are still inherent to the different approaches.

  12. Benchmark tests of JENDL-1

    International Nuclear Information System (INIS)

    Kikuchi, Yasuyuki; Hasegawa, Akira; Takano, Hideki; Kamei, Takanobu; Hojuyama, Takeshi; Sasaki, Makoto; Seki, Yuji; Zukeran, Atsushi; Otake, Iwao.

    1982-02-01

    Various benchmark tests were made on JENDL-1. At the first stage, various core center characteristics were tested for many critical assemblies with a one-dimensional model. At the second stage, the applicability of JENDL-1 was further tested in more sophisticated problems for the MOZART and ZPPR-3 assemblies with a two-dimensional model. It was proved that JENDL-1 predicted various quantities of fast reactors satisfactorily as a whole. However, the following problems were pointed out: 1) There exists a discrepancy of 0.9% in the k_eff values between the Pu and U cores. 2) The fission rate ratio of 239 Pu to 235 U is underestimated by 3%. 3) The Doppler reactivity coefficients are overestimated by about 10%. 4) The control rod worths are underestimated by 4%. 5) The fission rates of 235 U and 239 Pu are underestimated considerably in the outer core and radial blanket regions. 6) The negative sodium void reactivities are overestimated when the sodium is removed from the outer core. As a whole, most of the problems of JENDL-1 seem to be related to the neutron leakage and the neutron spectrum. It was found through further study that most of these problems came from too-small diffusion coefficients and too-large elastic removal cross sections above 100 keV, which are probably caused by overestimation of the total and elastic scattering cross sections for structural materials in the unresolved resonance region up to several MeV. (author)

  13. Open architecture of smart sensor suites

    Science.gov (United States)

    Müller, Wilmuth; Kuwertz, Achim; Grönwall, Christina; Petersson, Henrik; Dekker, Rob; Reinert, Frank; Ditzel, Maarten

    2017-10-01

    Experiences from recent conflicts show the strong need for smart sensor suites comprising different multi-spectral imaging sensors as core elements as well as additional non-imaging sensors. Smart sensor suites should be part of a smart sensor network - a network of sensors, databases, evaluation stations and user terminals. Its goal is to optimize the use of various information sources for military operations such as situation assessment, intelligence, surveillance, reconnaissance, target recognition and tracking. Such a smart sensor network will enable commanders to achieve higher levels of situational awareness. Within the study at hand, an open system architecture was developed in order to increase the efficiency of sensor suites. The open system architecture for smart sensor suites, based on a system-of-systems approach, enables combining different sensors in multiple physical configurations, such as distributed sensors, co-located sensors combined in a single package, tower-mounted sensors, sensors integrated in a mobile platform, and trigger sensors. The architecture was derived from a set of system requirements and relevant scenarios. Its mode of operation is adaptable to a series of scenarios with respect to relevant objects of interest, activities to be observed, available transmission bandwidth, etc. The presented open architecture is designed in accordance with the NATO Architecture Framework (NAF). The architecture allows smart sensor suites to be part of a surveillance network, linked e.g. to a sensor planning system and a C4ISR center, and to be used in combination with future RPAS (Remotely Piloted Aircraft Systems) for supporting a more flexible dynamic configuration of RPAS payloads.

  14. Workshop: Monte Carlo computational performance benchmark - Contributions

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.; Petrovic, B.; Martin, W.R.; Sutton, T.; Leppaenen, J.; Forget, B.; Romano, P.; Siegel, A.; Hoogenboom, E.; Wang, K.; Li, Z.; She, D.; Liang, J.; Xu, Q.; Qiu, Y.; Yu, J.; Sun, J.; Fan, X.; Yu, G.; Bernard, F.; Cochet, B.; Jinaphanh, A.; Jacquet, O.; Van der Marck, S.; Tramm, J.; Felker, K.; Smith, K.; Horelik, N.; Capellan, N.; Herman, B.

    2013-01-01

    This series of slides is divided into 3 parts. The first part is dedicated to the presentation of the Monte-Carlo computational performance benchmark (aims, specifications and results). This benchmark aims at performing a full-size Monte Carlo simulation of a PWR core with axial and pin-power distribution. Many different Monte Carlo codes have been used and their results have been compared in terms of computed values and processing speeds. It appears that local power values mostly agree quite well. The first part also includes the presentations of about 10 participants in which they detail their calculations. In the second part, an extension of the benchmark is proposed in order to simulate a more realistic reactor core (for instance non-uniform temperature) and to assess feedback coefficients due to change of some parameters. The third part deals with another benchmark, the BEAVRS benchmark (Benchmark for Evaluation And Validation of Reactor Simulations). BEAVRS is also a full-core PWR benchmark for Monte Carlo simulations

  15. ICSBEP Benchmarks For Nuclear Data Applications

    Science.gov (United States)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  16. Benchmarking Multilayer-HySEA model for landslide generated tsunami. HTHMP validation process.

    Science.gov (United States)

    Macias, J.; Escalante, C.; Castro, M. J.

    2017-12-01

    Landslide tsunami hazard may be dominant along significant parts of the coastline around the world, in particular in the USA, as compared to hazards from other tsunamigenic sources. This fact motivated NTHMP to benchmark models for landslide-generated tsunamis, following the same methodology already used for standard tsunami models when the source is seismic. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks, for a total of seven. The Multilayer-HySEA model, including non-hydrostatic effects, has been used to perform all the benchmark problems dealing with laboratory experiments proposed in the workshop organized at Texas A&M University - Galveston on January 9-11, 2017 by NTHMP. The aim of this presentation is to show some of the latest numerical results obtained with the Multilayer-HySEA (non-hydrostatic) model in the framework of this validation effort. Acknowledgements: This research has been partially supported by the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and University of Malaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  17. Quality benchmarking methodology: Case study of finance and culture industries in Latvia

    Directory of Open Access Journals (Sweden)

    Ieva Zemīte

    2011-01-01

    Political, socio-economic and cultural changes that have taken place in the world during recent years have influenced all spheres. Constant improvements are necessary to survive in competitive, shrinking markets. This sets high quality standards for the service industries. It is therefore important to compare quality criteria to ascertain which practices achieve superior performance levels. At present, companies in Latvia do not carry out mutual benchmarking, and as a result do not know how they rank against their peers in terms of quality, nor do they see benefits in sharing information and in benchmarking. The purpose of this paper is to determine the criteria of qualitative benchmarking and to investigate the use of benchmarking quality in service industries, particularly the finance and culture sectors in Latvia, in order to determine the key driving factors of quality, to explore internal and foreign benchmarks, and to reveal the full potential of input reduction and efficiency growth for the aforementioned industries. Case study and other tools are used to define the readiness of the company for benchmarking. Certain key factors are examined for their impact on quality criteria. The results are based on research conducted in professional associations in the defined fields (insurance and theatre). Originality/value – this is the first study that adopts benchmarking models for measuring quality criteria and readiness for mutual comparison in the insurance and theatre industries in Latvia.

  18. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  19. IAEA sodium void reactivity benchmark calculations

    International Nuclear Information System (INIS)

    Hill, R.N.; Finck, P.J.

    1992-01-01

    In this paper, the IAEA 1992 "Benchmark Calculation of Sodium Void Reactivity Effect in Fast Reactor Core" problem is evaluated. The proposed design is a large axially heterogeneous oxide-fueled fast reactor, as described in Section 2; the core utilizes a sodium plenum above the core to enhance leakage effects. The calculation methods used in this benchmark evaluation are described in Section 3. In Section 4, the calculated core performance results for the benchmark reactor model are presented; and in Section 5, the influence of steel and interstitial sodium heterogeneity effects is estimated.

  20. Benchmark Imagery FY11 Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pope, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-06-14

    This report details the work performed in FY11 under project LL11-GS-PD06, “Benchmark Imagery for Assessing Geospatial Semantic Extraction Algorithms.” The original LCP for the Benchmark Imagery project called for creating a set of benchmark imagery for verifying and validating algorithms that extract semantic content from imagery. More specifically, the first year was slated to deliver real imagery that had been annotated, the second year to deliver real imagery that had composited features, and the final year was to deliver synthetic imagery modeled after the real imagery.

  1. BENCHMARKING ORTEC ISOTOPIC MEASUREMENTS AND CALCULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Dewberry, R; Raymond Sigg, R; Vito Casella, V; Nitin Bhatt, N

    2008-09-29

    This report represents a description of compiled benchmark tests conducted to probe and to demonstrate the extensive utility of the Ortec ISOTOPIC γ-ray analysis computer program. The ISOTOPIC program performs analyses of γ-ray spectra applied to specific acquisition configurations in order to apply finite-geometry correction factors and sample-matrix-container photon absorption correction factors. The analysis program provides an extensive set of preset acquisition configurations to which the user can add relevant parameters in order to build the geometry and absorption correction factors that the program determines from calculus and from nuclear γ-ray absorption and scatter data. The Analytical Development Section field nuclear measurement group of the Savannah River National Laboratory uses the Ortec ISOTOPIC analysis program extensively for analyses of solid waste and process holdup applied to passive γ-ray acquisitions. Frequently the results of these γ-ray acquisitions and analyses are to determine compliance with facility criticality safety guidelines. Another use of results is to designate 55-gallon drum solid waste as qualified TRU waste or as low-level waste. Other examples of the application of the ISOTOPIC analysis technique to passive γ-ray acquisitions include analyses of standard waste box items and unique solid waste configurations. In many passive γ-ray acquisition circumstances the container and sample have sufficient density that the calculated energy-dependent transmission correction factors have intrinsic uncertainties in the range 15%-100%. This is frequently the case when assaying 55-gallon drums of solid waste with masses of up to 400 kg and when assaying solid waste in extensive unique containers. Often an accurate assay of the transuranic content of these containers is not required, but rather a good defensible designation as >100 nCi/g (TRU waste) or <100 nCi/g (low-level solid waste) is required. In
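
    ISOTOPIC's correction factors are computed internally by the program; as a hedged illustration of the kind of sample-matrix absorption correction at issue, the sketch below evaluates the textbook self-attenuation factor for a uniform slab source, CF = μt/(1 − e^(−μt)), by which an observed count rate is multiplied. The μ and t values are placeholders, not values from the report.

      import math

      def slab_self_attenuation_cf(mu_linear_cm, thickness_cm):
          """Self-attenuation correction factor for a uniform slab source:
          CF = mu*t / (1 - exp(-mu*t)). Observed count rates are multiplied
          by CF to recover the unattenuated rate; CF -> 1 as mu*t -> 0."""
          mu_t = mu_linear_cm * thickness_cm
          if mu_t < 1e-9:
              return 1.0
          return mu_t / (1.0 - math.exp(-mu_t))

      # Placeholder values: mu = 0.2 /cm at some energy, 10 cm thick matrix
      print(round(slab_self_attenuation_cf(0.2, 10.0), 3))   # ~2.313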

  2. ANALYSIS OF DESIGN ELEMENTS IN SKI SUITS

    Directory of Open Access Journals (Sweden)

    Birsen Çileroğlu

    2014-06-01

    The popularity of ski sport in the 19th century necessitated a new perspective on protective skiing clothing against mountain climates and excessive cold. Winter clothing was the basis of ski attire during this period. By the beginning of the 20th century, lining cloth was used to minimize the wind effect. The difference between men's and women's ski attire of the time consisted of a knee-length skirt worn over golf trousers. Subsequent to the First World War, skiing suit models were influenced by the period's uniforms, and producers reflected fashion trends in ski clothing. In conformance with the prevailing trends, ski trousers were designed and produced for women, leading to a reduction in gender differences. Increases in ski tourism and the holding of the first Winter Olympics in 1924 resulted in variations in ski attire, development of design characteristics, growth in user numbers, and enlargement of production capacities. Designers emphasized the combined presence of elegance and practicality in their skiing attire collections. In the 1930s, ski suits influenced by pilots' uniforms included characteristics permitting freedom of motion, and the design elements exhibited changes in terms of style, material and aerodynamics. In time, ski attire showed varying design features distinguishing professionals from amateurs. While protective functionality was the primary consideration for amateurs, for professionals aerodynamic design was also a leading factor. Eventually, the increased differences in design characteristics were exhibited in ski suit collections, world-renowned brands were formed, and production and sales volumes rose significantly. During the 20th century, ski suits influenced by fashion trends acquired unique styles, reached a position of dominance impacting current fashion trends, and, apart from sports attire, became a style determinant in the clothing of cold climates. Ski suits

  3. Competency based training in robotic surgery: benchmark scores for virtual reality robotic simulation.

    Science.gov (United States)

    Raison, Nicholas; Ahmed, Kamran; Fossati, Nicola; Buffi, Nicolò; Mottrie, Alexandre; Dasgupta, Prokar; Van Der Poel, Henk

    2017-05-01

    To develop benchmark scores of competency for use within a competency based virtual reality (VR) robotic training curriculum. This longitudinal, observational study analysed results from nine European Association of Urology hands-on-training courses in VR simulation. In all, 223 participants ranging from novice to expert robotic surgeons completed 1565 exercises. Competency was set at 75% of the mean expert score. Benchmark scores for all general performance metrics generated by the simulator were calculated. Assessment exercises were selected by expert consensus and through learning-curve analysis. Three basic skill and two advanced skill exercises were identified. Benchmark scores based on expert performance offered viable targets for novice and intermediate trainees in robotic surgery. Novice participants met the competency standards for most basic skill exercises; however, advanced exercises were significantly more challenging. Intermediate participants performed better across the seven metrics but still did not achieve the benchmark standard in the more difficult exercises. Benchmark scores derived from expert performances offer relevant and challenging scores for trainees to achieve during VR simulation training. Objective feedback allows both participants and trainers to monitor educational progress and ensures that training remains effective. Furthermore, the well-defined goals set through benchmarking offer clear targets for trainees and enable training to move to a more efficient competency based curriculum. © 2016 The Authors BJU International © 2016 BJU International Published by John Wiley & Sons Ltd.
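
    The benchmark-setting rule itself is simple arithmetic; a sketch with invented expert scores (not study data) follows.

      # Competency benchmark per the study's rule: 75% of the mean expert score
      expert_scores = [82.0, 91.5, 88.0, 95.0, 86.5]   # invented placeholders
      benchmark = 0.75 * sum(expert_scores) / len(expert_scores)
      print("competency benchmark: %.2f" % benchmark)   # 66.45 here

      trainee_score = 70.2
      print("competent" if trainee_score >= benchmark else "not yet competent")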

  4. Reactor fuel depletion benchmark of TINDER

    International Nuclear Information System (INIS)

    Martin, W.J.; Oliveira, C.R.E. de; Hecht, A.A.

    2014-01-01

    Highlights: • A reactor burnup benchmark of TINDER, coupling MCNP6 to CINDER2008, was performed. • TINDER is a poor candidate for fuel depletion calculations using its current libraries. • Data library modification is necessary if fuel depletion is desired from TINDER. - Abstract: Accurate burnup calculations are key to proper nuclear reactor design, fuel cycle modeling, and disposal estimations. The TINDER code, originally designed for activation analyses, has been modified to handle full burnup calculations, including the widely used predictor–corrector feature. In order to properly characterize the performance of TINDER for this application, a benchmark calculation was performed. Although the results followed the trends of past benchmarked codes for a UO 2 PWR fuel sample from the Takahama-3 reactor, there were obvious deficiencies in the final result, likely in the nuclear data library that was used. Isotopic comparisons versus experiment and past code benchmarks are given, as well as hypothesized areas of deficiency and future work

  5. Medicare Contracting - Redacted Benchmark Metric Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services has compiled aggregate national benchmark cost and workload metrics using data submitted to CMS by the AB MACs and the...

  6. How benchmarking can improve patient nutrition.

    Science.gov (United States)

    Ellis, Jane

    Benchmarking is a tool that originated in business to enable organisations to compare their services with industry-wide best practice. Early last year the Department of Health published The Essence of Care, a benchmarking toolkit adapted for use in health care. It focuses on eight elements of care that are crucial to patients' experiences. Nurses and other health care professionals at a London NHS trust have begun a trust-wide benchmarking project. The aim is to improve patients' experiences of health care by sharing and comparing information, and by identifying examples of good practice and areas for improvement. The project began with two of the eight elements of The Essence of Care, with the intention of covering the rest later. This article describes the benchmarking process for nutrition and some of the consequent improvements in care.

  7. Measuring Distribution Performance? Benchmarking Warrants Your Attention

    Energy Technology Data Exchange (ETDEWEB)

    Ericson, Sean J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Alvarez, Paul [The Wired Group

    2018-04-13

    Identifying, designing, and measuring performance metrics is critical to securing customer value, but can be a difficult task. This article examines the use of benchmarks based on publicly available performance data to set challenging, yet fair, metrics and targets.

  8. Benchmarking Linked Open Data Management Systems

    NARCIS (Netherlands)

    R. Angles Rojas (Renzo); M.-D. Pham (Minh-Duc); P.A. Boncz (Peter)

    2014-01-01

    With inherent support for storing and analysing highly interconnected data, graph and RDF databases appear as natural solutions for developing Linked Open Data applications. However, current benchmarks for these database technologies do not fully attain the desirable characteristics

  9. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic...

  10. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    and professional performance but only if prior professional performance was low. Supplemental analyses support the robustness of our results. Findings indicate conditions under which bureaucratic benchmarking information may affect professional performance and advance research on professional control and social...

  11. [EC5-Space Suit Assembly Team - Internship]

    Science.gov (United States)

    Maicke, Andrew

    2016-01-01

    There were three main projects in this internship. The first pertained to the Bearing Dust Cycle Test, in particular automating the test to allow for easier administration. The second concerned modifying the communication system setup in the Z2 suit, where speakers and mics were adjusted to allow for more space in the helmet. And finally, the last project concerned the tensile strength testing of fabrics deemed candidates for space suit materials and desired to be sent off for radiation testing. The major duties here are split up between the major projects detailed above. For the Bearing Dust Cycle Test, the first objective was to find a way to automate administration of the test, as the previous version was long and tedious to perform. In order to do this, it was necessary to introduce additional electronics and perform programming to control the automation. Once this was done, it would be necessary to update documents concerning the test setup, procedure, and potential hazards. Finally, I was tasked with running tests using the new system to confirm system performance. For the Z2 communication system modifications, it was necessary to investigate alternative speakers and microphones which may have better performance than those currently used in the suit. Further, new speaker and microphone positions needed to be identified to keep them out of the way of the suit user. Once this was done, appropriate hardware (such as speaker or microphone cases and holders) could be prototyped and fabricated. For the suit material strength testing, the first task was to gather and document various test fabrics to identify the best suit material candidates. Then, it was necessary to prepare samples for testing to establish baseline measurements and specify a testing procedure. Once the data was fully collected, additional test samples would be prepared and sent off-site to undergo irradiation before being tested again to observe changes in strength performance. For the Bearing

  12. The BTeV Software Tutorial Suite

    Energy Technology Data Exchange (ETDEWEB)

    Robert K. Kutschke

    2004-02-20

    The BTeV Collaboration is starting to develop its C++ based offline software suite, an integral part of which is a series of tutorials. These tutorials are targeted at a diverse audience, including new graduate students, experienced physicists with little or no C++ experience, those with just enough C++ to be dangerous, and experts who need only an overview of the available tools. The tutorials must both teach C++ in general and the BTeV specific tools in particular. Finally, they must teach physicists how to find and use the detailed documentation. This report will review the status of the BTeV experiment, give an overview of the plans for and the state of the software and will then describe the plans for the tutorial suite.

  13. Implementing Sentinels in the TARGIT BI Suite

    DEFF Research Database (Denmark)

    Middelfart, Morten; Pedersen, Torben Bach

    2011-01-01

    This paper describes the implementation of so-called sentinels in the TARGIT BI Suite. Sentinels are a novel type of rules that can warn a user if one or more measure changes in a multi-dimensional data cube are expected to cause a change to another measure critical to the user. Sentinels notify users based on previous observations, e.g., that revenue might drop within two months if an increase in customer problems combined with a decrease in website traffic is observed. In this paper we show how users, without any prior technical knowledge, can mine and use sentinels in the TARGIT BI Suite. We ... pattern mining or correlation techniques. We demonstrate, through extensive experiments, that mining and usage of sentinels is feasible with good performance for the typical users on a real, operational data warehouse.
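
    A minimal sketch of the sentinel idea under assumed semantics (the actual TARGIT mining algorithm is not shown in the abstract); all thresholds and figures below are invented:

        # Sentinel: warn if customer problems rose while website traffic fell,
        # since that pattern previously preceded a revenue drop.
        def pct_change(series):
            return [(b - a) / a * 100.0 for a, b in zip(series, series[1:])]

        def sentinel_fires(problems, traffic, up=10.0, down=-10.0):
            return any(p >= up and t <= down
                       for p, t in zip(pct_change(problems), pct_change(traffic)))

        problems = [120, 135, 160]        # hypothetical monthly counts
        traffic = [50000, 48000, 41000]   # hypothetical monthly visits
        if sentinel_fires(problems, traffic):
            print("Sentinel: revenue may drop within two months.")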

  14. The BTeV Software Tutorial Suite

    International Nuclear Information System (INIS)

    Kutschke, Robert K.

    2004-01-01

    The BTeV Collaboration is starting to develop its C++ based offline software suite, an integral part of which is a series of tutorials. These tutorials are targeted at a diverse audience, including new graduate students, experienced physicists with little or no C++ experience, those with just enough C++ to be dangerous, and experts who need only an overview of the available tools. The tutorials must both teach C++ in general and the BTeV specific tools in particular. Finally, they must teach physicists how to find and use the detailed documentation. This report will review the status of the BTeV experiment, give an overview of the plans for and the state of the software and will then describe the plans for the tutorial suite

  15. Extending and Enhancing SAS (Static Analysis Suite)

    CERN Document Server

    Ho, David

    2016-01-01

    The Static Analysis Suite (SAS) is an open-source software package used to perform static analysis on C and C++ code, helping to ensure safety, readability and maintainability. In this Summer Student project, SAS was enhanced to improve ease of use and user customisation. A straightforward method of integrating static analysis into a project at compilation time was provided using the automated build tool CMake. The process of adding checkers to the suite was streamlined and simplified by developing an automatic code generator. To make SAS more suitable for continuous integration, a reporting mechanism summarising results was added. This suitability has been demonstrated by inclusion of SAS in the Future Circular Collider Software nightly build system. Scalability of the improved package was demonstrated by using the tool to analyse the ROOT code base.

  16. AX-5 space suit bearing torque investigation

    Science.gov (United States)

    Loewenthal, Stuart; Vykukal, Vic; Mackendrick, Robert; Culbertson, Philip, Jr.

    1990-01-01

    The symptoms and eventual resolution of a torque increase problem occurring with ball bearings in the joints of the AX-5 space suit are described. Starting torques that rose 5 to 10 times initial levels were observed in crew evaluation tests of the suit in a zero-g water tank. This bearing problem was identified as a blocking torque anomaly, observed previously in oscillatory gimbal bearings. A large matrix of lubricants, ball separator designs and materials were evaluated. None of these combinations showed sufficient tolerance to lubricant washout when repeatedly cycled in water. The problem was resolved by retrofitting a pressure compensated, water exclusion seal to the outboard side of the bearing cavity. The symptoms and possible remedies to blocking are discussed.

  17. [Aspects of communication regarding medical malpractice suits].

    Science.gov (United States)

    Pilling, János; Erdélyi, Kamilla

    2016-04-24

    Due to problems experienced in health care, there is an increased amount of malpractice suits nowadays. Nevertheless, some physicians are more likely to be sued, or more frequently sued, than others. Numerous studies indicate that this phenomenon fundamentally results from a lack of interpersonal and communication skills on the part of the sued doctor, namely, deficiencies in questioning the patient, listening, conveying information, etc. Communication is of pivotal importance in patient care vis-à-vis medical errors as well. The majority of physicians aim to conceal the error, albeit this may lead to further deterioration of the patient's condition. In institutions where open communication regarding errors was introduced within the medical team and toward the patient and their family alike, the number of malpractice suits decreased. It is crucial to establish a means of support for doctors, and to promote communication trainings, as well as a supportive legal environment.

  18. Standard Model processes

    CERN Document Server

    Mangano, M.L.; Aguilar-Saavedra, Juan Antonio; Alekhin, S.; Badger, S.; Bauer, C.W.; Becher, T.; Bertone, V.; Bonvini, M.; Boselli, S.; Bothmann, E.; Boughezal, R.; Cacciari, M.; Carloni Calame, C.M.; Caola, F.; Campbell, J.M.; Carrazza, S.; Chiesa, M.; Cieri, L.; Cimaglia, F.; Febres Cordero, F.; Ferrarese, P.; D'Enterria, D.; Ferrera, G.; Garcia i Tormo, X.; Garzelli, M.V.; Germann, E.; Hirschi, V.; Han, T.; Ita, H.; Jäger, B.; Kallweit, S.; Karlberg, A.; Kuttimalai, S.; Krauss, F.; Larkoski, A.J.; Lindert, J.; Luisoni, G.; Maierhöfer, P.; Mattelaer, O.; Martinez, H.; Moch, S.; Montagna, G.; Moretti, M.; Nason, P.; Nicrosini, O.; Oleari, C.; Pagani, D.; Papaefstathiou, A.; Petriello, F.; Piccinini, F.; Pierini, M.; Pierog, T.; Pozzorini, S.; Re, E.; Robens, T.; Rojo, J.; Ruiz, R.; Sakurai, K.; Salam, G.P.; Salfelder, L.; Schönherr, M.; Schulze, M.; Schumann, S.; Selvaggi, M.; Shivaji, A.; Siodmok, A.; Skands, P.; Torrielli, P.; Tramontano, F.; Tsinikos, I.; Tweedie, B.; Vicini, A.; Westhoff, S.; Zaro, M.; Zeppenfeld, D.; CERN. Geneva. ATS Department

    2017-06-22

    This report summarises the properties of Standard Model processes at the 100 TeV pp collider. We document the production rates and typical distributions for a number of benchmark Standard Model processes, and discuss new dynamical phenomena arising at the highest energies available at this collider. We discuss the intrinsic physics interest in the measurement of these Standard Model processes, as well as their role as backgrounds for New Physics searches.

  19. Regenerative Blower for EVA Suit Ventilation Fan

    Science.gov (United States)

    Izenson, Michael G.; Chen, Weibo; Paul, Heather L.

    2010-01-01

    Portable life support systems in future space suits will include a ventilation subsystem driven by a dedicated fan. This ventilation fan must meet challenging requirements for pressure rise, flow rate, efficiency, size, safety, and reliability. This paper describes research and development that showed the feasibility of a regenerative blower that is uniquely suited to meet these requirements. We proved feasibility through component tests, blower tests, and design analysis. Based on the requirements for the Constellation Space Suit Element (CSSE) Portable Life Support System (PLSS) ventilation fan, we designed the critical elements of the blower. We measured the effects of key design parameters on blower performance using separate effects tests, and used the results of these tests to design a regenerative blower that will meet the ventilation fan requirements. We assembled a proof-of-concept blower and measured its performance at sub-atmospheric pressures that simulate a PLSS ventilation loop environment. Head/flow performance and maximum efficiency point data were used to specify the design and operating conditions for the ventilation fan. We identified materials for the blower that will enhance safety for operation in a lunar environment, and produced a solid model that illustrates the final design. The proof-of-concept blower produced the flow rate and pressure rise needed for the CSSE ventilation subsystem while running at 5400 rpm, consuming only 9 W of electric power using a non-optimized, commercial motor and controller and inefficient bearings. Scaling the test results to a complete design shows that a lightweight, compact, reliable, and low power regenerative blower can meet the performance requirements for future space suit life support systems.

  20. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes...... attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes....

  1. Practice benchmarking in the age of targeted auditing.

    Science.gov (United States)

    Langdale, Ryan P; Holland, Ben F

    2012-11-01

    The frequency and sophistication of health care reimbursement auditing has progressed rapidly in recent years, leaving many oncologists wondering whether their private practices would survive a full-scale Office of the Inspector General (OIG) investigation. The Medicare Part B claims database provides a rich source of information for physicians seeking to understand how their billing practices measure up to their peers, both locally and nationally. This database was dissected by a team of cancer specialists to uncover important benchmarks related to targeted auditing. All critical Medicare charges, payments, denials, and service ratios in this article were derived from the full 2010 Medicare Part B claims database. Relevant claims were limited by using Medicare provider specialty codes 83 (hematology/oncology) and 90 (medical oncology), with an emphasis on claims filed from the physician office place of service (11). All charges, denials, and payments were summarized at the Current Procedural Terminology code level to drive practice benchmarking standards. A careful analysis of this data set, combined with the published audit priorities of the OIG, produced germane benchmarks from which medical oncologists can monitor, measure and improve on common areas of billing fraud, waste or abuse in their practices. Part II of this series and analysis will focus on information pertinent to radiation oncologists.

  2. OR-Benchmark: An Open and Reconfigurable Digital Watermarking Benchmarking Framework

    OpenAIRE

    Wang, Hui; Ho, Anthony TS; Li, Shujun

    2015-01-01

    Benchmarking digital watermarking algorithms is not an easy task because different applications of digital watermarking often have very different sets of requirements and trade-offs between conflicting requirements. While there have been some general-purpose digital watermarking benchmarking systems available, they normally do not support complicated benchmarking tasks and cannot be easily reconfigured to work with different watermarking algorithms and testing conditions. In this paper, we pr...

  3. Action-Oriented Benchmarking: Concepts and Tools

    Energy Technology Data Exchange (ETDEWEB)

    California Energy Commission; Mathew, Paul; Mills, Evan; Mathew, Paul; Piette, Mary Ann; Bourassa, Norman; Brook, Martha

    2008-02-13

    Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article, Part 1 of a two-part series, we describe an 'action-oriented benchmarking' approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful metrics and to identify, screen and prioritize potential efficiency improvements. This opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking system and associated software tool, EnergyIQ. The benchmarking methods, visualizations, and user interface design are informed by an end-user needs assessment survey and best-practice guidelines from ASHRAE.
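
    A rough sketch of the end-use screening step described above, with entirely hypothetical building data, peer medians, and threshold (this is not the EnergyIQ tool or its API):

        # Flag end uses whose energy intensity is well above the peer median.
        building = {"floor_area_m2": 5000.0,
                    "end_use_kwh": {"lighting": 300000.0, "hvac": 450000.0, "plug": 150000.0}}
        peer_median_kwh_per_m2 = {"lighting": 45.0, "hvac": 95.0, "plug": 35.0}

        for end_use, kwh in building["end_use_kwh"].items():
            intensity = kwh / building["floor_area_m2"]          # kWh/m2/year
            ratio = intensity / peer_median_kwh_per_m2[end_use]
            if ratio > 1.2:  # arbitrary screening threshold
                print(f"{end_use}: {intensity:.0f} kWh/m2 is {ratio:.2f}x peer median -> audit candidate")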

  4. Pool critical assembly pressure vessel facility benchmark

    International Nuclear Information System (INIS)

    Remec, I.; Kam, F.B.K.

    1997-07-01

    This pool critical assembly (PCA) pressure vessel wall facility benchmark (PCA benchmark) is described and analyzed in this report. Analysis of the PCA benchmark can be used for partial fulfillment of the requirements for the qualification of the methodology for pressure vessel neutron fluence calculations, as required by the US Nuclear Regulatory Commission regulatory guide DG-1053. Section 1 of this report describes the PCA benchmark and provides all data necessary for the benchmark analysis. The measured quantities, to be compared with the calculated values, are the equivalent fission fluxes. In Section 2 the analysis of the PCA benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed for three ENDF/B-VI-based multigroup libraries: BUGLE-93, SAILOR-95, and BUGLE-96. An excellent agreement of the calculated (C) and measured (M) equivalent fission fluxes was obtained. The arithmetic average C/M for all the dosimeters (total of 31) was 0.93 ± 0.03 and 0.92 ± 0.03 for the SAILOR-95 and BUGLE-96 libraries, respectively. The average C/M ratio, obtained with the BUGLE-93 library, for the 28 measurements was 0.93 ± 0.03 (the neptunium measurements in the water and air regions were overpredicted and excluded from the average). No systematic decrease in the C/M ratios with increasing distance from the core was observed for any of the libraries used.
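
    The C/M statistic quoted above is a plain arithmetic average over dosimeters; a sketch with invented ratios (the report's actual per-dosimeter values are not reproduced here):

        # Average calculated-to-measured (C/M) equivalent fission fluxes.
        from statistics import mean, stdev

        c_over_m = [0.91, 0.95, 0.92, 0.94, 0.90, 0.96]  # hypothetical, one per dosimeter
        print(f"C/M = {mean(c_over_m):.2f} +/- {stdev(c_over_m):.2f} over {len(c_over_m)} dosimeters")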

  5. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  6. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared performance of software-only to GPU-accelerated. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows

  7. Benchmarking as a strategy policy tool for energy management

    NARCIS (Netherlands)

    Rienstra, S.A.; Nijkamp, P.

    2002-01-01

    In this paper we analyse to what extent benchmarking is a valuable tool in strategic energy policy analysis. First, the theory on benchmarking is concisely presented, e.g., by discussing the benchmark wheel and the benchmark path. Next, some results of surveys among business firms are presented. To

  8. [Benchmarking of hospital information systems – a comparative analysis of German-language benchmarking clusters]

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Benchmarking is a method of strategic information management used by many hospitals today. In recent years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system's and information management's costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme observes both general conditions and examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries in recent years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their cooperation forms. The benchmarking clusters also deal with different benchmarking subjects. The costs and quality of application systems, physical data processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  9. Clean Energy Manufacturing Analysis Center Benchmark Report: Framework and Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-05-23

    This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.

  10. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  11. Comparison of results from the MCNP criticality validation suite using ENDF/B-VI and preliminary ENDF/B-VII nuclear data

    Energy Technology Data Exchange (ETDEWEB)

    Mosteller, R. D. (Russell D.)

    2004-01-01

    The MCNP Criticality Validation Suite is a collection of 31 benchmarks taken from the International Handbook of Evaluated Criticality Safety Benchmark Experiments. MCNP5 calculations clearly demonstrate that, overall, nuclear data for a preliminary version of ENDF/B-VII produce better agreement with the benchmarks in the suite than do corresponding data from ENDF/B-VI. Additional calculations identify areas where improvements in the data still are needed. Based on results for the MCNP Criticality Validation Suite, the pre-ENDF/B-VII nuclear data produce substantially better overall results than do their ENDF/B-VI counterparts. The calculated values for keff for bare metal spheres and for an IEU cylinder reflected by normal uranium are in much better agreement with the benchmark values. In addition, the values of keff for the bare metal spheres are much more consistent with those for corresponding metal spheres reflected by normal uranium or water. In addition, a long-standing controversy about the need for an ad hoc adjustment to the 238U resonance integral for thermal systems may finally be resolved. On the other hand, improvements still are needed in a number of areas. Those areas include intermediate-energy cross sections for 235U, angular distributions for elastic scattering in deuterium, and fast cross sections for 237Np.
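
    A hedged sketch of the suite-style comparison (the benchmark names follow the ICSBEP naming pattern, but all keff values below are made up):

        # Report calculated-vs-benchmark keff bias in pcm for each suite case.
        cases = {  # case -> (calculated keff, benchmark keff)
            "HEU-MET-FAST-001": (1.00020, 1.00000),
            "LEU-COMP-THERM-008": (0.99890, 1.00070),
        }
        for name, (calc, bench) in cases.items():
            bias_pcm = (calc - bench) / bench * 1.0e5
            print(f"{name}: bias = {bias_pcm:+.0f} pcm")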

  12. Telemetry Standards, IRIG Standard 106-17, Chapter 22, Network Based Protocol Suite

    Science.gov (United States)

    2017-07-01

    structure, field definitions, and media access control (MAC) conventions specified in IEEE 802.3-2012, Section 1, Clauses 2, 3, and 4. Data link ... support for ICMP broadcast pings. 22.3.1.2 Internet Group Management Protocol (IGMP): network nodes that consume or forward dynamically configured IPv4 ... selection of the exact SSL and TLS versions to use. Certificate generation and exchanges shall be in accordance with the profile identified in RFC

  13. Geophysical characterization from Itu intrusive suite

    International Nuclear Information System (INIS)

    Pascholati, M.E.

    1989-01-01

    The integrated use of geophysical, geological, geochemical, petrographical and remote sensing data resulted in a substantial increase in the knowledge of the Itu Intrusive Suite. The main geophysical method was gamma-ray spectrometry, together with fluorimetry and autoradiography. Three methods were used for the calculation of laboratory gamma-ray spectrometry data. For U, the regression method was the best one. For K and Th, the equation-system and absolute-calibration methods presented the best results. Surface gamma-ray spectrometry allowed comparison with laboratory data and made an important contribution to the study of environmental radiation. (author)

  14. STS-76 Cmdr Kevin Chilton suits up

    Science.gov (United States)

    1996-01-01

    STS-76 Mission Commander Kevin P. Chilton is donning his launch/entry suit in the Operations and Checkout Building. Chilton was assigned as pilot on his first two Shuttle flights; STS-76 will be his first as commander. Once suitup activities are completed the six-member STS-76 flight crew will depart for Launch Pad 39B, where the Space Shuttle Atlantis is undergoing final preparations for liftoff during an approximately seven- minute launch window opening around 3:13 a.m. EST, March 22.

  15. NIH bows to part of Rifkin suit.

    Science.gov (United States)

    Sun, M

    1984-11-30

    Having lost a round in its legal battle with Jeremy Rifkin over field tests of genetically engineered bacteria, the National Institutes of Health will conduct the simpler of two ecological analyses required by the National Environmental Policy Act on three proposed experiments. In May 1984 a federal district court ruling halted a University of California field test pending a decision on Rifkin's 1983 suit, which alleged that NIH had violated the Act by approving experiments without studying the ecological consequences. Still to be decided by the U.S. Court of Appeals is whether NIH must also issue full-scale environmental impact statements.

  16. Spinal Test Suites for Software Product Lines

    Directory of Open Access Journals (Sweden)

    Harsh Beohar

    2014-03-01

    A major challenge in testing software product lines is efficiency. In particular, testing a product line should take less effort than testing each and every product individually. We address this issue in the context of input-output conformance testing, which is a formal theory of model-based testing. We extend the notion of conformance testing on input-output featured transition systems with the novel concept of spinal test suites. We show how this concept dispenses with retesting the common behavior among different, but similar, products of a software product line.

  17. ANALYSIS OF DESIGN ELEMENTS IN SKI SUITS

    OpenAIRE

    Çileroğlu, Birsen; Kelleci Özeren, Figen; Kıvılcımlar, İnci Seda

    2015-01-01

    Popularity of Ski Sport in 19th century necessitated a new perspective on protective skiing clothing against the mountain climates and excessive cold. Winter clothing were the basis of ski attire during this period.  By the beginning of 20th century lining cloth were used to minimize the wind effect. The difference between the men and women’s ski attire of the time consisted of a knee-length skirts worn over the golf trousers.  Subsequent to the First World War, skiing suit models were influe...

  18. Durable Suit Bladder with Improved Water Permeability for Pressure and Environment Suits

    Science.gov (United States)

    Bue, Grant C.; Kuznetz, Larry; Orndoff, Evelyne; Tang, Henry; Aitchison, Lindsay; Ross, Amy

    2009-01-01

    Water vapor permeability is shown to be useful in rejecting heat and managing moisture accumulation in launch-and-entry pressure suits. Currently this is accomplished through a porous Gore-Tex layer in the Advanced Crew and Escape Suit (ACES) and in the baseline design of the Constellation Suit System Element (CSSE) Suit 1. Non-porous dense monolithic membranes (DMMs) that are available offer potential improvements for water vapor permeability with reduced gas leak. Accordingly, three different pressure bladder materials were investigated for water vapor permeability and oxygen leak: Elasthane™ 80A (thermoplastic polyether urethane) provided from stock polymer material, and two custom thermoplastic polyether urethanes. Water vapor, carbon dioxide and oxygen permeability of the DMMs was measured in a 0.13 mm thick stand-alone layer, and in a 0.08 mm and a 0.05 mm thick layer, each bonded to two different nylon and polyester woven reinforcing materials. Additional water vapor permeability and mechanical compression measurements were made with the reinforced 0.05 mm thick layers, further bonded with a polyester wicking layer and overlaid with moistened polyester fleece thermal underwear. This simulated the pressure from a supine crew person. The 0.05 mm thick nylon-reinforced sample with polyester wicking layer was further mechanically tested for wear and abrasion. Concepts for incorporating these materials in launch/entry and Extravehicular Activity pressure suits are presented.

  19. The institutionalization of benchmarking in the Danish construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard; Gottlieb, Stefan Christoffer

    and disseminated to the construction industry. The fourth chapter demonstrates how benchmarking was concretized into a benchmarking system and articulated to address several political focus areas for the construction industry. BEC accordingly became a political arena where many local perspectives and strategic...... interests had to be managed. The fifth chapter is about the operationalization of benchmarking and demonstrates how the concretizing and implementation of benchmarking gave rise to reactions from different actors with different and diverse interests in the benchmarking initiative. Political struggles...

  20. Toxicological benchmarks for wildlife: 1994 Revision

    International Nuclear Information System (INIS)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II.

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report
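
    The first-tier screening comparison described above amounts to a hazard quotient; a sketch with invented numbers (the report's benchmark values are species- and chemical-specific):

        # Tier-1 screen: estimated exposure dose vs. toxicological benchmark.
        exposure_mg_per_kg_day = 0.35    # hypothetical estimated oral dose for a receptor
        benchmark_mg_per_kg_day = 0.20   # hypothetical no-adverse-effect benchmark

        hq = exposure_mg_per_kg_day / benchmark_mg_per_kg_day
        if hq > 1.0:
            print(f"HQ = {hq:.1f} > 1: retain for baseline ecological risk assessment")
        else:
            print(f"HQ = {hq:.1f} <= 1: screened out")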

  1. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report.

  2. A benchmark server using high resolution protein structure data, and benchmark results for membrane helix predictions.

    Science.gov (United States)

    Rath, Emma M; Tessier, Dominique; Campbell, Alexander A; Lee, Hong Ching; Werner, Tim; Salam, Noeris K; Lee, Lawrence K; Church, W Bret

    2013-03-27

    Helical membrane proteins are vital for the interaction of cells with their environment. Predicting the location of membrane helices in protein amino acid sequences provides substantial understanding of their structure and function and identifies membrane proteins in sequenced genomes. Currently there is no comprehensive benchmark tool for evaluating prediction methods, and there is no publication comparing all available prediction tools. Current benchmark literature is outdated, as recently determined membrane protein structures are not included. Current literature is also limited to global assessments, as specialised benchmarks for predicting specific classes of membrane proteins were not previously carried out. We present a benchmark server at http://sydney.edu.au/pharmacy/sbio/software/TMH_benchmark.shtml that uses recent high resolution protein structural data to provide a comprehensive assessment of the accuracy of existing membrane helix prediction methods. The server further allows a user to compare uploaded predictions generated by novel methods, permitting the comparison of these novel methods against all existing methods compared by the server. Benchmark metrics include sensitivity and specificity of predictions for membrane helix location and orientation, and many others. The server allows for customised evaluations such as assessing prediction method performances for specific helical membrane protein subtypes. We report results for custom benchmarks which illustrate how the server may be used for specialised benchmarks. Which prediction method is the best performing method depends on which measure is being benchmarked. The OCTOPUS membrane helix prediction method is consistently one of the highest performing methods across all measures in the benchmarks that we performed. The benchmark server allows general and specialised assessment of existing and novel membrane helix prediction methods. Users can employ this benchmark server to determine the most
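
    For concreteness, per-residue sensitivity and specificity (two of the benchmark metrics named above) can be computed as follows; the observed/predicted strings are toy data, not the server's test set:

        # 'M' marks a membrane-helix residue, '-' a non-membrane residue.
        observed  = "---MMMMMMMMMMMMMMMMMM----"
        predicted = "--MMMMMMMMMMMMMMMMMM-----"

        tp = sum(o == "M" and p == "M" for o, p in zip(observed, predicted))
        tn = sum(o == "-" and p == "-" for o, p in zip(observed, predicted))
        fp = sum(o == "-" and p == "M" for o, p in zip(observed, predicted))
        fn = sum(o == "M" and p == "-" for o, p in zip(observed, predicted))

        print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")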

  3. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    Science.gov (United States)

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.

    2016-03-01

    This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package authored at Oak Ridge National Laboratory. Shift has been developed to scale well from laptops to small computing clusters to advanced supercomputers and includes features such as support for multiple geometry and physics engines, hybrid capabilities for variance reduction methods such as the Consistent Adjoint-Driven Importance Sampling methodology, advanced parallel decompositions, and tally methods optimized for scalability on supercomputing architectures. The scaling studies presented in this paper demonstrate good weak and strong scaling behavior for the implemented algorithms. Shift has also been validated and verified against various reactor physics benchmarks, including the Consortium for Advanced Simulation of Light Water Reactors' Virtual Environment for Reactor Analysis criticality test suite and several Westinghouse AP1000® problems presented in this paper. These benchmark results compare well to those from other contemporary Monte Carlo codes such as MCNP5 and KENO.

  4. Subgroup Benchmark Calculations for the Intra-Pellet Nonuniform Temperature Cases

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jung, Yeon Sang [Seoul National Univ. (Korea, Republic of); Liu, Yuxuan [Univ. of Michigan, Ann Arbor, MI (United States); Joo, Han Gyu [Seoul National Univ. (Korea, Republic of)

    2016-08-01

    A benchmark suite has been developed by Seoul National University (SNU) for intra-pellet nonuniform temperature distribution cases based on practical temperature profiles according to the thermal power levels. Though a new subgroup capability for nonuniform temperature distribution was implemented in MPACT, no validation calculation has been performed for the new capability. This study focuses on benchmarking the new capability through a code-to-code comparison. Two continuous-energy Monte Carlo codes, McCARD and CE-KENO, are engaged in obtaining reference solutions, and the MPACT results are compared to those of the SNU nTRACER code, which uses a similar cross-section library and subgroup method to obtain self-shielded cross sections.

  5. An mHealth Tool Suite for Mobility Assessment

    Directory of Open Access Journals (Sweden)

    Priyanka Madhushri

    2016-07-01

    The assessment of mobility and functional impairments in the elderly is important for early detection and prevention of fall conditions. Falls create serious threats to health by causing disabling fractures that reduce independence in the elderly. Moreover, they exert heavy economic burdens on society due to high treatment costs. Modern smartphones enable the development of innovative mobile health (mHealth) applications by integrating a growing number of inertial and environmental sensors along with the ever-increasing data processing and communication capabilities. Mobility assessment is one of the promising mHealth application domains. In this paper, we introduce a suite of smartphone applications for assessing mobility in the elderly population. The suite currently includes smartphone applications that automate and quantify the following standardized medical tests for assessing mobility: Timed Up and Go (TUG), 30-Second Chair Stand Test (30SCS), and 4-Stage Balance Test (4SBT). For each application, we describe its functionality and a list of parameters extracted by processing signals from the smartphone’s inertial sensors.
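
    A deliberately simplified sketch of how a smartphone might time a TUG test from its accelerometer (the paper's actual signal processing is more involved; the sampling rate, threshold, and synthetic signal below are assumptions):

        import math

        fs_hz = 50.0  # assumed accelerometer sampling rate
        accel = [(0.0, 0.0, 9.81)] * 100 + [(1.5, 0.3, 10.4)] * 450 + [(0.0, 0.0, 9.81)] * 100

        def tug_duration_s(samples, fs, rest_g=9.81, threshold=0.5):
            """Seconds between the first and last samples deviating from rest."""
            moving = [abs(math.sqrt(x*x + y*y + z*z) - rest_g) > threshold
                      for x, y, z in samples]
            first = moving.index(True)
            last = len(moving) - 1 - moving[::-1].index(True)
            return (last - first) / fs

        print(f"TUG duration ~ {tug_duration_s(accel, fs_hz):.1f} s")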

  6. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    sizes is developed for this benchmarking. The problems are based on a material interpolation scheme combined with a density filter. Different optimization solvers including Optimality Criteria (OC), the Method of Moving Asymptotes (MMA) and its globally convergent version GCMMA, the interior point...... solvers in IPOPT and FMINCON, and the sequential quadratic programming method in SNOPT, are benchmarked on the library using performance profiles. Whenever possible the methods are applied to both the nested and the Simultaneous Analysis and Design (SAND) formulations of the problem. The performance...... profiles conclude that general solvers are as efficient and reliable as classical structural topology optimization solvers. Moreover, the use of the exact Hessians in SAND formulations, generally produce designs with better objective function values. However, with the benchmarked implementations solving...
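
    Performance profiles of the kind used above can be reproduced in a few lines; the solver timings below are invented purely to show the mechanics (Dolan-More style: the fraction of problems each solver solves within a factor tau of the best solver):

        solvers = {"OC": [1.0, 2.0, 4.0], "MMA": [1.2, 1.8, 8.0], "IPOPT": [0.9, 3.5, 3.0]}
        n = 3  # number of benchmark problems

        def profile(times_by_solver, tau):
            best = [min(t[i] for t in times_by_solver.values()) for i in range(n)]
            return {s: sum(t[i] <= tau * best[i] for i in range(n)) / n
                    for s, t in times_by_solver.items()}

        for tau in (1.0, 2.0, 4.0):
            print(tau, profile(solvers, tau))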

  7. Shielding Integral Benchmark Archive and Database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, Bernadette Lugue [ORNL; Grove, Robert E [ORNL; Kodeli, I. [International Atomic Energy Agency (IAEA); Sartori, Enrico [ORNL; Gulliford, J. [OECD Nuclear Energy Agency

    2011-01-01

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  8. Development of Advanced Suite of Deterministic Codes for VHTR Physics Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog; Cho, J. Y.; Lee, K. H. (and others)

    2007-07-15

    An advanced suite of deterministic codes for VHTR physics analysis has been developed for detailed analysis of current and advanced reactor designs as part of a US-ROK collaborative I-NERI project. These code suites include the conventional 2-step procedure, in which a few group constants are generated by a transport lattice calculation and the reactor physics analysis is performed by a 3-dimensional diffusion calculation, and a whole-core transport code that can model local heterogeneities directly at the core level. Particular modeling issues in physics analysis of the gas-cooled VHTRs were resolved, including the double heterogeneity of the coated fuel particles, neutron streaming in the coolant channels, a strong core-reflector interaction, and large spectrum shifts due to changes in the surrounding environment, temperature and burnup. The geometry handling capability of the DeCART code was also extended to deal with the hexagonal fuel elements of the VHTR core. The developed code suites were validated and verified by comparing the computational results with those of Monte Carlo calculations for the benchmark problems.

  9. OECD/DOE/CEA VVER-1000 Coolant Transient Benchmark. Summary Record of the First Workshop (V1000-CT1)

    International Nuclear Information System (INIS)

    2003-01-01

    The first workshop for the VVER-1000 Coolant Transient (V1000CT) Benchmark was hosted by the Commissariat a l'Energie Atomique, Centre d'Etudes de Saclay, France. The V1000CT benchmark defines standard problems for validation of coupled three-dimensional (3-D) neutron-kinetics/system thermal-hydraulics codes for application to Soviet-designed VVER-1000 reactors, using actual plant data without any scaling. The overall objective is to assess computer codes used in the safety analysis of VVER power plants, specifically for their use in reactivity transient simulations in a VVER-1000. The V1000CT benchmark consists of two phases: V1000CT-1 - simulation of the switching on of one main coolant pump (MCP) while the other three MCPs are in operation, and V1000CT-2 - calculation of coolant mixing tests and a Main Steam Line Break (MSLB) scenario. Further background information on this benchmark can be found at the OECD/NEA benchmark web site. The purpose of the first workshop was to review the benchmark activities after the Starter Meeting held last year in Dresden, Germany: to discuss the participants' feedback and modifications introduced in the Benchmark Specifications on Phase 1; to present and to discuss modelling issues and preliminary results from the three exercises of Phase 1; to discuss the modelling issues of Exercise 1 of Phase 2; and to define the work plan and schedule in order to complete the two phases.

  10. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and of including "external" gardening demands is investigated. This includes the impacts (in isolation and combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are presented throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn on the robustness of the proposed system.
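
    A sketch of the band-rating idea with assumed thresholds (the paper's actual bands and their alignment with the CSH are not reproduced here):

        # Assign a water-use benchmark band from per-capita daily consumption.
        BANDS = [(80.0, "A"), (100.0, "B"), (120.0, "C"), (150.0, "D")]  # litres/person/day

        def water_band(lpd):
            for limit, band in BANDS:
                if lpd <= limit:
                    return band
            return "E"

        household_lpd = 4300.0 / 30 / 1.0  # hypothetical monthly litres / days / occupants
        print(f"{household_lpd:.0f} L/person/day -> band {water_band(household_lpd)}")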

  11. Benchmark Problems of the Geothermal Technologies Office Code Comparison Study

    Energy Technology Data Exchange (ETDEWEB)

    White, Mark D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Podgorney, Robert [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kelkar, Sharad M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McClure, Mark W. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Danko, George [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ghassemi, Ahmad [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fu, Pengcheng [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bahrami, Davood [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Barbier, Charlotte [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Cheng, Qinglu [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Chiu, Kit-Kwan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Detournay, Christine [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elsworth, Derek [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fang, Yi [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Furtney, Jason K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gan, Quan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gao, Qian [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Guo, Bin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hao, Yue [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Horne, Roland N. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Huang, Kai [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Im, Kyungjae [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Norbeck, Jack [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rutqvist, Jonny [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Safari, M. R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sesetty, Varahanaresh [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sonnenthal, Eric [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Tao, Qingfeng [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); White, Signe K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wong, Yang [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xia, Yidong [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-12-02

    A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office has sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study, benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. Study participants submitted solutions to problems for which their simulation tools were deemed capable or nearly capable. Some participating codes were originally developed for EGS applications whereas some others were designed for different applications but can simulate processes similar to those in EGS. Solution submissions from both were encouraged. In some cases, participants made small incremental changes to their numerical simulation codes to address specific elements of the problem, and in other cases participants submitted solutions with existing simulation tools, acknowledging the limitations of the code. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems

  12. Toxicological benchmarks for wildlife: 1996 Revision

    International Nuclear Information System (INIS)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II.

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  13. Toxicological benchmarks for wildlife: 1996 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  14. Metallogenic aspects of Itu intrusive suite

    International Nuclear Information System (INIS)

    Amaral, G.; Pascholati, E.M.

    1990-01-01

    The integrated use of geological, geochemical, geophysical and remote sensing data is providing interesting new information on the metallogenic characteristics of the Itu Intrusive Suite. From World War II until 1959, a wolframite deposit was mined near the border of the northernmost body (Itupeva Granite). This deposit is formed by greisen veins associated with cassiterite and topaz, clearly linked with later phases of magmatic differentiation. Generally those veins are related to hydrothermal alteration of the granites and the above-mentioned shear zone. U, Th and K determinations by field and laboratory gamma spectrometry were used for regional distribution analysis of those elements and their ratios, and for calculation of radiogenic heat production. In this respect, the Itupeva Granite is the hottest and presents several anomalies in the Th/U ratio, indicative of late- or post-magmatic oxidation processes. (author)

  15. Vadose zone flow convergence test suite

    Energy Technology Data Exchange (ETDEWEB)

    Butcher, B. T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-06-05

    Performance Assessment (PA) simulations for engineered disposal systems at the Savannah River Site involve highly contrasting materials and moisture conditions at and near saturation. These conditions cause severe convergence difficulties that typically result in unacceptable convergence, long simulation times, or excessive analyst effort. Adequate convergence is usually achieved in a trial-and-error manner by applying under-relaxation to the Saturation or Pressure variable, in a series of ever-decreasing relaxation values. SRNL would like a more efficient scheme implemented inside PORFLOW to achieve flow convergence in a more reliable and efficient manner. To this end, a suite of test problems that illustrate these convergence problems is provided to facilitate diagnosis and development of an improved convergence strategy. The attached files describe the test problems and the proposed resolution.
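
    To make the under-relaxation strategy concrete, the following is a minimal sketch of a damped fixed-point iteration of the kind an analyst might apply to a saturation or pressure update. It is generic Python, not PORFLOW's internal scheme, and the update function and omega values are illustrative assumptions.

        import numpy as np

        def relaxed_fixed_point(update, x0, omega=0.5, tol=1e-8, max_iter=500):
            """Under-relaxed fixed-point iteration:
            x_new = x_old + omega * (update(x_old) - x_old), with 0 < omega <= 1.
            Smaller omega damps oscillations in difficult problems but slows
            convergence -- the trade-off behind 'ever-decreasing' relaxation."""
            x = np.asarray(x0, dtype=float)
            for iteration in range(max_iter):
                x_new = x + omega * (update(x) - x)
                if np.max(np.abs(x_new - x)) < tol:
                    return x_new, iteration
                x = x_new
            raise RuntimeError("No convergence; retry with a smaller omega.")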

  16. STS-78 Pilot Kevin Kregel suits up

    Science.gov (United States)

    1996-01-01

    STS-78 Pilot Kevin R. Kregel gets a helping hand as he dons his launch/entry suit in the Operations and Checkout Building. A former instructor pilot in the Shuttle Training Aircraft, Kregel is embarking on his second trip into space. Along with six fellow crew members, he will depart the O&C in a short while and head for Launch Pad 39B, where the Space Shuttle Columbia awaits liftoff during a two-and-a-half hour launch window opening at 10:49 a.m. EDT, June 20. STS-78 will be an extended duration flight during which extensive research will be conducted in the Life and Microgravity Spacelab (LMS) located in the payload bay.

  17. Setting a benchmark in remediation

    International Nuclear Information System (INIS)

    Ong, Jacqueline

    2014-01-01

    cooled by a water spray.” While the soil is left with no organic content afterwards, it has some reuse potential in areas such as backfilling the excavated CPWE site and future site development works. Relying on its cumulative knowledge and experience gained in other clean-up projects over the past decade, Thiess also designed, constructed and operated two project-related buildings onsite. The first was the excavation soil building to enclose the CPWE; the other was the feed soil building, which held the soil for testing prior to treatment in the thermal desorption plant. The excavation soil building itself was proof of Thiess' engineering prowess, with Van Der Heiden saying its construction was challenging. “We had to build a structure over the entire encapsulation. It wasn't just a simple building. We had to underpin the building as we excavated. Essentially, we had to move the supporting structures internally as we dug from within,” he said. Thus, the building was constructed with internal walls and pillars that could be moved as the clean-up progressed. Thiess also designed and fitted both the buildings with emission control systems that filtered the air extracted from the buildings. The company then went a step further to make sure it was not tracking material out of the buildings. “Once the trucks were loaded and sealed in the excavation soil building, or unloaded in the feed soil building, they went into an airlock which housed a deluge wheel wash system for the tyres,” Van Der Heiden said. Thiess built a water treatment plant to clean up this water and also wastewater from the thermal desorption plant and worker decontamination facilities. Managing occupational health and hygiene was another major challenge: HCBD has extremely low occupational exposure limits, so everyone who worked around the material was issued high-level personal protective equipment that included respirators and disposable gloves and suits. Thiess was responsible for

  18. Benchmark results for the critical slab and sphere problem in one-speed neutron transport theory

    International Nuclear Information System (INIS)

    Rawat, Ajay; Mohankumar, N.

    2011-01-01

    Research highlights: → The critical slab and sphere problem in neutron transport under Case eigenfunction formalism is considered. → These equations reduce to integral expressions involving X functions. → Gauss quadrature is not ideal but DE quadrature is well-suited. → Several fold decrease in computational effort with improved accuracy is realisable. - Abstract: In this paper benchmark numerical results for the one-speed criticality problem with isotropic scattering for the slab and sphere are reported. The Fredholm integral equations of the second kind based on the Case eigenfunction formalism are numerically solved by Neumann iterations with the Double Exponential quadrature.
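
    The Double Exponential (tanh-sinh) quadrature mentioned above is easy to sketch. The rule below integrates over (-1, 1) with nodes that cluster double-exponentially at the endpoints, which is what makes it well suited to the end-point behaviour of the X-function integrals; the step size, node count and test integrand are illustrative choices, not the paper's actual parameters.

        import numpy as np

        def de_quadrature(f, n=30, h=0.1):
            """Tanh-sinh (Double Exponential) rule for the integral of f
            over (-1, 1). Keep n*h below ~3 in double precision so the
            nodes do not round onto the endpoints +/-1."""
            t = h * np.arange(-n, n + 1)
            u = 0.5 * np.pi * np.sinh(t)
            x = np.tanh(u)                                   # abscissae in (-1, 1)
            w = 0.5 * np.pi * np.cosh(t) / np.cosh(u) ** 2   # weights
            return h * np.sum(w * f(x))

        # Smooth test case: integral of exp(x) over (-1, 1) = e - 1/e ~ 2.3504024
        approx = de_quadrature(np.exp)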

  19. Variable Vector Countermeasure Suit for Space Habitation and Exploration

    Data.gov (United States)

    National Aeronautics and Space Administration — The "Variable Vector Countermeasure Suit (V2Suit) for Space Habitation and Exploration" is a visionary system concept that will revolutionize space missions by...

  20. Instrumented Suit Hard Upper Torso (HUT) for Ergonomic Assessment

    Data.gov (United States)

    National Aeronautics and Space Administration — It is well known that the EVA suit (EMU) has the potential to cause crew injury and decreased performance. Engineering data on the suit interaction of the human...

  1. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom; Javier Ortensi; Sonat Sen; Hans Hammer

    2013-09-01

    The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III

  2. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Soil and Litter Invertebrates and Heterotrophic Process

    Energy Technology Data Exchange (ETDEWEB)

    Will, M.E.

    1994-01-01

    This report presents a standard method for deriving benchmarks for the purpose of "contaminant screening," performed by comparing measured ambient concentrations of chemicals with the benchmarks. The work was performed under Work Breakdown Structure 1.4.12.2.3.04.07.02 (Activity Data Sheet 8304). In addition, this report presents sets of data concerning the effects of chemicals in soil on invertebrates and soil microbial processes, benchmarks for chemicals potentially associated with United States Department of Energy sites, and literature describing the experiments from which data were drawn for benchmark derivation.

  3. Effectiveness of Cognitive-Behavioral Therapy for Adolescent Depression: A Benchmarking Investigation

    Science.gov (United States)

    Weersing, V. Robin; Iyengar, Satish; Kolko, David J.; Birmaher, Boris; Brent, David A.

    2006-01-01

    In this study, we examined the effectiveness of cognitive-behavioral therapy (CBT) for adolescent depression. Outcomes of 80 youth treated with CBT in an outpatient depression specialty clinic, the Services for Teens at Risk Center (STAR), were compared to a "gold standard" CBT research benchmark. On average, youths treated with CBT in STAR…

  4. LDBC Graphalytics: A Benchmark for Large-Scale Graph Analysis on Parallel and Distributed Platforms

    NARCIS (Netherlands)

    Iosup, Alexandru; Hegeman, Tim; Ngai, Wing Lung; Heldens, Stijn; Prat-Pérez, Arnau; Manhardt, Thomas; Chafi, Hassan; Capota, Mihai; Sundaram, Narayanan; Anderson, Michael J.; Tanase, Ilie Gabriel; Xia, Yinglong; Nai, Lifeng; Boncz, Peter A.

    2016-01-01

    In this paper we introduce LDBC Graphalytics, a new industrial-grade benchmark for graph analysis platforms. It consists of six deterministic algorithms, standard datasets, synthetic dataset generators, and reference output, that enable the objective comparison of graph analysis platforms. Its test

  5. Benchmarking Reference Desk Service in Academic Health Science Libraries: A Preliminary Survey.

    Science.gov (United States)

    Robbins, Kathryn; Daniels, Kathleen

    2001-01-01

    This preliminary study was designed to benchmark patron perceptions of reference desk services at academic health science libraries, using a standard questionnaire. Responses were compared to determine the library that provided the highest-quality service overall and along five service dimensions. All libraries were rated very favorably, but none…

  6. Metabolic Assessment of Suited Mobility Using Functional Tasks

    Science.gov (United States)

    Norcross, J. R.; McFarland, S. M.; Ploutz-Snyder, Robert

    2016-01-01

    Existing methods for evaluating extravehicular activity (EVA) suit mobility have typically focused on isolated joint range of motion or torque, but these techniques have little to do with how well a crewmember functionally performs in an EVA suit. The objective of this study was to evaluate suited mobility at the system level by measuring the metabolic cost (MC) of functional tasks.

  7. Benchmarking European Gas Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter; Trinkner, Urs

    This is the final report for the pan-European efficiency benchmarking of gas transmission system operations commissioned by the Netherlands Authority for Consumers and Markets (ACM), Den Haag, on behalf of the Council of European Energy Regulators (CEER) under the supervision of the authors....

  8. Benchmarks for dynamic multi-objective optimisation

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    Full Text Available of dynamic multi-objective optimisation algorithms (DMOAs) are highlighted. In addition, new DMOO benchmark functions with complicated Pareto-optimal sets (POSs) and approaches to develop DMOOPs with either an isolated or deceptive Pareto-optimal front (POF...

  9. Benchmarking 2009: Trends in Education Philanthropy

    Science.gov (United States)

    Bearman, Jessica; Kilgore, Gin

    2009-01-01

    "Benchmarking 2009: Trends in Education Philanthropy" is Grantmakers for Education's (GFE) second annual study of grantmaking trends and priorities among members. As a national network dedicated to improving education outcomes through philanthropy, GFE members are mindful of their role in fostering greater knowledge in the field. They believe it's…

  10. Resolution for the Loviisa benchmark problem

    International Nuclear Information System (INIS)

    Garcia, C.R.; Quintero, R.; Milian, D.

    1992-01-01

    In the present paper, the Loviisa benchmark problem for cycles 11 and 8, and reactor blocks 1 and 2 from Loviisa NPP, is calculated. This problem uses low-leakage reload patterns and was posed at the second thematic group of the TIC meeting held in Rheinsberg, GDR, in March 1989. The SPPS-1 coarse-mesh code has been used for the calculations.

  11. Parton Distribution Benchmarking with LHC Data

    NARCIS (Netherlands)

    Ball, Richard D.; Carrazza, Stefano; Debbio, Luigi Del; Forte, Stefano; Gao, Jun; Hartland, Nathan; Huston, Joey; Nadolsky, Pavel; Rojo, Juan; Stump, Daniel; Thorne, Robert S.; Yuan, C. -P.

    2012-01-01

    We present a detailed comparison of the most recent sets of NNLO PDFs from the ABM, CT, HERAPDF, MSTW and NNPDF collaborations. We compare parton distributions at low and high scales and parton luminosities relevant for LHC phenomenology. We study the PDF dependence of LHC benchmark inclusive cross

  12. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias

    2016-09-16

    In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as to generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs. (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx.). © Springer International Publishing AG 2016.
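
    Tracker evaluations of this kind typically score each frame by the bounding-box overlap (intersection-over-union) between a tracker's prediction and the annotation. The sketch below computes that metric and a simple success rate; the (x, y, w, h) box convention and the 0.5 threshold are common assumptions, not necessarily this benchmark's exact protocol.

        import numpy as np

        def iou(a, b):
            """Intersection-over-union of two boxes given as (x, y, w, h)."""
            ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
            ix1 = min(a[0] + a[2], b[0] + b[2])
            iy1 = min(a[1] + a[3], b[1] + b[3])
            inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
            union = a[2] * a[3] + b[2] * b[3] - inter
            return inter / union if union > 0 else 0.0

        def success_rate(predicted, ground_truth, threshold=0.5):
            """Fraction of frames whose overlap with the annotation exceeds threshold."""
            scores = [iou(p, g) for p, g in zip(predicted, ground_truth)]
            return float(np.mean([s > threshold for s in scores]))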

  13. Prague texture segmentation data generator and benchmark

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal

    2006-01-01

    Vol. 2006, No. 64 (2006), pp. 67-68 ISSN 0926-4981 R&D Projects: GA MŠk(CZ) 1M0572; GA AV ČR(CZ) 1ET400750407; GA AV ČR IAA2075302 Institutional research plan: CEZ:AV0Z10750506 Keywords: image segmentation * texture * benchmark * web Subject RIV: BD - Theory of Information

  14. Benchmarks in Tacit Knowledge Skills Instruction

    DEFF Research Database (Denmark)

    Tackney, Charles T.; Strömgren, Ole; Sato, Toyoko

    2006-01-01

    experience more empowering of essential tacit knowledge skills than that found in educational institutions in other national settings. We specify the program forms and procedures for consensus-based governance and group work (as benchmarks) that demonstrably instruct undergraduates in the tacit skill...... dimensions of knowledge thought to be essential for success following graduation....

  15. Determination of Benchmarks Stability within Ahmadu Bello ...

    African Journals Online (AJOL)

    Heights of six geodetic benchmarks over a total distance of 8.6 km at the Ahmadu Bello University (ABU), Zaria, Nigeria were recomputed and analysed using the least squares adjustment technique. The network computations were tied to two fixed primary reference pillars situated outside the campus. The two-tail Chi-square ...

  16. Benchmarking and performance management in health care

    OpenAIRE

    Buttigieg, Sandra; EHMA Annual Conference: Public Health Care: Who Pays, Who Provides?

    2012-01-01

    Current economic conditions challenge health care providers globally. Healthcare organizations need to deliver optimal financial, operational, and clinical performance to sustain quality of service delivery. Benchmarking is one of the most potent and under-utilized management tools available and an analytic tool to understand organizational performance. Additionally, it is required for financial survival and organizational excellence.

  17. How Many Letters Should Preschoolers in Public Programs Know? The Diagnostic Efficiency of Various Preschool Letter-Naming Benchmarks for Predicting First-Grade Literacy Achievement

    Science.gov (United States)

    Piasta, Shayne B.; Petscher, Yaacov; Justice, Laura M.

    2012-01-01

    Review of current federal and state standards indicates little consensus or empirical justification regarding appropriate goals, often referred to as benchmarks, for preschool letter-name learning. The present study investigated the diagnostic efficiency of various letter-naming benchmarks using a longitudinal database of 371 children who attended…

  18. Chapter 1: Standard Model processes

    OpenAIRE

    Becher, Thomas

    2017-01-01

    This chapter documents the production rates and typical distributions for a number of benchmark Standard Model processes, and discusses new dynamical phenomena arising at the highest energies available at this collider. We discuss the intrinsic physics interest in the measurement of these Standard Model processes, as well as their role as backgrounds for New Physics searches.

  19. The risk of developing a contact allergy to materials present in diving suits and diving equipment

    Directory of Open Access Journals (Sweden)

    Gadomski Krzysztof

    2017-06-01

    Full Text Available Allergic contact eczema is the most common occupational skin disease caused by allergens. Thus far, no research has been conducted in Poland in relation to the development of contact allergies amongst divers resulting from particular diving suit components. A group of 86 divers were examined using allergy patch tests. Standard products of contact allergy diagnostics were used containing 40 allergens.

  20. The risk of developing a contact allergy to materials present in diving suits and diving equipment

    OpenAIRE

    Gadomski Krzysztof; Siermontowski Piotr; Dąbrowiecki Zbigniew; Olszański Romuald

    2017-01-01

    Allergic contact eczema is the most common occupational skin disease caused by allergens. Thus far, no research has been conducted in Poland in relation to the development of contact allergies amongst divers resulting from particular diving suit components. A group of 86 divers were examined using allergy patch tests. Standard products of contact allergy diagnostics were used containing 40 allergens.

  1. Benchmarking burnup reconstruction methods for dynamically operated research reactors

    Energy Technology Data Exchange (ETDEWEB)

    Sternat, Matthew R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Charlton, William S. [Univ. of Nebraska, Lincoln, NE (United States). National Strategic Research Institute; Nichols, Theodore F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-03-01

    The burnup of an HEU-fueled, dynamically operated research reactor, the Oak Ridge Research Reactor, was experimentally reconstructed using two different analytic methodologies and a suite of signature isotopes to evaluate techniques for estimating burnup for research reactor fuel. The methods studied include using individual signature isotopes and the complete mass spectrometry spectrum to recover the sample’s burnup. The individual isotopes, or sets of isotopes, include 148Nd, 137Cs+137Ba, 139La, and 145Nd+146Nd. The storage documentation from the analyzed fuel material provided two different measures of burnup: burnup percentage and the total power generated from the assembly in MWd. When normalized to conventional units, these two references differed by 7.8% (395.42 GWd/MTHM and 426.27 GWd/MTHM) in the resulting burnup for the spent fuel element used in the benchmark. Among all methods being evaluated, the results were within 11.3% of either reference burnup. The results were mixed in closeness to both reference burnups; however, consistent results were achieved from all three experimental samples.
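
    As an illustration of the single-isotope approach, the sketch below follows the classic 148Nd burnup-monitor logic (cf. ASTM E321): the measured 148Nd inventory divided by its effective fission yield gives fissions per initial heavy-metal atom (FIMA), which converts to GWd/MTHM through an assumed energy per fission. The yield, energy and molar mass below are nominal placeholder values, not the report's actual constants or corrections.

        AVOGADRO = 6.022e23
        MEV_TO_J = 1.602e-13
        MWD_TO_J = 8.64e10          # 1 MW for 86400 s

        def fima_from_nd148(atoms_nd148, atoms_hm_initial, yield_nd148=0.0167):
            """Fissions per initial heavy-metal atom from the 148Nd monitor."""
            return atoms_nd148 / (yield_nd148 * atoms_hm_initial)

        def burnup_gwd_per_mthm(fima, mev_per_fission=200.0, molar_mass=235.0):
            """Convert FIMA to GWd per metric tonne of initial heavy metal."""
            atoms_per_mthm = 1.0e6 / molar_mass * AVOGADRO    # HM atoms per tonne
            joules = fima * atoms_per_mthm * mev_per_fission * MEV_TO_J
            return joules / MWD_TO_J / 1000.0                  # MWd -> GWd

        # Example: a FIMA of 0.42 corresponds to roughly 400 GWd/MTHM,
        # the order of magnitude of the reference burnups quoted above.
        print(burnup_gwd_per_mthm(0.42))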

  2. Computational fluid dynamics benchmark dataset of airflow in tracheas

    Directory of Open Access Journals (Sweden)

    A.J. Bates

    2017-02-01

    Full Text Available Computational Fluid Dynamics (CFD) is fast becoming a useful tool to aid clinicians in pre-surgical planning through the ability to provide information that could otherwise be extremely difficult, if not impossible, to obtain. However, in order to provide clinically relevant metrics, the accuracy of the computational method must be sufficiently high. There are many alternative methods employed in the process of performing CFD simulations within the airways, including different segmentation and meshing strategies, as well as alternative approaches to solving the Navier–Stokes equations. However, as in vivo validation of the simulated flow patterns within the airways is not possible, little exists in the way of validation of the various simulation techniques. The data presented here consist of very highly resolved flow fields, with the resolution compared against the Kolmogorov length and time scales, the finest scales at which resolution is necessary. This data is therefore ideally suited to act as a benchmark case to which cheaper computational methods may be compared. A dataset and solution setup for one such more efficient method, large eddy simulation (LES), is also presented.
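
    The Kolmogorov scales referenced above follow directly from the kinematic viscosity and the turbulent dissipation rate, as in the sketch below; the example values for air in an airway are rough illustrative guesses, not figures from the dataset.

        def kolmogorov_scales(nu, epsilon):
            """Kolmogorov length [m] and time [s] scales from kinematic
            viscosity nu [m^2/s] and dissipation rate epsilon [m^2/s^3]:
                eta = (nu**3 / epsilon)**0.25,  tau = (nu / epsilon)**0.5"""
            eta = (nu ** 3 / epsilon) ** 0.25
            tau = (nu / epsilon) ** 0.5
            return eta, tau

        # Illustration: air at body temperature and an assumed dissipation rate.
        # Gives eta ~ 0.14 mm and tau ~ 1.3 ms -- the finest scales a DNS-like
        # simulation would need to resolve under these assumptions.
        eta, tau = kolmogorov_scales(nu=1.6e-5, epsilon=10.0)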

  3. Benchmarking transaction and analytical processing systems the creation of a mixed workload benchmark and its application

    CERN Document Server

    Bog, Anja

    2014-01-01

    This book introduces a new benchmark for hybrid database systems, gauging the effect of adding OLAP to an OLTP workload and analyzing the impact of commonly used optimizations in historically separate OLTP and OLAP domains in mixed-workload scenarios.

  4. Thermal Analysis and Design of an Advanced Space Suit

    Science.gov (United States)

    Lin, Chin H.; Campbell, Anthony B.; French, Jonathan D.; French, D.; Nair, Satish S.; Miles, John B.

    2000-01-01

    The thermal dynamics and design of an Advanced Space Suit are considered. A transient model of the Advanced Space Suit has been developed and implemented using MATLAB/Simulink to help with sizing, with design evaluation, and with the development of an automatic thermal comfort control strategy. The model is described and the thermal characteristics of the Advanced Space Suit are investigated, including various parametric design studies. The steady state performance envelope for the Advanced Space Suit is defined in terms of the thermal environment and human metabolic rate, and the transient response of the human-suit-MPLSS system is analyzed.

  5. Improved airline-type supplied-air plastic suit

    International Nuclear Information System (INIS)

    Jolley, L. Jr.; Zippler, D.B.; Cofer, C.H.; Harper, J.A.

    1978-06-01

    Two-piece supplied-air plastic suits are used extensively at the Savannah River Plant for personnel protection against inhalation of airborne plutonium and tritium. Worker comfort and noise level problems gave impetus to development of an improved suit and air distribution system. The resulting plastic suit and development work are discussed. The plastic suit unit cost is less than $20, the hearing zone noise level is less than 75 dBA, protection factors exceed 10,000, and user comfort is improved. This suit is expected to meet performance requirements for unrestricted use.

  6. Benchmark calculations of the solution-fuel criticality experiments by SRAC code system

    International Nuclear Information System (INIS)

    Senuma, Ichiro; Miyoshi, Yoshinori; Suzaki, Takenori; Kobayashi, Iwao

    1984-06-01

    Benchmark calculations were performed by using the newly developed SRAC (Standard Reactor Analysis Code) system and a nuclear data library based upon JENDL-2. The 34 benchmarks include a variety of compositions, concentrations and configurations of Pu homogeneous and U/Pu homogeneous systems (mainly nitrates); they also include UO2/PuO2 rods in fissile solution: a simplified model of the dissolver process of the fuel reprocessing plant. Calculation results show good agreement with the Monte Carlo method. This code-evaluation work has been done as part of the detailed design of CSEF (Critical Safety Experimental Facility), which is now in progress. (author)

  7. CBLIB 2014: a benchmark library for conic mixed-integer and continuous optimization

    DEFF Research Database (Denmark)

    Friberg, Henrik Alsing

    2016-01-01

    The Conic Benchmark Library is an ongoing community-driven project aiming to challenge commercial and open source solvers on mainstream cone support. In this paper, 121 mixed-integer and continuous second-order cone problem instances have been selected from 11 categories as representative for the instances available online. Since current file formats were found incapable, we embrace the new Conic Benchmark Format as the standard for conic optimization. Tools are provided to aid integration of this format with other software packages.

  8. Review of microscopic integral cross section data in fundamental reactor dosimetry benchmark neutron fields

    International Nuclear Information System (INIS)

    Fabry, A.; McElroy, W.N.; Kellogg, L.S.; Lippincott, E.P.; Grundl, J.A.; Gilliam, D.M.; Hansen, G.E.

    1976-01-01

    This paper is intended to review and critically discuss microscopic integral cross section measurement and calculation data for fundamental reactor dosimetry benchmark neutron fields. Specifically the review covers the following fundamental benchmarks: the spontaneous californium-252 fission neutron spectrum standard field; the thermal-neutron induced uranium-235 fission neutron spectrum standard field; the (secondary) intermediate-energy standard neutron field at the center of the Mol-ΣΣ, NISUS, and ITN-ΣΣ facilities; the reference neutron field at the center of the Coupled Fast Reactor Measurement Facility; the reference neutron field at the center of the 10% enriched uranium metal, cylindrical, fast critical; the (primary) Intermediate-Energy Standard Neutron Field

  9. Review of microscopic integral cross section data in fundamental reactor dosimetry benchmark neutron fields

    International Nuclear Information System (INIS)

    Fabry, A.; McElroy, W.N.; Kellogg, L.S.; Lippincott, E.P.; Grundl, J.A.; Gilliam, D.M.; Hansen, G.E.

    1976-10-01

    The paper is intended to review and critically discuss microscopic integral cross section measurement and calculation data for fundamental reactor dosimetry benchmark neutron fields. Specifically the review covers the following fundamental benchmarks: (1) the spontaneous californium-252 fission neutron spectrum standard field; (2) the thermal-neutron induced uranium-235 fission neutron spectrum standard field; (3) the (secondary) intermediate-energy standard neutron field at the center of the Mol-ΣΣ, NISUS, and ITN--ΣΣ facilities; (4) the reference neutron field at the center of the Coupled Fast Reactor Measurement Facility (CFRMF); (5) the reference neutron field at the center of the 10 percent enriched uranium metal, cylindrical, fast critical; and (6) the (primary) Intermediate-Energy Standard Neutron Field

  10. Automated Structure Solution with the PHENIX Suite

    Energy Technology Data Exchange (ETDEWEB)

    Zwart, Peter H.; Afonine, Pavel; Grosse-Kunstleve, Ralf W.; Hung, Li-Wei; Ioerger, Tom R.; McCoy, A.J.; McKee, Eric; Moriarty, Nigel; Read, Randy J.; Sacchettini, James C.; Sauter, Nicholas K.; Storoni, L.C.; Terwilliger, Thomas C.; Adams, Paul D.

    2008-06-09

    Significant time and effort are often required to solve and complete a macromolecular crystal structure. The development of automated computational methods for the analysis, solution and completion of crystallographic structures has the potential to produce minimally biased models in a short time without the need for manual intervention. The PHENIX software suite is a highly automated system for macromolecular structure determination that can rapidly arrive at an initial partial model of a structure without significant human intervention, given moderate resolution and good quality data. This achievement has been made possible by the development of new algorithms for structure determination, maximum-likelihood molecular replacement (PHASER), heavy-atom search (HySS), template and pattern-based automated model-building (RESOLVE, TEXTAL), automated macromolecular refinement (phenix.refine), and iterative model-building, density modification and refinement that can operate at moderate resolution (RESOLVE, AutoBuild). These algorithms are based on a highly integrated and comprehensive set of crystallographic libraries that have been built and made available to the community. The algorithms are tightly linked and made easily accessible to users through the PHENIX Wizards and the PHENIX GUI.

  11. Engineering Software Suite Validates System Design

    Science.gov (United States)

    2007-01-01

    EDAptive Computing Inc.'s (ECI) EDAstar engineering software tool suite, created to capture and validate system design requirements, was significantly funded by NASA's Ames Research Center through five Small Business Innovation Research (SBIR) contracts. These programs specifically developed Syscape, used to capture executable specifications of multi-disciplinary systems, and VectorGen, used to automatically generate tests to ensure system implementations meet specifications. According to the company, the VectorGen tests considerably reduce the time and effort required to validate implementation of components, thereby ensuring their safe and reliable operation. EDASHIELD, an additional product offering from ECI, can be used to diagnose, predict, and correct errors after a system has been deployed using EDAstar-created models. Initial commercialization for EDAstar included application by a large prime contractor in a military setting, and customers include various branches within the U.S. Department of Defense, industry giants like the Lockheed Martin Corporation, Science Applications International Corporation, and Ball Aerospace and Technologies Corporation, as well as NASA's Langley and Glenn Research Centers.

  12. The Inelastic Instrument suite at the SNS

    International Nuclear Information System (INIS)

    Granroth, Garrett E; Abernathy, Douglas L; Ehlers, Georg; Hagen, Mark E; Herwig, Kenneth W; Mamontov, Eugene; Ohl, Michael E; Wildgruber, Christoph U

    2008-01-01

    The instruments in the extensive suite of spectrometers at the SNS are in various stages of installation and commissioning. The Back Scattering Spectrometer (BASIS) is installed and is in commissioning. Its near-backscattering analyzer crystals provide the 3 μeV resolution as expected. BASIS will enter the user program in the fall of 2007. The ARCS wide angular-range thermal to epithermal neutron spectrometer will come on line in the fall of 2007, followed shortly by the Cold Neutron Chopper Spectrometer. These two direct geometry instruments provide moderate resolution and the ability to trade resolution for flux. In addition, both instruments have detector coverage out to 140° to provide a large Q range. The SEQUOIA spectrometer, to be completed in 2008, is the direct geometry instrument that will provide fine resolution in the thermal to epithermal range. The Spin-Echo spectrometer, to be completed on a similar time scale, will provide the finest energy resolution worldwide. The HYSPEC spectrometer, available no later than 2011, will provide polarized capabilities and optimized flux in the thermal energy range. Finally, the Vision chemical spectrometer will use crystal analyzers to study energy transfers into the epithermal range.

  13. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    Full Text Available The article analyses the evolution and possibilities of application of benchmarking in the telecommunication sphere. It studies the essence of benchmarking on the basis of a generalisation of the approaches of different scientists to the definition of this notion. In order to improve the activity of telecommunication operators, the article identifies the benchmarking technology and the main factors that determine the success of an operator in the modern market economy, along with the mechanism of benchmarking and the component stages of carrying out benchmarking by a telecommunication operator. It analyses the telecommunication market and identifies the dynamics of its development and tendencies of change in the composition of telecommunication operators and providers. Having generalised the existing experience of benchmarking application, the article identifies the main types of benchmarking of telecommunication operators by the following features: by the level of conduct (branch, inter-branch and international benchmarking); by relation to participation in the conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  14. The Variable Vector Countermeasure Suit (V2Suit) for space habitation and exploration.

    Science.gov (United States)

    Duda, Kevin R; Vasquez, Rebecca A; Middleton, Akil J; Hansberry, Mitchell L; Newman, Dava J; Jacobs, Shane E; West, John J

    2015-01-01

    The "Variable Vector Countermeasure Suit (V2Suit) for Space Habitation and Exploration" is a novel system concept that provides a platform for integrating sensors and actuators with daily astronaut intravehicular activities to improve health and performance, while reducing the mass and volume of the physiologic adaptation countermeasure systems, as well as the required exercise time during long-duration space exploration missions. The V2Suit system leverages wearable kinematic monitoring technology and uses inertial measurement units (IMUs) and control moment gyroscopes (CMGs) within miniaturized modules placed on body segments to provide a "viscous resistance" during movements against a specified direction of "down"-initially as a countermeasure to the sensorimotor adaptation performance decrements that manifest themselves while living and working in microgravity and during gravitational transitions during long-duration spaceflight, including post-flight recovery and rehabilitation. Several aspects of the V2Suit system concept were explored and simulated prior to developing a brassboard prototype for technology demonstration. This included a system architecture for identifying the key components and their interconnects, initial identification of key human-system integration challenges, development of a simulation architecture for CMG selection and parameter sizing, and the detailed mechanical design and fabrication of a module. The brassboard prototype demonstrates closed-loop control from "down" initialization through CMG actuation, and provides a research platform for human performance evaluations to mitigate sensorimotor adaptation, as well as a tool for determining the performance requirements when used as a musculoskeletal deconditioning countermeasure. This type of countermeasure system also has Earth benefits, particularly in gait or movement stabilization and rehabilitation.

  15. The Variable Vector Countermeasure Suit (V2Suit for Space Habitation and Exploration

    Directory of Open Access Journals (Sweden)

    Kevin R Duda

    2015-04-01

    Full Text Available The Variable Vector Countermeasure Suit (V2Suit) for Space Habitation and Exploration is a novel system concept that provides a platform for integrating sensors and actuators with daily astronaut intravehicular activities to improve health and performance, while reducing the mass and volume of the physiologic adaptation countermeasure systems, as well as the required exercise time during long-duration space exploration missions. The V2Suit system leverages wearable kinematic monitoring technology and uses inertial measurement units (IMUs) and control moment gyroscopes (CMGs) within miniaturized modules placed on body segments to provide a viscous resistance during movements against a specified direction of down, initially as a countermeasure to the sensorimotor adaptation performance decrements that manifest themselves while living and working in microgravity and during gravitational transitions during long-duration spaceflight, including post-flight recovery and rehabilitation. Several aspects of the V2Suit system concept were explored and simulated prior to developing a brassboard prototype for technology demonstration. This included a system architecture for identifying the key components and their interconnects, initial identification of key human-system integration challenges, development of a simulation architecture for CMG selection and parameter sizing, and the detailed mechanical design and fabrication of a module. The brassboard prototype demonstrates closed-loop control from down initialization through CMG actuation, and provides a research platform for human performance evaluations to mitigate sensorimotor adaptation, as well as a tool for determining the performance requirements when used as a musculoskeletal deconditioning countermeasure. This type of countermeasure system also has Earth benefits, particularly in gait or movement stabilization and rehabilitation.

  16. A proposal for benchmark tests for underactuated or compliant hands

    Directory of Open Access Journals (Sweden)

    G. A. Kragten

    2010-12-01

    Full Text Available There is a lack of agreement in the literature as to what exactly quantifies the performance of underactuated hands. This paper proposes two benchmark tests to measure the ability of underactuated hands to grasp different objects and the ability to hold the objects when force disturbances are applied. The first test determines the smallest and largest cylindrical objects which can be successfully grasped in an enveloping grasp or in a pinch grasp. The second test provides the maximal allowable force which can be applied to a grasped object without losing it. A setup was constructed consisting of standard components. Exemplary tests were applied to the Delft Hand 2. The proposed benchmark tests are well suited to quantify the performance of pick and place operations with underactuated hands. The results of the tests can be applied to evaluate, compare, and improve the performance of robotic hands.

    This paper was presented at the IFToMM/ASME International Workshop on Underactuated Grasping (UG2010), 19 August 2010, Montréal, Canada.

  17. Integration of oncology and palliative care: setting a benchmark.

    Science.gov (United States)

    Vayne-Bossert, P; Richard, E; Good, P; Sullivan, K; Hardy, J R

    2017-10-01

    Integration of oncology and palliative care (PC) should be the standard model of care for patients with advanced cancer. An expert panel developed criteria that constitute integration. This study determined whether the PC service within this Health Service, which is considered to be fully "integrated", could be benchmarked against these criteria. A survey was undertaken to determine the perceived level of integration of oncology and palliative care by all health care professionals (HCPs) within our cancer centre. An objective determination of integration was obtained from chart reviews of deceased patients. Integration was defined as >70% of all respondents answering "agree" or "strongly agree" to each indicator and >70% of patient charts supporting each criterion. Thirty-four HCPs participated in the survey (response rate 69%). Over 90% were aware of the outpatient PC clinic, interdisciplinary and consultation team, PC senior leadership, and the acceptance of concurrent anticancer therapy. None of the other criteria met the 70% agreement mark, but many respondents lacked the necessary knowledge to respond. The chart review included 67 patients, 92% of whom were seen by the PC team prior to death. The median time from referral to death was 103 days (range 0-1347). The level of agreement across all criteria was below our predefined definition of integration. The integration criteria relating to service delivery are medically focused and do not lend themselves to interdisciplinary review. The objective criteria can be audited and serve both as a benchmark and a basis for improvement activities.

  18. Utilizing a Suited Manikin Test Apparatus and Space Suit Ventilation Loop to Evaluate Carbon Dioxide Washout

    Science.gov (United States)

    Chullen, Cinda; Conger, Bruce; Korona, Adam; Kanne, Bryan; McMillin, Summer; Paul, Thomas; Norcross, Jason; Alonso, Jesus Delgado; Swickrath, Mike

    2015-01-01

    NASA is pursuing technology development of an Advanced Extravehicular Mobility Unit (AEMU), which is an integrated assembly made up primarily of a pressure garment system and a portable life support subsystem (PLSS). The PLSS is further composed of an oxygen subsystem, a ventilation subsystem, and a thermal subsystem. One of the key functions of the ventilation system is to remove and control the carbon dioxide (CO2) delivered to the crewmember. Carbon dioxide washout is the mechanism by which CO2 levels are controlled within the space suit helmet to limit the concentration of CO2 inhaled by the crewmember. CO2 washout performance is a critical parameter needed to ensure proper and robust designs that are insensitive to human variabilities in a space suit. A suited manikin test apparatus (SMTA) was developed to augment testing of the PLSS ventilation loop in order to provide a lower cost and more controlled alternative to human testing. The CO2 removal function is performed by the regenerative Rapid Cycle Amine (RCA) within the PLSS ventilation loop, and its performance is evaluated within the integrated SMTA and Ventilation Loop test system. This paper provides a detailed description of the schematics, test configurations, and hardware components of this integrated system. Results and analysis of testing performed with this integrated system are also presented.

  19. Revaluering benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    Over the past decade, benchmarking has increasingly gained a foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly that what may seem valuable is actually deterring researchers and practitioners from studying… this perspective develops more thorough knowledge about benchmarking and challenges the current dominating rationales. Hereby, it is argued that benchmarking is not a neutral practice. On the contrary, it is highly influenced by organizational ambitions and strategies, with the potential to transform...

  20. Verification of the code DYN3D/R with the help of international benchmarks

    International Nuclear Information System (INIS)

    Grundmann, U.; Rohde, U.

    1997-10-01

    Different benchmarks for reactors with square fuel assemblies were calculated with the code DYN3D/R. In this report comparisons with the results of the reference solutions are carried out. The results of DYN3D/R and the reference calculation for the eigenvalue k_eff and the power distribution are shown for the steady-state 3-dimensional IAEA benchmark. The results of the NEACRP benchmarks on control rod ejections in a standard PWR were compared with the reference solutions published by the NEA Data Bank. For assessing the accuracy of DYN3D/R results in comparison to other codes, the deviations from the reference solutions are considered. Detailed comparisons with the published reference solutions of the NEA-NSC benchmarks on uncontrolled withdrawal of control rods are made. The influence of the axial nodalization is also investigated. All in all, a good agreement of the DYN3D/R results with the reference solutions can be seen for the considered benchmark problems. (orig.)

  1. Variation In Accountable Care Organization Spending And Sensitivity To Risk Adjustment: Implications For Benchmarking.

    Science.gov (United States)

    Rose, Sherri; Zaslavsky, Alan M; McWilliams, J Michael

    2016-03-01

    Spending targets (or benchmarks) for accountable care organizations (ACOs) participating in the Medicare Shared Savings Program must be set carefully to encourage program participation while achieving fiscal goals and minimizing unintended consequences, such as penalizing ACOs for serving sicker patients. Recently proposed regulatory changes include measures to make benchmarks more similar for ACOs in the same area with different historical spending levels. We found that ACOs vary widely in how their spending levels compare with those of other local providers after standard case-mix adjustments. Additionally adjusting for survey measures of patient health meaningfully reduced the variation in differences between ACO spending and local average fee-for-service spending, but substantial variation remained, which suggests that differences in care efficiency between ACOs and local non-ACO providers vary widely. Accordingly, measures to equilibrate benchmarks between high- and low-spending ACOs--such as setting benchmarks to risk-adjusted average fee-for-service spending in an area--should be implemented gradually to maintain participation by ACOs with high spending. Use of survey information also could help mitigate perverse incentives for risk selection and upcoding and limit unintended consequences of new benchmarking methodologies for ACOs serving sicker patients. Project HOPE—The People-to-People Health Foundation, Inc.
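
    As a concrete reading of "case-mix adjustment" in this context, the sketch below regresses per-patient spending on risk features and compares the adjusted residuals of ACO-attributed patients with local fee-for-service patients. The linear model and variable names are illustrative assumptions, not the study's actual methodology.

        import numpy as np

        def adjusted_spending_gap(spend, casemix, is_aco):
            """spend: per-patient annual spending (n,); casemix: risk features
            (n, k); is_aco: boolean mask of ACO-attributed patients.
            Returns the mean case-mix-adjusted spending difference
            (ACO minus local fee-for-service)."""
            X = np.column_stack([np.ones(len(spend)), casemix])
            beta, *_ = np.linalg.lstsq(X, spend, rcond=None)  # fit on all patients
            residual = spend - X @ beta                        # adjusted spending
            return residual[is_aco].mean() - residual[~is_aco].mean()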

  2. Benchmarking in the globalised world and its impact on South ...

    African Journals Online (AJOL)

    In order to understand the potential impact of international benchmarking on South African institutions, it is important to explore the future role of benchmarking on the international level. In this regard, examples of transnational benchmarking activities will be considered. As a result of the involvement of South African ...

  3. Benchmarking a signpost to excellence in quality and productivity

    CERN Document Server

    Karlof, Bengt

    1993-01-01

    According to the authors, benchmarking exerts a powerful leverage effect on an organization and they consider some of the factors which justify their claim. Describes how to implement benchmarking and exactly what to benchmark. Explains benchlearning which integrates education, leadership development and organizational dynamics with the actual work being done and how to make it work more efficiently in terms of quality and productivity.

  4. MTCB: A Multi-Tenant Customizable database Benchmark

    NARCIS (Netherlands)

    van der Zijden, Wim; Hiemstra, Djoerd; van Keulen, Maurice

    2017-01-01

    We argue that there is a need for Multi-Tenant Customizable OLTP systems. Such systems need a Multi-Tenant Customizable Database (MTC-DB) as a backing. To stimulate the development of such databases, we propose the benchmark MTCB. Benchmarks for OLTP exist and multi-tenant benchmarks exist, but no

  5. OCAMS: The OSIRIS-REx Camera Suite

    Science.gov (United States)

    Rizk, B.; Drouet d'Aubigny, C.; Golish, D.; Fellows, C.; Merrill, C.; Smith, P.; Walker, M. S.; Hendershot, J. E.; Hancock, J.; Bailey, S. H.; DellaGiustina, D. N.; Lauretta, D. S.; Tanner, R.; Williams, M.; Harshman, K.; Fitzgibbon, M.; Verts, W.; Chen, J.; Connors, T.; Hamara, D.; Dowd, A.; Lowman, A.; Dubin, M.; Burt, R.; Whiteley, M.; Watson, M.; McMahon, T.; Ward, M.; Booher, D.; Read, M.; Williams, B.; Hunten, M.; Little, E.; Saltzman, T.; Alfred, D.; O'Dougherty, S.; Walthall, M.; Kenagy, K.; Peterson, S.; Crowther, B.; Perry, M. L.; See, C.; Selznick, S.; Sauve, C.; Beiser, M.; Black, W.; Pfisterer, R. N.; Lancaster, A.; Oliver, S.; Oquest, C.; Crowley, D.; Morgan, C.; Castle, C.; Dominguez, R.; Sullivan, M.

    2018-02-01

    The OSIRIS-REx Camera Suite (OCAMS) will acquire images essential to collecting a sample from the surface of Bennu. During proximity operations, these images will document the presence of satellites and plumes, record spin state, enable an accurate model of the asteroid's shape, and identify any surface hazards. They will confirm the presence of sampleable regolith on the surface, observe the sampling event itself, and image the sample head in order to verify its readiness to be stowed. They will document Bennu's history as an example of early solar system material, as a microgravity body with a planetesimal size-scale, and as a carbonaceous object. OCAMS is fitted with three cameras. The MapCam will record color images of Bennu as a point source on approach to the asteroid in order to connect Bennu's ground-based point-source observational record to later higher-resolution surface spectral imaging. The SamCam will document the sample site before, during, and after it is disturbed by the sample mechanism. The PolyCam, using its focus mechanism, will observe the sample site at sub-centimeter resolutions, revealing surface texture and morphology. While their imaging requirements divide naturally between the three cameras, they preserve a strong degree of functional overlap. OCAMS and the other spacecraft instruments will allow the OSIRIS-REx mission to collect a sample from a microgravity body on the same visit during which it was first optically acquired from long range, a useful capability as humanity reaches out to explore near-Earth, Main-Belt and Jupiter Trojan asteroids.

  6. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    Fritsch, Daniel S.; Raghavan, Suraj; Boxwala, Aziz; Earnhart, Jon; Tracton, Gregg; Cullip, Timothy; Chaney, Edward L.

    1997-01-01

    Purpose: The purpose of this investigation was to develop methods and software for computing realistic digitally reconstructed electronic portal images with known setup errors for use as benchmark test cases for evaluation and intercomparison of computer-based methods for image matching and detecting setup errors in electronic portal images. Methods and Materials: An existing software tool for computing digitally reconstructed radiographs was modified to compute simulated megavoltage images. An interface was added to allow the user to specify which setup parameter(s) will contain computer-induced random and systematic errors in a reference beam created during virtual simulation. Other software features include options for adding random and structured noise, Gaussian blurring to simulate geometric unsharpness, histogram matching with a 'typical' electronic portal image, specifying individual preferences for the appearance of the 'gold standard' image, and specifying the number of images generated. The visible male computed tomography data set from the National Library of Medicine was used as the planning image. Results: Digitally reconstructed electronic portal images with known setup errors have been generated and used to evaluate our methods for automatic image matching and error detection. Any number of different sets of test cases can be generated to investigate setup errors involving selected setup parameters and anatomic volumes. This approach has proved to be invaluable for determination of error detection sensitivity under ideal (rigid body) conditions and for guiding further development of image matching and error detection methods. Example images have been successfully exported for similar use at other sites. Conclusions: Because absolute truth is known, digitally reconstructed electronic portal images with known setup errors are well suited for evaluation of computer-aided image matching and error detection methods. High-quality planning images, such as
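
    A gold-standard test image of the kind described can be synthesized by applying a known translation, Gaussian unsharpness and noise to a reference digitally reconstructed image. The sketch below uses SciPy for illustration; the parameter values and helper name are assumptions, not the authors' actual tool.

        import numpy as np
        from scipy import ndimage

        def make_test_image(reference, shift_mm=(2.0, 0.0), pixel_mm=1.0,
                            blur_sigma=1.5, noise_std=0.01, seed=0):
            """Apply a *known* translational setup error, geometric unsharpness
            (Gaussian blur) and random noise to a reference 2-D image array."""
            rng = np.random.default_rng(seed)
            shift_px = np.asarray(shift_mm) / pixel_mm
            image = ndimage.shift(reference, shift_px, order=1, mode="nearest")
            image = ndimage.gaussian_filter(image, sigma=blur_sigma)
            image = image + rng.normal(0.0, noise_std, image.shape)
            return image, shift_px   # ground-truth error known by construction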

  7. BENCHMARKING – BETWEEN TRADITIONAL & MODERN BUSINESS ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Mihaela Ungureanu

    2011-09-01

    Full Text Available The concept of benchmarking implies a continuous process of performance improvement through which organizations seek superiority over the competitors perceived as market leaders. That superiority can always be questioned, its relativity originating in the rapid evolution of the economic environment. The approach favours innovation over traditional methods and is based on the will of managers who want to push limits and seek excellence. The end of the twentieth century was the period of benchmarking's broad adoption in various areas and of its transformation from a simple quantitative analysis tool into a source of information on the performance and quality of goods and services.

  8. Benchmark and Continuous Improvement of Performance

    Directory of Open Access Journals (Sweden)

    Alina Alecse Stanciu

    2017-12-01

    Full Text Available The present economic environment challenges us to perform, to think, and to re-think our personal strategies in accordance with our entities' strategies, whether we are simply employees or entrepreneurs. It is an environment characterised by Volatility, Uncertainty, Complexity and Ambiguity, a VUCA world in which entities must fight for the position gained in the market, disrupt new markets and new economies, and develop their client portfolios, with performance as the final goal. The pressure is amplified by the driving forces known as the 2030 Megatrends: Globalization 2.0, the Environmental Crisis and the Scarcity of Resources, Individualism and Value Pluralism, and Demographic Change. This paper examines whether benchmarking is an opportunity to increase the competitiveness of Romanian SMEs; the results show that benchmarking is a powerful instrument, combining reduced negative impact on the environment with a positive impact on the economy and society.

  9. Direct data access protocols benchmarking on DPM

    CERN Document Server

    Furano, Fabrizio; Keeble, Oliver; Mancinelli, Valentina

    2015-01-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in the last years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring infor...

  10. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    The SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system with 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, carried out as part of an unpublished master's thesis, the seven benchmarks of the HPC Challenge (HPCC) suite were installed and configured to reassess the performance of SANAM after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.

  11. IOP Physics benchmarks of the VELO upgrade

    CERN Document Server

    AUTHOR|(CDS)2068636

    2017-01-01

    The LHCb Experiment at the LHC is successfully performing precision measurements primarily in the area of flavour physics. The collaboration is preparing an upgrade that will start taking data in 2021 with a trigger-less readout at five times the current luminosity. The vertex locator has been crucial in the success of the experiment and will continue to be so for the upgrade. It will be replaced by a hybrid pixel detector and this paper discusses the performance benchmarks of the upgraded detector. Despite the challenging experimental environment, the vertex locator will maintain or improve upon its benchmark figures compared to the current detector. Finally the long term plans for LHCb, beyond those of the upgrade currently in preparation, are discussed.

  12. Toxicological benchmarks for wildlife. Environmental Restoration Program

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

    This report presents toxicological benchmarks for assessment of effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.

  13. Benchmark On Sensitivity Calculation (Phase III)

    Energy Technology Data Exchange (ETDEWEB)

    Ivanova, Tatiana [IRSN; Laville, Cedric [IRSN; Dyrda, James [Atomic Weapons Establishment; Mennerdahl, Dennis [E. Mennerdahl Systems; Golovko, Yury [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia; Raskach, Kirill [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia; Tsiboulia, Anatoly [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia; Lee, Gil Soo [Korea Institute of Nuclear Safety (KINS); Woo, Sweng-Woong [Korea Institute of Nuclear Safety (KINS); Bidaud, Adrien [Laboratoire de Physique Subatomique et de Cosmologie (LPSC); Patel, Amrit [NRC; Bledsoe, Keith C [ORNL; Rearden, Bradley T [ORNL; Gulliford, J. [OECD Nuclear Energy Agency

    2012-01-01

    The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.
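    For reference, the quantity computed and compared in this benchmark has a standard definition in the sensitivity/uncertainty literature (not restated in the abstract): the relative change in keff per relative change in a cross section. A minimal statement is

```latex
S_{k,\sigma} \;=\; \frac{\sigma}{k_{\mathrm{eff}}}\,
\frac{\partial k_{\mathrm{eff}}}{\partial \sigma}
\;\approx\; \frac{\Delta k_{\mathrm{eff}}/k_{\mathrm{eff}}}{\Delta \sigma / \sigma},
```

    with the coefficients resolved by nuclide, reaction, and energy group. The 'implicit' contributions mentioned above arise because perturbing one cross section also changes the self-shielded values of others during resonance processing.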

  14. Advanced Sensor Platform to Evaluate Manloads For Exploration Suit Architectures

    Science.gov (United States)

    McFarland, Shane; Pierce, Gregory

    2016-01-01

    Space suit manloads are defined as the outer bounds of force that the human occupant of a suit is able to exert onto the suit during motion. They are defined on a suit-component basis as a unit of maximum force that the suit component in question must withstand without failure. Existing legacy manloads requirements are specific to the suit architecture of the EMU and were developed in an iterative fashion; however, future exploration needs dictate a new suit architecture with bearings, load paths, and entry capability not previously used in any flight suit. No capability currently exists to easily evaluate manloads imparted by a suited occupant, which would be required to develop requirements for a flight-rated design. However, sensor technology has now progressed to the point where an easily deployable, repeatable and flexible manloads measuring technique could be developed, leveraging recent advances in sensor technology. INNOVATION: This development positively impacts the schedule, cost and safety risk associated with new exploration suit architectures. For a final flight design, a comprehensive and accurate manloads requirements set must be communicated to the contractor; failing that, a suit design which does not meet necessary manloads limits is prone to failure during testing or, worse, during an EVA, which could cause catastrophic failure of the pressure garment, posing risk to the crew. This work facilitates a viable means of developing manloads requirements using a range of human sizes and strengths. OUTCOME / RESULTS: Performed sensor market research. Highlighted three viable options (primary, secondary, and flexible packaging option). Designed and fabricated a custom bracket to evaluate the primary option on a single suit axial. Manned suited manloads testing was completed and the general approach verified.

  15. Benchmark problems for repository siting models

    International Nuclear Information System (INIS)

    Ross, B.; Mercer, J.W.; Thomas, S.D.; Lester, B.H.

    1982-12-01

    This report describes benchmark problems to test computer codes used in siting nuclear waste repositories. Analytical solutions, field problems, and hypothetical problems are included. Problems are included for the following types of codes: ground-water flow in saturated porous media, heat transport in saturated media, ground-water flow in saturated fractured media, heat and solute transport in saturated porous media, solute transport in saturated porous media, solute transport in saturated fractured media, and solute transport in unsaturated porous media

  16. SINBAD: Shielding integral benchmark archive and database

    International Nuclear Information System (INIS)

    Hunter, H.T.; Ingersoll, D.T.; Roussin, R.W.

    1996-01-01

    SINBAD is a new electronic database developed to store a variety of radiation shielding benchmark data so that users can easily retrieve and incorporate the data into their calculations. SINBAD is an excellent data source for users who require the quality assurance necessary in developing cross-section libraries or radiation transport codes. The future needs of the scientific community are best served by the electronic database format of SINBAD and its user-friendly interface, combined with its data accuracy and integrity

  17. Benchmarking of Remote Sensing Segmentation Methods

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal; Scarpa, G.; Gaetano, R.

    2015-01-01

    Vol. 8, No. 5 (2015), pp. 2240-2248 ISSN 1939-1404 R&D Projects: GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords: benchmark * remote sensing segmentation * unsupervised segmentation * supervised segmentation Subject RIV: BD - Theory of Information Impact factor: 2.145, year: 2015 http://library.utia.cas.cz/separaty/2015/RO/haindl-0445995.pdf

  18. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Science.gov (United States)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment that is dominated by radiative effects; in this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to reproduce both the behaviour of established and widely used codes and the results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
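    The core of a Jacobian-free Newton-Krylov solver fits in a few lines: each Newton step solves a linear system with GMRES, and the Jacobian-vector products the Krylov method needs are approximated by finite differences of the residual, so the Jacobian is never assembled. The toy sketch below (plain NumPy/SciPy, no preconditioning) illustrates the structure only and is not MUSIC's implementation.

```python
# Illustrative Jacobian-free Newton-Krylov (JFNK) skeleton.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk(residual, u0, tol=1e-10, max_newton=20, fd_eps=1e-7):
    u = u0.copy()
    for _ in range(max_newton):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        # Matrix-free J@v via a first-order finite difference of the residual.
        jv = lambda v: (residual(u + fd_eps * v) - r) / fd_eps
        J = LinearOperator((u.size, u.size), matvec=jv, dtype=float)
        du, _ = gmres(J, -r)          # Krylov solve for the Newton update
        u = u + du
    return u

# Toy nonlinear system, solved componentwise: u**3 + u - 1 = 0.
print(jfnk(lambda u: u**3 + u - 1.0, np.zeros(4)))
```

    In a production stellar hydrodynamics code the residual would encode the implicitly discretized conservation laws, and a physics-based preconditioner would be attached to the GMRES solve.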

  19. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data remain confidential, including intermediate state during the computation; the intended use is to assess customers' risk of defaulting on debt. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping state during the computation. We ran the system with two servers doing the secure computation using a database with information on about 2500 users. Answers arrived in about 25 seconds.
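    The abstract does not spell the linear program out. As a rough plaintext illustration of how benchmarking against a peer set reduces to an LP, the sketch below computes a DEA-style (data envelopment analysis) efficiency score; the DEA-type model family is an assumption on our part, and in the real system the equivalent solve runs under SPDZ on secret-shared data rather than in the clear.

```python
# Plaintext DEA-style benchmarking sketch (input-oriented CCR model).
import numpy as np
from scipy.optimize import linprog

def efficiency(X, Y, o):
    """Efficiency of unit o. X: (m inputs, n units); Y: (s outputs, n units)."""
    (m, n), s = X.shape, Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimise theta
    A_in = np.hstack([-X[:, [o]], X])           # sum_j lam_j x_ij <= theta x_io
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # sum_j lam_j y_rj >= y_ro
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

X = np.array([[4.0, 2.0, 3.0]])                 # one input, three units
Y = np.array([[2.0, 2.0, 1.0]])                 # one output
print([round(efficiency(X, Y, j), 3) for j in range(3)])   # [0.5, 1.0, 0.333]
```

    A unit scores 1.0 when no weighted combination of its peers can produce at least its outputs with fewer inputs; lower scores quantify the distance to that efficient frontier.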

  20. A PWR Thorium Pin Cell Burnup Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.

    2000-05-01

    As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium-fueled benchmark comparison was made in this study between the state-of-the-art codes MOCUP (MCNP4B + ORIGEN2) and CASMO-4 for burnup calculations. The MOCUP runs were done individually at MIT and INEEL, using the same model but with some differences in techniques and cross-section libraries. Eigenvalues and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2% and the average absolute difference less than 1%. The isotope concentration comparisons are better than a set of MOX fuel benchmarks and comparable to a set of uranium fuel benchmarks reported in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code-to-code differences are analyzed and discussed.

  1. Equilibrium Partitioning Sediment Benchmarks (ESBs) for the ...

    Science.gov (United States)

    This document describes procedures to determine the concentrations of nonionic organic chemicals in sediment interstitial waters. In previous ESB documents, the general equilibrium partitioning (EqP) approach was chosen for the derivation of sediment benchmarks because it accounts for the varying bioavailability of chemicals in different sediments and allows for the incorporation of the appropriate biological effects concentration. This provides for the derivation of benchmarks that are causally linked to the specific chemical, applicable across sediments, and appropriately protective of benthic organisms.  This equilibrium partitioning sediment benchmark (ESB) document was prepared by scientists from the Atlantic Ecology Division, Mid-Continent Ecology Division, and Western Ecology Division, the Office of Water, and private consultants. The document describes procedures to determine the interstitial water concentrations of nonionic organic chemicals in contaminated sediments. Based on these concentrations, guidance is provided on the derivation of toxic units to assess whether the sediments are likely to cause adverse effects to benthic organisms. The equilibrium partitioning (EqP) approach was chosen because it is based on the concentrations of chemical(s) that are known to be harmful and bioavailable in the environment.  This document, and five others published over the last nine years, will be useful for the Program Offices, including Superfund, a

  2. Status on benchmark testing of CENDL-3

    CERN Document Server

    Liu Ping

    2002-01-01

    CENDL-3, the newest version of the China Evaluated Nuclear Data Library, has been finished and distributed recently for benchmark analysis. The processing was carried out using the NJOY nuclear data processing code system. The calculations and analysis of the benchmarks were done with the Monte Carlo code MCNP and the reactor lattice code WIMSD5A, and the calculated results were compared with experimental results and with results based on ENDF/B-VI. In most thermal and fast uranium criticality benchmarks, the keff values calculated with CENDL-3 were in good agreement with experimental results. In the plutonium fast cores, the keff values were improved significantly with CENDL-3; this is due to the reevaluation of the fission spectrum and the elastic angular distributions of 239Pu and 240Pu. CENDL-3 underestimated the keff values compared with other evaluated data libraries for most spherical or cylindrical assemblies of plutonium or uranium with beryllium.

  3. The Benchmarking of Integrated Business Structures

    Directory of Open Access Journals (Sweden)

    Nifatova Olena M.

    2017-12-01

    Full Text Available The aim of the article is to study the role of benchmarking in the process of integration of business structures in the aspect of knowledge sharing. The results of studying the essential content of the concept of an “integrated business structure”, together with its semantic analysis, made it possible to form our own understanding of this category, with an emphasis on the need to consider it in three projections: legal, economic, and organizational. The economic projection of the essential content of integration associations of business units is supported by the organizational projection, which is expressed through such essential aspects as the existence of a single center that makes key decisions; understanding integration as knowledge sharing; and using benchmarking as an exchange of experience on key business processes. Understanding the process of integration of business units in the aspect of knowledge sharing involves obtaining certain informational benefits. Using benchmarking as an exchange of experience on key business processes in integrated business structures will help improve basic production processes and increase the efficiency of both the individual business unit and the IBS as a whole.

  4. A community benchmark for viscoplastic thermal convection in a 2-D square box

    Science.gov (United States)

    Tosi, N.; Stein, C.; Noack, L.; Hüttig, C.; Maierová, P.; Samuel, H.; Davies, D. R.; Wilson, C. R.; Kramer, S. C.; Thieulot, C.; Glerum, A.; Fraters, M.; Spakman, W.; Rozel, A.; Tackley, P. J.

    2015-07-01

    Numerical simulations of thermal convection in the Earth's mantle often employ a pseudoplastic rheology in order to mimic the plate-like behavior of the lithosphere. Yet the benchmark tests available in the literature are largely based on simple linear rheologies in which the viscosity is either assumed to be constant or weakly dependent on temperature. Here we present a suite of simple tests based on nonlinear rheologies featuring temperature-, pressure-, and strain rate-dependent viscosity. Eleven different codes based on the finite volume, finite element, or spectral methods have been used to run five benchmark cases leading to stagnant lid, mobile lid, and periodic convection in a 2-D square box. For two of these cases, we also show resolution tests from all contributing codes. In addition, we present a bifurcation analysis, describing the transition from a mobile lid regime to a periodic regime, and from a periodic regime to a stagnant lid regime, as a function of the yield stress. At a resolution of around 100 cells or elements in both vertical and horizontal directions, all codes reproduce the required diagnostic quantities with a discrepancy of at most ~3% in the presence of both linear and nonlinear rheologies. Furthermore, they consistently predict the critical value of the yield stress at which the transition between different regimes occurs. As the most recent mantle convection codes can handle a number of different geometries within a single solution framework, this benchmark will also prove useful when validating viscoplastic thermal convection simulations in such geometries.
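    For context, the viscoplastic rheology exercised by such cases is commonly written as the harmonic mean of a linear branch, dependent on temperature and depth, and a plastic branch, dependent on strain rate. The form below is the usual community formulation, offered here as a reference sketch rather than quoted from the paper:

```latex
\eta_{\mathrm{lin}}(T,z) = e^{-\gamma_T T + \gamma_z z}, \qquad
\eta_{\mathrm{plast}}(\dot\varepsilon) = \eta^{*} + \frac{\sigma_Y}{\dot\varepsilon}, \qquad
\eta_{\mathrm{eff}} = 2\left(\frac{1}{\eta_{\mathrm{lin}}} + \frac{1}{\eta_{\mathrm{plast}}}\right)^{-1},
```

    where ε̇ is the second invariant of the strain-rate tensor, σ_Y the yield stress, η* a regularizing constant viscosity, and γ_T, γ_z control the temperature and depth dependence. Varying σ_Y is what moves the system between the stagnant lid, mobile lid, and periodic regimes discussed above.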

  5. An overview of recent projects to study thermal protection in life rafts, lifeboats and immersion suits

    Energy Technology Data Exchange (ETDEWEB)

    Mak, L.; DuCharme, M. B.; Farnworth, B.; Wissler, E. H.; Brown, R.; Kuczora, A. [Maritime and Arctic Survival Scientific and Engineering Research Team (Canada)]

    2011-07-01

    Survival during a marine evacuation in cold regions is very challenging, yet international regulations do not require specific thermal protection or ventilation performance criteria for lifeboats. Likewise, the testing methods for approval testing of immersion suits are not standardised. This paper reviews recent completed and ongoing projects studying thermal protection in life rafts, lifeboats and immersion suits. An overview of several projects from the Maritime and Arctic Survival Scientific and Engineering Research Team (MASSERT) was conducted. This review provided the necessary knowledge to advance international standards and develop the thermal protection requirements for survival in the Arctic. The results showed that MASSERT correlated thermal insulation values between human subjects and thermal manikins in life rafts and in immersion suits. Manikins were found to be a valuable evaluation tool, as were the computerised models used as prediction tools.

  6. Innovative technology summary report: Sealed-seam sack suits

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-09-01

    Sealed-seam sack suits are an improved/innovative safety and industrial hygiene technology designed to protect workers from dermal exposure to contamination. Most of these disposable, synthetic-fabric suits are more protective than cotton suits, and are also water-resistant and gas permeable. Some fabrics provide a filter to aerosols, which is important to protection against contamination, while allowing air to pass, increasing comfort level of workers. It is easier to detect body-moisture breakthrough with the disposable suits than with cotton, which is also important to protecting workers from contamination. These suits present a safe and cost-effective (6% to 17% less expensive than the baseline) alternative to traditional protective clothing. This report covers the period from October 1996 to August 1997. During that time, sealed-seam sack suits were demonstrated during daily activities under normal working conditions at the C Reactor and under environmentally controlled conditions at the Los Alamos National Laboratory (LANL).

  7. Innovative technology summary report: Sealed-seam sack suits

    International Nuclear Information System (INIS)

    1998-09-01

    Sealed-seam sack suits are an improved/innovative safety and industrial hygiene technology designed to protect workers from dermal exposure to contamination. Most of these disposable, synthetic-fabric suits are more protective than cotton suits, and are also water-resistant and gas permeable. Some fabrics provide a filter to aerosols, which is important to protection against contamination, while allowing air to pass, increasing comfort level of workers. It is easier to detect body-moisture breakthrough with the disposable suits than with cotton, which is also important to protecting workers from contamination. These suits present a safe and cost-effective (6% to 17% less expensive than the baseline) alternative to traditional protective clothing. This report covers the period from October 1996 to August 1997. During that time, sealed-seam sack suits were demonstrated during daily activities under normal working conditions at the C Reactor and under environmentally controlled conditions at the Los Alamos National Laboratory (LANL)

  8. Toxicological benchmarks for screening potential contaminants of concern for effects on soil and litter invertebrates and heterotrophic process

    International Nuclear Information System (INIS)

    Will, M.E.; Suter, G.W. II.

    1994-09-01

    One of the initial stages in ecological risk assessments for hazardous waste sites is the screening of contaminants to determine which of them are worthy of further consideration as "contaminants of potential concern". This process is termed "contaminant screening". It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to soil- and litter-dwelling invertebrates, including earthworms, other micro- and macroinvertebrates, or heterotrophic bacteria and fungi. This report presents a standard method for deriving benchmarks for this purpose, sets of data concerning effects of chemicals in soil on invertebrates and soil microbial processes, and benchmarks for chemicals potentially associated with United States Department of Energy sites. In addition, the literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the benchmarks and the background concentration for the soil type should be considered contaminants of potential concern
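    The screening rule in the last sentence is simple enough to state as code. The sketch below is an illustrative restatement of that rule only; chemical names, units, and values are hypothetical, not taken from the report.

```python
# Flag chemicals whose measured soil concentration exceeds BOTH the
# toxicological benchmark and the background level for the soil type.
def screen(measured, benchmark, background):
    return {chem: conc for chem, conc in measured.items()
            if conc > benchmark[chem] and conc > background[chem]}

measured   = {"Zn": 120.0, "Cu": 15.0}   # mg/kg soil (hypothetical values)
benchmark  = {"Zn": 50.0,  "Cu": 60.0}
background = {"Zn": 40.0,  "Cu": 20.0}
print(screen(measured, benchmark, background))   # {'Zn': 120.0}
```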

  9. Toxicological benchmarks for screening potential contaminants of concern for effects on soil and litter invertebrates and heterotrophic process

    Energy Technology Data Exchange (ETDEWEB)

    Will, M.E.; Suter, G.W. II

    1994-09-01

    One of the initial stages in ecological risk assessments for hazardous waste sites is the screening of contaminants to determine which of them are worthy of further consideration as "contaminants of potential concern". This process is termed "contaminant screening". It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to soil- and litter-dwelling invertebrates, including earthworms, other micro- and macroinvertebrates, or heterotrophic bacteria and fungi. This report presents a standard method for deriving benchmarks for this purpose, sets of data concerning effects of chemicals in soil on invertebrates and soil microbial processes, and benchmarks for chemicals potentially associated with United States Department of Energy sites. In addition, the literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the benchmarks and the background concentration for the soil type should be considered contaminants of potential concern.

  10. SPICE benchmark for global tomographic methods

    Science.gov (United States)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul; Lebedev, Sergei; Beucler, Eric

    2008-11-01

    The existing global tomographic methods result in different models due to different parametrization, scale resolution and theoretical approach. To test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, it is necessary to perform a global-scale benchmark to understand the resolving properties of each specific imaging algorithm. In the framework of the Seismic wave Propagation and Imaging in Complex media: a European network (SPICE) project, it was decided to perform a benchmark experiment of global inversion algorithms. First, a preliminary benchmark with a simple isotropic model was carried out to check the feasibility in terms of acquisition geometry and numerical accuracy. Then, to fully validate tomographic schemes with a challenging synthetic data set, we constructed a complex anisotropic global model, which is characterized by 21 elastic constants and includes 3-D heterogeneities in velocity, anisotropy (radial and azimuthal), attenuation, and density, as well as surface topography and bathymetry. The intermediate-period (>32 s), high-fidelity anisotropic modelling was performed using a state-of-the-art anisotropic anelastic modelling code, the coupled spectral element method (CSEM), on modern massively parallel computing resources. The benchmark data set consists of 29 events, with three-component seismograms recorded by 256 stations. Because of the limitation of the available computing power, synthetic seismograms have a minimum period of 32 s and a length of 10 500 s. The inversion of the benchmark data set demonstrates several well-known problems of classical surface wave tomography, such as the importance of crustal correction to recover shallow structure, the loss of resolution with depth, horizontal and vertical smearing, the inaccuracy of the amplitude of isotropic S-wave velocity variation, and the difficulty of retrieving the magnitude of azimuthal

  11. OXBench: A benchmark for evaluation of protein multiple sequence alignment accuracy

    Directory of Open Access Journals (Sweden)

    Searle Stephen MJ

    2003-10-01

    Full Text Available Abstract Background The alignment of two or more protein sequences provides a powerful guide in the prediction of the protein structure and in identifying key functional residues; however, the utility of any prediction is completely dependent on the accuracy of the alignment. In this paper we describe a suite of reference alignments derived from the comparison of protein three-dimensional structures, together with evaluation measures and software that allow automatically generated alignments to be benchmarked. We test the OXBench benchmark suite on alignments generated by the AMPS multiple alignment method, then apply the suite to compare eight different multiple alignment algorithms. The benchmark shows the current state of the art for alignment accuracy and provides a baseline against which new alignment algorithms may be judged. Results The simple hierarchical multiple alignment algorithm, AMPS, performed as well as or better than more modern methods such as CLUSTALW once the PAM250 pair-score matrix was replaced by a BLOSUM series matrix. AMPS gave an accuracy in Structurally Conserved Regions (SCRs) of 89.9% over a set of 672 alignments. The T-COFFEE method on a data set of families with http://www.compbio.dundee.ac.uk. Conclusions The OXBench suite of reference alignments, evaluation software and results database provide a convenient method to assess progress in sequence alignment techniques. Evaluation measures that were dependent on comparison to a reference alignment were found to give good discrimination between methods. The STAMP Sc Score, which is independent of a reference alignment, also gave good discrimination. Application of OXBench in this paper shows that, with the exception of T-COFFEE, the majority of the improvement in alignment accuracy seen since 1985 stems from improved pair-score matrices rather than algorithmic refinements. The maximum theoretical alignment accuracy obtained by pooling results over all methods was 94
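    Reference-dependent accuracy measures of the kind evaluated here reduce to counting how many residue pairs aligned in the structure-derived reference are recovered by the test alignment. The sketch below shows one such sum-of-pairs style score on assumed toy alignments; it illustrates the idea and is not OXBench's exact scoring code.

```python
# Fraction of reference-aligned residue pairs recovered by a test alignment.
# Alignments are equal-length gapped strings over the same two sequences.
def aligned_pairs(row_a, row_b):
    pairs, i, j = set(), 0, 0
    for a, b in zip(row_a, row_b):
        if a != '-' and b != '-':
            pairs.add((i, j))        # residue i of A aligned to residue j of B
        i += a != '-'
        j += b != '-'
    return pairs

def pair_accuracy(ref, test):
    r, t = aligned_pairs(*ref), aligned_pairs(*test)
    return len(r & t) / len(r)

ref  = ("MK-LV", "MKILV")            # hypothetical reference alignment
test = ("MKL-V", "MKILV")            # hypothetical method output
print(round(pair_accuracy(ref, test), 2))   # 0.75
```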

  12. Miniature Flexible Humidity Sensitive Patches for Space Suits, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Advanced space suit technologies demand improved, simplified, long-life regenerative sensing technologies, including humidity sensors, that exceed the performance of...

  13. Leveraging Active Knit Technologies for Aerospace Pressure Suit Applications

    Data.gov (United States)

    National Aeronautics and Space Administration — Anti-Gravity Suits (AGS) are garments used in astronautics to prevent crew from experiencing orthostatic intolerance (OI) and consequential blackouts while...

  14. Benchmark datasets for phylogenomic pipeline validation, applications for foodborne pathogen surveillance.

    Science.gov (United States)

    Timme, Ruth E; Rand, Hugh; Shumway, Martin; Trees, Eija K; Simmons, Mustafa; Agarwala, Richa; Davis, Steven; Tillman, Glenn E; Defibaugh-Chavez, Stephanie; Carleton, Heather A; Klimke, William A; Katz, Lee S

    2017-01-01

    As next generation sequence technology has advanced, there have been parallel advances in genome-scale analysis programs for determining evolutionary relationships as proxies for epidemiological relationship in public health. Most new programs skip traditional steps of ortholog determination and multi-gene alignment, instead identifying variants across a set of genomes, then summarizing results in a matrix of single-nucleotide polymorphisms or alleles for standard phylogenetic analysis. However, public health authorities need to document the performance of these methods with appropriate and comprehensive datasets so they can be validated for specific purposes, e.g., outbreak surveillance. Here we propose a set of benchmark datasets to be used for comparison and validation of phylogenomic pipelines. We identified four well-documented foodborne pathogen events in which the epidemiology was concordant with routine phylogenomic analyses (reference-based SNP and wgMLST approaches). These are ideal benchmark datasets, as the trees, WGS data, and epidemiological data for each are all in agreement. We have placed these sequence data, sample metadata, and "known" phylogenetic trees in publicly-accessible databases and developed a standard descriptive spreadsheet format describing each dataset. To facilitate easy downloading of these benchmarks, we developed an automated script that uses the standard descriptive spreadsheet format. Our "outbreak" benchmark datasets represent the four major foodborne bacterial pathogens (Listeria monocytogenes, Salmonella enterica, Escherichia coli, and Campylobacter jejuni) and one simulated dataset where the "known tree" can be accurately called the "true tree". The downloading script and associated table files are available on GitHub: https://github.com/WGS-standards-and-analysis/datasets. These five benchmark datasets will help standardize comparison of current and future phylogenomic pipelines, and facilitate important cross
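    The final analysis step these datasets exercise, turning a SNP matrix into a tree for comparison against the "known tree", can be illustrated compactly. The sketch below uses Hamming distances and UPGMA clustering purely as a simple stand-in method; real pipelines use proper phylogenetic inference, and the isolate names and SNP calls here are hypothetical.

```python
# From a SNP matrix to a tree: toy illustration of the benchmarked step.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

# Rows: isolates; columns: SNP sites (0 = reference allele, 1 = alternate).
snps = np.array([[0, 1, 0, 0, 1],
                 [0, 1, 0, 1, 1],
                 [1, 0, 1, 0, 0],
                 [1, 0, 1, 1, 0]])
labels = ["isolate_A", "isolate_B", "isolate_C", "isolate_D"]

d = pdist(snps, metric="hamming") * snps.shape[1]   # pairwise SNP counts
tree = linkage(d, method="average")                  # UPGMA-style clustering
print(dendrogram(tree, labels=labels, no_plot=True)["ivl"])
```

    A pipeline validation would then compare the inferred topology with the dataset's known tree, for example via a Robinson-Foulds distance.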

  15. Benchmark datasets for phylogenomic pipeline validation, applications for foodborne pathogen surveillance

    Directory of Open Access Journals (Sweden)

    Ruth E. Timme

    2017-10-01

    Full Text Available Background As next generation sequence technology has advanced, there have been parallel advances in genome-scale analysis programs for determining evolutionary relationships as proxies for epidemiological relationship in public health. Most new programs skip traditional steps of ortholog determination and multi-gene alignment, instead identifying variants across a set of genomes, then summarizing results in a matrix of single-nucleotide polymorphisms or alleles for standard phylogenetic analysis. However, public health authorities need to document the performance of these methods with appropriate and comprehensive datasets so they can be validated for specific purposes, e.g., outbreak surveillance. Here we propose a set of benchmark datasets to be used for comparison and validation of phylogenomic pipelines. Methods We identified four well-documented foodborne pathogen events in which the epidemiology was concordant with routine phylogenomic analyses (reference-based SNP and wgMLST approaches). These are ideal benchmark datasets, as the trees, WGS data, and epidemiological data for each are all in agreement. We have placed these sequence data, sample metadata, and “known” phylogenetic trees in publicly-accessible databases and developed a standard descriptive spreadsheet format describing each dataset. To facilitate easy downloading of these benchmarks, we developed an automated script that uses the standard descriptive spreadsheet format. Results Our “outbreak” benchmark datasets represent the four major foodborne bacterial pathogens (Listeria monocytogenes, Salmonella enterica, Escherichia coli, and Campylobacter jejuni) and one simulated dataset where the “known tree” can be accurately called the “true tree”. The downloading script and associated table files are available on GitHub: https://github.com/WGS-standards-and-analysis/datasets. Discussion These five benchmark datasets will help standardize comparison of current and

  16. Morphing: A Novel Approach to Astronaut Suit Sizing

    Science.gov (United States)

    Margerum, Sarah; Clowers, Kurt; Rajulu, Sudhakar

    2006-01-01

    The fitting of a spacesuit to an astronaut is an iterative process consisting of two parts. The first uses anthropometric data to provide an approximation of the suit components that will fit the astronaut. The second part is the subjective fitting, where small adjustments are made based on the astronaut's preference. By providing a better approximation of the correct suit components, the entire fit process time can be reduced significantly. The goals of this project are twofold: (1) to evaluate the effectiveness of the existing sizing algorithm for the Mark III Hybrid suit and (2) to determine what additional components are needed in order to provide adequate sizing for the existing astronaut population. A single subject was scanned using a 3D whole-body scanner (VITUS 3D) in the Mark III suit in eight different poses, and four subjects in minimal clothing were also scanned in similar poses. The 3D external body scans of the suit and the subject are overlaid and visually aligned in a customized MATLAB program. The suit components were contracted or expanded linearly along the subject's limbs to match the subject's segmental lengths. Two independent measures were obtained from the morphing program on four subjects and compared with the existing sizing information. Two of the four subjects were in correspondence with the sizing algorithm and morphing results. The morphing outcome for a third subject, incompatible with the suit, suggested that an additional arm element at least 6 inches smaller than the existing smallest suit component would need to be acquired. The morphing result of the fourth subject, deemed incompatible with the suit using the sizing algorithm, indicated a different suit configuration which would be compatible. This configuration matched the existing suit fit check data.

  17. Use MACES IVA Suit for EVA Mobility Evaluations

    Science.gov (United States)

    Watson, Richard D.

    2014-01-01

    The use of an Intra-Vehicular Activity (IVA) suit for a spacewalk or Extra-Vehicular Activity (EVA) was evaluated for mobility and usability in the Neutral Buoyancy Lab (NBL) environment. The Space Shuttle Advanced Crew Escape Suit (ACES) has been modified (MACES) to integrate with the Orion spacecraft. The first several missions of the Orion MPCV spacecraft will not have mass available to carry an EVA-specific suit, so any EVA required will have to be performed in the MACES. Since the MACES was not designed with EVA in mind, it was unknown what mobility the suit would be able to provide for an EVA or whether a person could perform useful tasks for an extended time inside the pressurized suit. The suit was evaluated in multiple NBL runs by a variety of subjects, including crewmembers with significant EVA experience. Functional mobility tasks performed included: translation, body positioning, carrying tools, body stabilization, equipment handling, and use of tools. Hardware configurations included with and without TMG, suit with IVA gloves, and suit with EVA gloves. Most tasks were completed on ISS mockups with existing EVA tools. Some limited tasks were completed with prototype tools on a simulated rocky surface. Major findings include: demonstration of the ability to weigh out the suit, understanding the need to have subjects perform multiple runs prior to giving feedback, determination of critical sizing factors, and the need for adjustment of the suit work envelope. The early testing has demonstrated the feasibility of EVAs of limited duration and scope. Further testing is required with more flight-like tasking and constraints to validate these early results. If the suit is used for EVA, it will require mission-specific modifications for umbilical management or PLSS integration, safety tether attachment, and tool interfaces. These evaluations are continuing through calendar year 2014.

  18. Model-Based Engineering and Manufacturing CAD/CAM Benchmark

    International Nuclear Information System (INIS)

    Domm, T.D.; Underwood, R.S.

    1999-01-01

    than a single computer-aided manufacturing (CAM) system. The Internet was a technology that all companies were considering, either to move information more easily throughout the corporation or as a conduit for business, as the small firm was doing successfully. Because Pro/ENGINEER is the de facto CAD standard for the NWC, the Benchmark Team targeted companies using Parametric Technology Corporation (PTC) software tools. Most of the companies used Pro/ENGINEER for design to some degree, but found the PTC CAM product, Pro/MANUFACTURING, lacking compared to alternate CAM solutions. All of the companies visited found the data exchange between CAD/CAM systems problematic. It was apparent that these companies were trying to consolidate their software tools to reduce translation but had not been able to do so because no single solution had all the needed capabilities. In regard to organizational structure and human resources, two companies were found to be using product or program teams. These teams consisted of the technical staff capable of completing the entire task and were maintained throughout the project. This same strategy was evident at another of the companies, but with more mobility of members. For all companies visited except the small firm, the work breakdown structure and responsibility were essentially the same as Y-12's at this time. The functions of numerical control (NC), design, and process planning were separate and distinct. The team made numerous recommendations that are detailed in the report

  19. ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms

    DEFF Research Database (Denmark)

    Aumüller, Martin; Bernhardsson, Erik; Faithfull, Alexander

    2017-01-01

    visualise these as images, plots, and websites with interactive plots. ANN-Benchmarks aims to provide a constantly updated overview of the current state of the art of k-NN algorithms. In the short term, this overview allows users to choose the correct k-NN algorithm and parameters for their similarity search task; in the longer term, algorithm designers will be able to use this overview to test and refine automatic parameter tuning. The paper gives an overview of the system, evaluates the results of the benchmark, and points out directions for future work. Interestingly, very different...
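    The central measurement that such a benchmarking tool automates is the trade-off between recall and query time: how many of the true k nearest neighbors an approximate index returns. The self-contained sketch below computes recall@k for a deliberately crude "approximate" method (a random-projection shortlist plus exact reranking), chosen only to keep the example dependency-free; a real run would plug in libraries such as Annoy or FAISS.

```python
# recall@k of an approximate k-NN method against exact brute force.
import numpy as np

rng = np.random.default_rng(0)
data, queries, k = rng.normal(size=(5000, 32)), rng.normal(size=(50, 32)), 10

def exact_knn(q):
    return np.argsort(((data - q) ** 2).sum(axis=1))[:k]

proj = rng.normal(size=(32, 8))          # random projection to 8 dimensions
data_p = data @ proj

def approx_knn(q, n_candidates=200):
    # Shortlist in the projected space, then rerank the shortlist exactly.
    cand = np.argsort(((data_p - q @ proj) ** 2).sum(axis=1))[:n_candidates]
    return cand[np.argsort(((data[cand] - q) ** 2).sum(axis=1))[:k]]

recall = np.mean([len(np.intersect1d(exact_knn(q), approx_knn(q))) / k
                  for q in queries])
print(f"recall@{k}: {recall:.2f}")
```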

  20. Regression Tree-Based Methodology for Customizing Building Energy Benchmarks to Individual Commercial Buildings

    Science.gov (United States)

    Kaskhedikar, Apoorva Prakash

    According to the U.S. Energy Information Administration, commercial buildings represent about 40% of the United States' energy consumption, of which office buildings consume a major portion. Gauging the extent to which an individual building consumes energy in excess of its peers is the first step in initiating energy efficiency improvement. Energy benchmarking offers an initial building energy performance assessment without rigorous evaluation. Energy benchmarking tools based on the Commercial Buildings Energy Consumption Survey (CBECS) database are investigated in this thesis. This study proposes a new benchmarking methodology based on decision trees, where a relationship between the energy use intensities (EUI) and building parameters (continuous and categorical) is developed for different building types. This methodology was applied to the medium office and school building types contained in the CBECS database. The Random Forest technique was used to find the most influential parameters that impact building energy use intensities. Subsequently, significant correlations between EUIs and CBECS variables were identified. Other than floor area, some of the important variables were number of workers, location, number of PCs, and main cooling equipment. The coefficient of variation was used to evaluate the effectiveness of the new model. The customization technique proposed in this thesis was compared with another benchmarking model that is widely used by building owners and designers, namely ENERGY STAR's Portfolio Manager. That tool relies on standard linear regression methods, which are only able to handle continuous variables. The proposed model uses a data-mining technique and was found to perform slightly better than Portfolio Manager. The broader impact of the new benchmarking methodology is that it allows for identifying important categorical variables and then incorporating them in a local, as against a global, model framework for EUI
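    The two-step methodology described above maps naturally onto standard tooling: a random forest ranks candidate predictors of EUI, and a shallow regression tree then partitions buildings into peer groups whose mean EUI acts as the customized benchmark. The sketch below uses scikit-learn on a synthetic toy dataset; the column names are hypothetical stand-ins for CBECS variables, not the thesis's actual feature set.

```python
# Toy sketch: random-forest variable ranking + decision-tree benchmarking.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "floor_area":   rng.uniform(1e3, 1e5, n),
    "n_workers":    rng.integers(5, 500, n),
    "n_pcs":        rng.integers(5, 800, n),
    "cooling_type": rng.integers(0, 4, n),       # encoded categorical
})
eui = 40 + 0.05 * df["n_workers"] + 0.03 * df["n_pcs"] + rng.normal(0, 5, n)

# Step 1: rank candidate predictors by random-forest importance.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(df, eui)
print(sorted(zip(rf.feature_importances_.round(2), df.columns), reverse=True))

# Step 2: a shallow tree defines peer groups; each leaf's mean EUI is the
# benchmark against which a building falling in that leaf is judged.
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=30).fit(df, eui)
excess_eui = eui - tree.predict(df)              # positive = above peer group
```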

  1. Regional restoration benchmarks for Acropora cervicornis

    Science.gov (United States)

    Schopmeyer, Stephanie A.; Lirman, Diego; Bartels, Erich; Gilliam, David S.; Goergen, Elizabeth A.; Griffin, Sean P.; Johnson, Meaghan E.; Lustic, Caitlin; Maxwell, Kerry; Walter, Cory S.

    2017-12-01

    Coral gardening plays an important role in the recovery of depleted populations of threatened Acropora cervicornis in the Caribbean. Over the past decade, high survival coupled with fast growth of in situ nursery corals have allowed practitioners to create healthy and genotypically diverse nursery stocks. Currently, thousands of corals are propagated and outplanted onto degraded reefs on a yearly basis, representing a substantial increase in the abundance, biomass, and overall footprint of A. cervicornis. Here, we combined an extensive dataset collected by restoration practitioners to document early (1-2 yr) restoration success metrics in Florida and Puerto Rico, USA. By reporting region-specific data on the impacts of fragment collection on donor colonies, survivorship and productivity of nursery corals, and survivorship and productivity of outplanted corals during normal conditions, we provide the basis for a stop-light indicator framework for new or existing restoration programs to evaluate their performance. We show that current restoration methods are very effective, that no excess damage is caused to donor colonies, and that once outplanted, corals behave just as wild colonies. We also provide science-based benchmarks that can be used by programs to evaluate successes and challenges of their efforts, and to make modifications where needed. We propose that up to 10% of the biomass can be collected from healthy, large A. cervicornis donor colonies for nursery propagation. We also propose the following benchmarks for the first year of activities for A. cervicornis restoration: (1) >75% live tissue cover on donor colonies; (2) >80% survivorship of nursery corals; and (3) >70% survivorship of outplanted corals. Finally, we report productivity means of 4.4 cm yr-1 for nursery corals and 4.8 cm yr-1 for outplants as a frame of reference for ranking performance within programs. Such benchmarks, and potential subsequent adaptive actions, are needed to fully assess the

  2. A Uranium Bioremediation Reactive Transport Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Yabusaki, Steven B.; Sengor, Sevinc; Fang, Yilin

    2015-06-01

    A reactive transport benchmark problem set has been developed based on in situ uranium bio-immobilization experiments that have been performed at a former uranium mill tailings site in Rifle, Colorado, USA. Acetate-amended groundwater stimulates indigenous microorganisms to catalyze the reduction of U(VI) to a sparingly soluble U(IV) mineral. The interplay between the flow, acetate loading periods and rates, microbially-mediated and geochemical reactions leads to dynamic behavior in metal- and sulfate-reducing bacteria, pH, alkalinity, and reactive mineral surfaces. The benchmark is based on an 8.5 m long one-dimensional model domain with constant saturated flow and uniform porosity. The 159-day simulation introduces acetate and bromide through the upgradient boundary in 14-day and 85-day pulses separated by a 10 day interruption. Acetate loading is tripled during the second pulse, which is followed by a 50 day recovery period. Terminal electron accepting processes for goethite, phyllosilicate Fe(III), U(VI), and sulfate are modeled using Monod-type rate laws. Major ion geochemistry modeled includes mineral reactions, as well as aqueous and surface complexation reactions for UO2++, Fe++, and H+. In addition to the dynamics imparted by the transport of the acetate pulses, U(VI) behavior involves the interplay between bioreduction, which is dependent on acetate availability, and speciation-controlled surface complexation, which is dependent on pH, alkalinity and available surface complexation sites. The general difficulty of this benchmark is the large number of reactions (74), multiple rate law formulations, a multisite uranium surface complexation model, and the strong interdependency and sensitivity of the reaction processes. Results are presented for three simulators: HYDROGEOCHEM, PHT3D, and PHREEQC.
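    For orientation, Monod-type rate laws of the kind mentioned above generally take a dual-Monod form, in which the microbially mediated rate saturates in both the electron donor and the terminal electron acceptor. The expression below is the generic textbook form, not the benchmark's exact parameterization:

```latex
r \;=\; \mu_{\max}\,[B]\;\frac{[D]}{K_D + [D]}\;\frac{[A]}{K_A + [A]},
```

    where [B] is the biomass concentration of the mediating population, [D] the electron donor (here acetate), [A] the terminal electron acceptor (Fe(III), U(VI), or sulfate), μ_max the maximum specific rate, and K_D, K_A half-saturation constants. The coupling of these rates to acetate availability is what produces the dynamic interplay with the surface complexation chemistry described above.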

  3. Benchmarking urban energy efficiency in the UK

    International Nuclear Information System (INIS)

    Keirstead, James

    2013-01-01

    This study asks what is the ‘best’ way to measure urban energy efficiency. There has been recent interest in identifying efficient cities so that best practices can be shared, a process known as benchmarking. Previous studies have used relatively simple metrics that provide limited insight on the complexity of urban energy efficiency and arguably fail to provide a ‘fair’ measure of urban performance. Using a data set of 198 urban UK local administrative units, three methods are compared: ratio measures, regression residuals, and data envelopment analysis. The results show that each method has its own strengths and weaknesses regarding the ease of interpretation, ability to identify outliers and provide consistent rankings. Efficient areas are diverse but are notably found in low income areas of large conurbations such as London, whereas industrial areas are consistently ranked as inefficient. The results highlight the shortcomings of the underlying production-based energy accounts. Ideally urban energy efficiency benchmarks would be built on consumption-based accounts, but interim recommendations are made regarding the use of efficiency measures that improve upon current practice and facilitate wider conversations about what it means for a specific city to be energy-efficient within an interconnected economy. - Highlights: • Benchmarking is a potentially valuable method for improving urban energy performance. • Three different measures of urban energy efficiency are presented for UK cities. • Most efficient areas are diverse but include low-income areas of large conurbations. • Least efficient areas perform industrial activities of national importance. • Improve current practice with grouped per capita metrics or regression residuals
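    Of the three methods compared, the regression-residual approach is the easiest to state compactly: regress energy use on structural drivers, then rank areas by the portion of consumption the regression cannot explain. The sketch below is an illustrative toy with hypothetical variables and synthetic data, not the paper's actual model.

```python
# Regression-residual benchmarking sketch: a negative residual means an area
# uses less energy than its structure predicts, i.e. is relatively efficient.
import numpy as np

rng = np.random.default_rng(7)
n = 198                                           # one row per local area
population = rng.uniform(2e4, 8e6, n)
income     = rng.uniform(15e3, 45e3, n)
industry   = rng.uniform(0.0, 0.5, n)             # industrial activity share
energy = (1.2 * population * (1 + 0.8 * industry)
          * np.exp(rng.normal(0.0, 0.2, n)))      # synthetic consumption

# Ordinary least squares of log energy on candidate drivers.
X = np.column_stack([np.ones(n), np.log(population), np.log(income), industry])
beta, *_ = np.linalg.lstsq(X, np.log(energy), rcond=None)
residuals = np.log(energy) - X @ beta
ranking = np.argsort(residuals)                   # most efficient areas first
```

    Ratio measures (energy per capita) and data envelopment analysis differ in how they trade off multiple inputs and outputs, which is exactly the comparison the study draws.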

  4. Freud: a software suite for high-throughput simulation analysis

    Science.gov (United States)

    Harper, Eric; Spellings, Matthew; Anderson, Joshua; Glotzer, Sharon

    Computer simulation is an indispensable tool for the study of a wide variety of systems. As simulations scale to fill petascale and exascale supercomputing clusters, so too does the size of the data produced, as well as the difficulty in analyzing these data. We present Freud, an analysis software suite for efficient analysis of simulation data. Freud makes no assumptions about the system being analyzed, allowing for general analysis methods to be applied to nearly any type of simulation. Freud includes standard analysis methods such as the radial distribution function, as well as new methods including the potential of mean force and torque and local crystal environment analysis. Freud combines a Python interface with fast, parallel C++ analysis routines to run efficiently on laptops, workstations, and supercomputing clusters. Data analysis on clusters reduces data transfer requirements, a prohibitive cost for petascale computing. Used in conjunction with simulation software, Freud allows for smart simulations that adapt to the current state of the system, enabling the study of phenomena such as nucleation and growth, intelligent investigation of phases and phase transitions, and determination of effective pair potentials.
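    As a brief usage sketch, computing a radial distribution function with freud looks roughly like the following. This assumes freud's v2-style API (freud.box.Box, freud.density.RDF); names and signatures may differ across versions, so treat it as illustrative rather than authoritative.

```python
# Radial distribution function of random points in a periodic cubic box.
import numpy as np
import freud

rng = np.random.default_rng(3)
box = freud.box.Box.cube(10.0)                      # periodic 10x10x10 box
points = rng.uniform(-5.0, 5.0, size=(1000, 3)).astype(np.float32)

rdf = freud.density.RDF(bins=100, r_max=4.0)
rdf.compute(system=(box, points))
g_r, r = rdf.rdf, rdf.bin_centers                   # g(r) and bin radii
```

    The same compute-object pattern applies to the suite's other analyses, which is what makes it easy to drive from a running simulation for the "smart simulation" use case described above.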

  5. International Benchmarking of Electricity Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2014-01-01

    Electricity transmission system operators (TSOs) in Europe are increasingly subject to high-powered performance-based regulation, such as revenue-cap regimes. The determination of the parameters in such regimes is challenging for national regulatory authorities (NRAs), since there is normally a single TSO operating in each jurisdiction. The solution for European regulators has been found in international regulatory benchmarking, organized in collaboration with the Council of European Energy Regulators (CEER) in 2008 and 2012 for 22 and 23 TSOs, respectively. The frontier study provides static cost...

  6. An OpenMP Compiler Benchmark

    Directory of Open Access Journals (Sweden)

    Matthias S. Müller

    2003-01-01

    Full Text Available The purpose of this benchmark is to propose several optimization techniques and to test their existence in current OpenMP compilers. Examples are the removal of redundant synchronization constructs, effective constructs for alternative code and orphaned directives. The effectiveness of the compiler generated code is measured by comparing different OpenMP constructs and compilers. If possible, we also compare with the hand coded "equivalent" solution. Six out of seven proposed optimization techniques are already implemented in different compilers. However, most compilers implement only one or two of them.

  7. Benchmarks of Global Clean Energy Manufacturing

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-01-01

    The Clean Energy Manufacturing Analysis Center (CEMAC), sponsored by the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE), provides objective analysis and up-to-date data on global supply chains and manufacturing of clean energy technologies. Benchmarks of Global Clean Energy Manufacturing sheds light on several fundamental questions about the global clean technology manufacturing enterprise: How does clean energy technology manufacturing impact national economies? What are the economic opportunities across the manufacturing supply chain? What are the global dynamics of clean energy technology manufacturing?

  8. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties — i.e. visual composition — while smoothly moving through the environment and avoiding obstacles. A large number of different....... For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of scenarios...

  9. Benchmark simulations of ICRF antenna coupling

    International Nuclear Information System (INIS)

    Louche, F.; Lamalle, P. U.; Messiaen, A. M.; Compernolle, B. van; Milanesio, D.; Maggiora, R.

    2007-01-01

    The paper reports on ongoing benchmark numerical simulations of antenna input impedance parameters in the ion cyclotron range of frequencies with different coupling codes: CST Microwave Studio, TOPICA and ANTITER 2. In particular we study the validity of the approximation of a magnetized plasma slab by a dielectric medium of suitably chosen permittivity. Different antenna models are considered: a single-strap antenna, a 4-strap antenna and the 24-strap ITER antenna array. Whilst the diagonal impedances are mostly in good agreement, some differences between the mutual terms predicted by Microwave Studio and TOPICA have yet to be resolved

  10. COVE 2A Benchmarking calculations using NORIA

    International Nuclear Information System (INIS)

    Carrigan, C.R.; Bixler, N.E.; Hopkins, P.L.; Eaton, R.R.

    1991-10-01

    Six steady-state and six transient benchmarking calculations have been performed, using the finite element code NORIA, to simulate one-dimensional infiltration into Yucca Mountain. These calculations were made to support the code verification (COVE 2A) activity for the Yucca Mountain Site Characterization Project. COVE 2A evaluates the usefulness of numerical codes for analyzing the hydrology of the potential Yucca Mountain site. Numerical solutions for all cases were found to be stable. As expected, the difficulties and computer-time requirements associated with obtaining solutions increased with infiltration rate. 10 refs., 128 figs., 5 tabs

  11. The BRITNeY Suite: A Platform for Experiments

    DEFF Research Database (Denmark)

    Westergaard, Michael

    2006-01-01

    This paper describes a platform, the BRITNeY Suite, for experimenting with Coloured Petri nets. The BRITNeY Suite provides access to data-structures and a simulator for Coloured Petri nets via a powerful scripting language and plug-in mechanism, thereby making it easy to perform customized simula...

  12. The International Standards Organisation offshore structures standard

    International Nuclear Information System (INIS)

    Snell, R.O.

    1994-01-01

    The International Standards Organisation has initiated a program to develop a suite of ISO Codes and Standards for the Oil Industry. The Offshore Structures Standard is one of seven topics being addressed. The scope of the standard will encompass fixed steel and concrete structures, floating structures, Arctic structures and the site specific assessment of mobile drilling and accommodation units. The standard will use as base documents the existing recommended practices and standards most frequently used for each type of structure, and will develop them to incorporate best published and recognized practice and knowledge where it provides a significant improvement on the base document. Work on the Code has commenced under the direction of an internationally constituted sub-committee comprising representatives from most of the countries with a substantial offshore oil and gas industry. This paper outlines the background to the code and the format, content and work program

  13. Benchmarking the internal combustion engine and hydrogen

    International Nuclear Information System (INIS)

    Wallace, J.S.

    2006-01-01

    The internal combustion engine is a cost-effective and highly reliable energy conversion technology. Exhaust emission regulations introduced in the 1970s triggered extensive research and development that has significantly improved in-use fuel efficiency and dramatically reduced exhaust emissions. The current level of gasoline vehicle engine development is highlighted and representative emissions and efficiency data are presented as benchmarks. The use of hydrogen fueling for IC engines has been investigated over many decades, and the benefits and challenges arising are well known. The current state of hydrogen-fueled engine development will be reviewed and evaluated against gasoline-fueled benchmarks. The prospects for further improvements to hydrogen-fueled IC engines will be examined. While fuel cells are projected to offer greater energy efficiency than IC engines and zero emissions, the availability of fuel cells in quantity at reasonable cost is a barrier to their widespread adoption for the near future. In their current state of development, hydrogen-fueled IC engines are an effective technology to create demand for hydrogen fueling infrastructure until fuel cells become available in commercial quantities. During this transition period, hydrogen-fueled IC engines can achieve PZEV/ULSLEV emissions. (author)

  14. Simple mathematical law benchmarks human confrontations

    Science.gov (United States)

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-12-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains, accounting for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.
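
    The abstract states the law only qualitatively; work in this line commonly uses the progress-curve form τ_n = τ_1 n^(−b), where τ_n is the interval before the n-th attack and b measures escalation. The sketch below fits such a law by log-log regression; the functional form and the synthetic data are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical inter-event intervals tau_n for attacks n = 1..50,
# assumed to follow a progress curve tau_n = tau_1 * n**(-b) plus noise.
rng = np.random.default_rng(0)
n = np.arange(1, 51)
tau = 30.0 * n**-0.7 * rng.lognormal(sigma=0.2, size=n.size)

# Estimate b and tau_1 by least squares in log-log space:
# log tau_n = log tau_1 - b * log n
slope, intercept = np.polyfit(np.log(n), np.log(tau), 1)
print(f"estimated b = {-slope:.2f}, tau_1 = {np.exp(intercept):.1f}")
```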

  15. Swiss electricity grid - Benchmarking pilot project

    International Nuclear Information System (INIS)

    2001-01-01

    This article is a short version of the ENET number 210369. This report for the Swiss Federal Office of Energy (SFOE) describes a benchmarking pilot project carried out as a second phase in the development of a formula for the regulation of an open electricity market in Switzerland. It follows on from an initial phase involving the definition of a 'blue print' and a basic concept. The aims of the pilot project - to check out the practicability of the concept - are discussed. The collection of anonymised data for the benchmarking model from over 30 electricity utilities operating on all 7 Swiss grid levels and their integration in the three areas 'Technology', 'Grid Costs' and 'Capital Invested' are discussed in detail. In particular, confidentiality and data protection aspects are looked at. The methods used in the analysis of the data are described and the results of an efficiency analysis of various utilities are presented. The report is concluded with a listing of questions concerning data collection and analysis as well as operational and capital costs that are still to be answered

  16. REVISED STREAM CODE AND WASP5 BENCHMARK

    International Nuclear Information System (INIS)

    Chen, K

    2005-01-01

    STREAM is an emergency response code that predicts downstream pollutant concentrations for releases from the SRS area to the Savannah River. The STREAM code uses an algebraic equation to approximate the solution of the one-dimensional advective transport differential equation. This approach generates spurious oscillations in the concentration profile when modeling long duration releases. To improve the capability of the STREAM code to model long-term releases, its calculation module was replaced by the WASP5 code. WASP5 is a US EPA water quality analysis program that simulates one-dimensional pollutant transport through surface water. Test cases were performed to compare the revised version of STREAM with the existing version. For continuous releases, results predicted by the revised STREAM code agree with physical expectations. The WASP5 code was benchmarked with the US EPA 1990 and 1991 dye tracer studies, in which the transport of the dye was measured from its release at the New Savannah Bluff Lock and Dam downstream to Savannah. The peak concentrations predicted by WASP5 agreed with the measurements within ±20.0%. The transport times of the dye concentration peak predicted by WASP5 agreed with the measurements within ±3.6%. These benchmarking results demonstrate that STREAM should be capable of accurately modeling releases from SRS outfalls
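
    The abstract attributes the spurious oscillations to the algebraic approximation of the one-dimensional advective transport equation. As background, a generic first-order upwind discretization of that equation is monotone (oscillation-free) at the price of numerical diffusion; the sketch below illustrates this trade-off and is not the STREAM or WASP5 algorithm:

```python
import numpy as np

# 1-D advection dc/dt + v dc/dx = 0 on a uniform grid, upwind scheme.
nx, v, dx = 200, 1.0, 1.0
dt = 0.8 * dx / v              # Courant number 0.8 keeps the scheme stable
c = np.zeros(nx)
c[10:30] = 1.0                 # a long-duration release as a square pulse

for _ in range(100):
    # First-order upwind differencing: no spurious oscillations,
    # but diffusive -- the pulse smears as it moves downstream.
    c[1:] -= v * dt / dx * (c[1:] - c[:-1])
    c[0] = 0.0                 # clean upstream boundary condition

print(f"peak concentration after transport: {c.max():.3f}")
```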

  17. UAV CAMERAS: OVERVIEW AND GEOMETRIC CALIBRATION BENCHMARK

    Directory of Open Access Journals (Sweden)

    M. Cramer

    2017-08-01

    Full Text Available Different UAV platforms and sensors are used in mapping already, many of them equipped with (sometimes) modified cameras as known from the consumer market. Even though these systems normally fulfil their requested mapping accuracy, the question arises, which system performs best? This calls for a benchmark, to check selected UAV based camera systems in well-defined, reproducible environments. Such a benchmark is attempted in this work. Nine different cameras used on UAV platforms, representing typical camera classes, are considered. The focus is laid on geometry here, which is tightly linked to the process of geometrical calibration of the system. In most applications the calibration is performed in-situ, i.e. calibration parameters are obtained as part of the project data itself. This is often motivated because consumer cameras do not keep constant geometry and thus cannot be seen as metric cameras. Still, some of the commercial systems are quite stable over time, as was proven from repeated (terrestrial) calibration runs. Already (pre-)calibrated systems may offer advantages, especially when the block geometry of the project does not allow for a stable and sufficient in-situ calibration. Especially for such a scenario, close-to-metric UAV cameras may have advantages. Empirical airborne test flights in a calibration field have shown how block geometry influences the estimated calibration parameters and how consistently the parameters from lab calibration can be reproduced.

  18. Uav Cameras: Overview and Geometric Calibration Benchmark

    Science.gov (United States)

    Cramer, M.; Przybilla, H.-J.; Zurhorst, A.

    2017-08-01

    Different UAV platforms and sensors are used in mapping already, many of them equipped with (sometimes) modified cameras as known from the consumer market. Even though these systems normally fulfil their requested mapping accuracy, the question arises, which system performs best? This calls for a benchmark, to check selected UAV based camera systems in well-defined, reproducible environments. Such a benchmark is attempted in this work. Nine different cameras used on UAV platforms, representing typical camera classes, are considered. The focus is laid on geometry here, which is tightly linked to the process of geometrical calibration of the system. In most applications the calibration is performed in-situ, i.e. calibration parameters are obtained as part of the project data itself. This is often motivated because consumer cameras do not keep constant geometry and thus cannot be seen as metric cameras. Still, some of the commercial systems are quite stable over time, as was proven from repeated (terrestrial) calibration runs. Already (pre-)calibrated systems may offer advantages, especially when the block geometry of the project does not allow for a stable and sufficient in-situ calibration. Especially for such a scenario, close-to-metric UAV cameras may have advantages. Empirical airborne test flights in a calibration field have shown how block geometry influences the estimated calibration parameters and how consistently the parameters from lab calibration can be reproduced.
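
    The geometrical calibration discussed in both versions of this record amounts to estimating interior orientation (focal length, principal point, lens distortion) from imagery. As a widely used stand-in for the laboratory calibration workflow, the sketch below runs OpenCV's standard chessboard calibration; the image directory and board size are placeholders, and this is not the authors' pipeline:

```python
import glob
import cv2
import numpy as np

# 3-D object points of a 9x6 chessboard in its own plane (unit squares).
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.jpg"):   # placeholder image set
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate the camera matrix and distortion coefficients jointly
# from all detected boards.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print(f"RMS reprojection error: {rms:.3f} px")
print("camera matrix:\n", K)
```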

  19. Multisensor benchmark data for riot control

    Science.gov (United States)

    Jäger, Uwe; Höpken, Marc; Dürr, Bernhard; Metzler, Jürgen; Willersinn, Dieter

    2008-10-01

    Quick and precise response is essential for riot squads when coping with escalating violence in crowds. Often it is just a single person, known as the leader of the gang, who instigates other people and thus is responsible for excesses. Putting this single person out of action in most cases leads to a de-escalating situation. Fostering de-escalation is one of the main tasks of crowd and riot control. To do so, extensive situation awareness is mandatory for the squads and can be promoted by technical means such as video surveillance using sensor networks. To develop software tools for situation awareness, appropriate input data with well-known quality are needed. Furthermore, the developer must be able to measure algorithm performance and ongoing improvements. Last but not least, after algorithm development has finished and marketing aspects emerge, meeting of specifications must be proved. This paper describes a multisensor benchmark which exactly serves this purpose. We first define the underlying algorithm task. Then we explain details about data acquisition and sensor setup and finally we give some insight into quality measures of multisensor data. Currently, the multisensor benchmark described in this paper is applied to the development of basic algorithms for situational awareness, e.g. tracking of individuals in a crowd.

  20. Benchmarking database performance for genomic data.

    Science.gov (United States)

    Khushi, Matloob

    2015-06-01

    Genomic regions represent features such as gene annotations, transcription factor binding sites and epigenetic modifications. Performing various genomic operations, such as identifying overlapping/non-overlapping regions or the nearest gene annotations, is a common research need. The data can be saved in a database system for easy management; however, no database currently provides a comprehensive built-in algorithm to identify overlapping regions. Therefore I have developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data uploads in PostgreSQL were also better, although the general searching capability of both databases was almost equivalent. In addition, using the algorithm pair-wise, overlaps of >1000 datasets of transcription factor binding sites and histone marks, collected from previous publications, were reported, and it was found that HNF4G significantly co-locates with cohesin subunit STAG1 (SA1). © 2015 Wiley Periodicals, Inc.
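
    The overlap operation at the heart of the abstract reduces to the interval-intersection predicate: two regions on the same chromosome overlap exactly when each starts before the other ends. A self-contained SQLite sketch of that predicate follows; the schema is an assumption for illustration, and the paper's RegMap algorithm itself is not reproduced:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE peaks_a(chrom TEXT, start INT, stop INT);
CREATE TABLE peaks_b(chrom TEXT, start INT, stop INT);
""")
con.executemany("INSERT INTO peaks_a VALUES (?,?,?)",
                [("chr1", 100, 200), ("chr1", 500, 600)])
con.executemany("INSERT INTO peaks_b VALUES (?,?,?)",
                [("chr1", 150, 250), ("chr1", 700, 800)])

# Two regions overlap iff each one starts before the other one ends.
rows = con.execute("""
    SELECT a.chrom, a.start, a.stop, b.start, b.stop
    FROM peaks_a a JOIN peaks_b b
      ON a.chrom = b.chrom AND a.start <= b.stop AND b.start <= a.stop
""").fetchall()
print(rows)   # -> [('chr1', 100, 200, 150, 250)]
```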

  1. Benchmarking of SIMULATE-3 on engineering workstations

    International Nuclear Information System (INIS)

    Karlson, C.F.; Reed, M.L.; Webb, J.R.; Elzea, J.D.

    1990-01-01

    The nuclear fuel management department of Arizona Public Service Company (APS) has evaluated various computer platforms for a departmental engineering and business work-station local area network (LAN). Historically, centralized mainframe computer systems have been utilized for engineering calculations. Increasing usage and the resulting longer response times on the company mainframe system and the relative cost differential between a mainframe upgrade and workstation technology justified the examination of current workstations. A primary concern was the time necessary to turn around routine reactor physics reload and analysis calculations. Computers ranging from a Definicon 68020 processing board in an AT compatible personal computer up to an IBM 3090 mainframe were benchmarked. The SIMULATE-3 advanced nodal code was selected for benchmarking based on its extensive use in nuclear fuel management. SIMULATE-3 is used at APS for reload scoping, design verification, core follow, and providing predictions of reactor behavior under nominal conditions and planned reactor maneuvering, such as axial shape control during start-up and shutdown

  2. 'Wasteaware' benchmark indicators for integrated sustainable waste management in cities.

    Science.gov (United States)

    Wilson, David C; Rodic, Ljiljana; Cowing, Michael J; Velis, Costas A; Whiteman, Andrew D; Scheinberg, Anne; Vilches, Recaredo; Masterson, Darragh; Stretz, Joachim; Oelz, Barbara

    2015-01-01

    This paper addresses a major problem in international solid waste management, which is twofold: a lack of data, and a lack of consistent data to allow comparison between cities. The paper presents an indicator set for integrated sustainable waste management (ISWM) in cities both North and South, to allow benchmarking of a city's performance, comparing cities and monitoring developments over time. It builds on pioneering work for UN-Habitat's solid waste management in the World's cities. The comprehensive analytical framework of a city's solid waste management system is divided into two overlapping 'triangles' - one comprising the three physical components, i.e. collection, recycling, and disposal, and the other comprising three governance aspects, i.e. inclusivity; financial sustainability; and sound institutions and proactive policies. The indicator set includes essential quantitative indicators as well as qualitative composite indicators. This updated and revised 'Wasteaware' set of ISWM benchmark indicators is the cumulative result of testing various prototypes in more than 50 cities around the world. This experience confirms the utility of indicators in allowing comprehensive performance measurement and comparison of both 'hard' physical components and 'soft' governance aspects; and in prioritising 'next steps' in developing a city's solid waste management system, by identifying both local strengths that can be built on and weak points to be addressed. The Wasteaware ISWM indicators are applicable to a broad range of cities with very different levels of income and solid waste management practices. Their wide application as a standard methodology will help to fill the historical data gap. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Pollution prevention opportunity assessment benchmarking: Recommendations for Hanford

    Energy Technology Data Exchange (ETDEWEB)

    Engel, J.A.

    1994-05-01

    Pollution Prevention Opportunity Assessments (P2OAs) are an important first step in any pollution prevention program. While P2OAs have been and are being conducted at Hanford, there exists no standard guidance, training, tracking, or systematic approach to identifying and addressing the most important waste streams. The purpose of this paper is to serve as a guide to the Pollution Prevention group at Westinghouse Hanford in developing and implementing P2OAs at Hanford. By searching the literature and benchmarking other sites and agencies, the best elements from those programs can be incorporated and pitfalls more easily avoided. This search began with the 1988 Environmental Protection Agency document that introduced P2OAs (then called Process Waste Assessments, PWAs). This important document presented the basic framework of P2OA features which appeared in almost all later programs. Major Department of Energy programs were also examined, with particular attention to the Defense Programs P2OA method of a graded approach, as presented at the Kansas City Plant. The graded approach is a system of conducting P2OAs of varying levels of detail depending on the size and importance of the waste stream. Finally, private industry programs were examined briefly. While all the benchmarked programs had excellent features, it was determined that the size and mission of Hanford precluded lifting any one program for use. Thus, a series of recommendations was made, based on the literature review, in order to begin an extensive program of P2OAs at Hanford. These recommendations are in the areas of: facility Pollution Prevention teams, P2OA scope and methodology, guidance documents, training for facilities (and management), technical and informational support, tracking and measuring success, and incentives.

  4. Final Report of the NASA Office of Safety and Mission Assurance Agile Benchmarking Team

    Science.gov (United States)

    Wetherholt, Martha

    2016-01-01

    To ensure that the NASA Safety and Mission Assurance (SMA) community remains in a position to perform reliable Software Assurance (SA) on NASA's critical software (SW) systems as the software industry rapidly transitions from waterfall to Agile processes, Terry Wilcutt, Chief, Safety and Mission Assurance, Office of Safety and Mission Assurance (OSMA), established the Agile Benchmarking Team (ABT). The Team's tasks were: 1. Research background literature on current Agile processes; 2. Perform benchmark activities with other organizations that are involved in software Agile processes to determine best practices; 3. Collect information on Agile-developed systems to enable improvements to the current NASA standards and processes to enhance their ability to perform reliable software assurance on NASA Agile-developed systems; 4. Suggest additional guidance and recommendations for updates to those standards and processes, as needed. The ABT's findings and recommendations for software management, engineering and software assurance are addressed herein.

  5. Development of a California commercial building benchmarking database

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.

  6. Curved characteristics best suited for Growth rates, Relative strength and Performance Time of female Olympic weightlifters

    OpenAIRE

    EBADA, Khaled

    2014-01-01

    This study aims to identify growth rates, relative strength, and performance time for female lifters, to define the curved characteristics best suited to the growth rates, relative strength, and performance time of female Olympic weightlifters, and to evaluate the performance of the snatch and clean & jerk for female lifters; coaches can use the curved characteristics as a standard to guide them in planning and preparing training programs. The study was applied to a sample of 88 female lifter participants...

  7. A Dwarf-based Scalable Big Data Benchmarking Methodology

    OpenAIRE

    Gao, Wanling; Wang, Lei; Zhan, Jianfeng; Luo, Chunjie; Zheng, Daoyi; Jia, Zhen; Xie, Biwei; Zheng, Chen; Yang, Qiang; Wang, Haibin

    2017-01-01

    Different from the traditional benchmarking methodology that creates a new benchmark or proxy for every possible workload, this paper presents a scalable big data benchmarking methodology. Among a wide variety of big data analytics workloads, we identify eight big data dwarfs, each of which captures the common requirements of each class of unit of computation while being reasonably divorced from individual implementations. We implement the eight dwarfs on different software stacks, e.g., Open...

  8. Application of dynamic benchmarking of rotating machinery for eMaintenance

    OpenAIRE

    Galar, Diego; Berges, Luis

    2010-01-01

    The vibration analysis and condition monitoring technology is based on comparison of measurements obtained with benchmarks suggested by manufacturers or standards. In this case, the references provided by current rules are static and independent of parameters such as age or environmental conditions in which the machine is analyzed. New communication technologies allow the integration of eMaintenance systems, production and real-time data or the result of vibration routes. The integration of al...

  9. Health services research related to performance indicators and benchmarking in Europe.

    Science.gov (United States)

    Klazinga, Niek; Fischer, Claudia; ten Asbroek, Augustinus

    2011-07-01

    Measuring quality of care through performance indicators and subsequently using these to compare, learn, and improve (benchmarking) has become a central component of health care policy. This paper aims to identify the main themes of health services research in this area and focuses on opportunities for improving the evidence underpinning performance indicators. A literature survey was carried out to identify research activities and main research themes in Europe in the years 2000-09. Identified literature was categorized into sub-topics and for each topic the main methodological issues were identified and discussed. Experts validated the findings and explored the potential for related further European research. The distribution of research on performance and benchmarking across EU member states varies in time, scope and settings with a large amount of studies focusing on hospitals. Eight specific fields of research were identified (research on concepts and performance frameworks; performance indicators and benchmarking using mortality data; performance indicators and benchmarking related to cancer care; performance indicators and benchmarking on care delivered in hospitals; patient safety indicators; performance indicators in primary care; patient experience; research on the practice of benchmarking and performance improvement). Expert discussions confirmed that research on performance indicators and benchmarking should focus on the development of indicators, as well as their use. The research should involve the potential users and incorporate scientific approaches from biomedicine and epidemiology as well as the social sciences. Further progress is hampered by data availability. Issues which need to be addressed include the use of unique patient identifiers (UPIs) to facilitate linkages between separate databases; standardized measurement of the experiences of patients and others; and deepening collaboration between Eurostat, the World Health Organization (WHO

  10. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate the camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become a more and more important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are made. The paper defines a solution to combine different image quality and speed metrics into a single benchmarking score. A proposal of the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of a previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
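
    The abstract does not give the exact formula that merges quality and speed metrics into a single score; a generic approach is to normalize each metric onto [0, 1] (inverting latencies so that faster is better) and take a weighted mean. The metrics, weights, and scale limits below are illustrative assumptions, not the author's benchmark:

```python
# Hypothetical per-camera measurements: quality scores already on [0, 1],
# latencies in seconds where lower is better.
measured = {"sharpness": 0.82, "visual_noise": 0.74,
            "shot_to_shot_s": 1.9, "autofocus_s": 0.6}
worst = {"shot_to_shot_s": 5.0, "autofocus_s": 2.0}   # assumed scale limits
weights = {"sharpness": 0.3, "visual_noise": 0.3,
           "shot_to_shot_s": 0.2, "autofocus_s": 0.2}

def normalized(name, value):
    # Invert latency metrics so that faster cameras score closer to 1.
    if name in worst:
        return max(0.0, 1.0 - value / worst[name])
    return value

score = sum(weights[k] * normalized(k, v) for k, v in measured.items())
print(f"combined benchmark score: {score:.3f}")
```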

  11. Benchmarking Best Practices in Transformation for Sea Enterprise

    National Research Council Canada - National Science Library

    Brook, Douglas A; Hudgens, Bryan; Nguyen, Nam; Walsh, Katherine

    2006-01-01

    ... applied to reinvestment and recapitalization. Sea Enterprise contracted the Center for Defense Management Reform to research transformation and benchmarking best practices in the private sector...

  12. A Secure Communication Suite for Underwater Acoustic Sensor Networks

    Directory of Open Access Journals (Sweden)

    Angelica Lo Duca

    2012-11-01

    Full Text Available In this paper we describe a security suite for Underwater Acoustic Sensor Networks comprising both fixed and mobile nodes. The security suite is composed of a secure routing protocol and a set of cryptographic primitives aimed at protecting the confidentiality and the integrity of underwater communication while taking into account the unique characteristics and constraints of the acoustic channel. By means of experiments and simulations based on real data, we show that the suite is suitable for an underwater networking environment as it introduces limited, and sometimes negligible, communication and power consumption overhead.

  13. Development and Applications of Benchmark Examples for Static Delamination Propagation Predictions

    Science.gov (United States)

    Krueger, Ronald

    2013-01-01

    The development and application of benchmark examples for the assessment of quasi-static delamination propagation capabilities was demonstrated for ANSYS® and Abaqus/Standard®. The examples selected were based on finite element models of Double Cantilever Beam (DCB) and Mixed-Mode Bending (MMB) specimens. First, quasi-static benchmark results were created based on an approach developed previously. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedure implemented in ANSYS® and Abaqus/Standard®. Input control parameters were varied to study the effect on the computed delamination propagation. Overall, the benchmarking procedure proved valuable by highlighting the issues associated with choosing the appropriate input parameters for the VCCT implementations in ANSYS® and Abaqus/Standard®. However, further assessment for mixed-mode delamination fatigue onset and growth is required. Additional studies should include the assessment of the propagation capabilities in more complex specimens and on a structural level.

  14. Benchmarking Global Food Safety Performances: The Era of Risk Intelligence.

    Science.gov (United States)

    Vallée, Jean-Charles Le; Charlebois, Sylvain

    2015-10-01

    Food safety data segmentation and limitations hamper the world's ability to select, build up, monitor, and evaluate food safety performance. Currently, there is no metric that captures the entire food safety system, and performance data are not collected strategically on a global scale. Therefore, food safety benchmarking is essential not only to help monitor ongoing performance but also to inform continued food safety system design, adoption, and implementation toward more efficient and effective food safety preparedness, responsiveness, and accountability. This comparative study identifies and evaluates common elements among global food safety systems. It provides an overall world ranking of food safety performance for 17 Organisation for Economic Co-Operation and Development (OECD) countries, illustrated by 10 indicators organized across three food safety risk governance domains: risk assessment (chemical risks, microbial risks, and national reporting on food consumption), risk management (national food safety capacities, food recalls, food traceability, and radionuclides standards), and risk communication (allergenic risks, labeling, and public trust). Results show all countries have very high food safety standards, but Canada and Ireland, followed by France, earned excellent grades relative to their peers. However, any subsequent global ranking study should consider the development of survey instruments to gather adequate and comparable national evidence on food safety.

  15. Development of an MPI benchmark program library

    Energy Technology Data Exchange (ETDEWEB)

    Uehara, Hitoshi

    2001-03-01

    Distributed parallel simulation software with message passing interfaces has been developed to realize large-scale and high performance numerical simulations. The most popular API for message communication is MPI, which will be provided on the Earth Simulator. It is known that the performance of message communication using MPI libraries has a significant influence on the overall performance of simulation programs. We developed an MPI benchmark program library named MBL in order to measure the performance of message communication precisely. MBL measures the performance of major MPI functions, such as point-to-point communications and collective communications, and of major communication patterns that are often found in application programs. In this report, the description of MBL and the performance analysis of MPI/SX measured on the SX-4 are presented. (author)
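
    A point-to-point measurement of the kind MBL performs can be sketched as a classic two-rank ping-pong. The following is a generic illustration using mpi4py, not MBL itself:

```python
# Run with: mpiexec -n 2 python pingpong.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nbytes, reps = 1 << 20, 50             # 1 MiB messages, 50 round trips
buf = np.zeros(nbytes, dtype=np.uint8)

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    else:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
elapsed = MPI.Wtime() - t0

if rank == 0:
    # Each repetition moves the message twice (there and back).
    print(f"effective bandwidth: {2 * reps * nbytes / elapsed / 1e6:.1f} MB/s")
```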

  16. FRIB driver linac vacuum model and benchmarks

    CERN Document Server

    Durickovic, Bojan; Kersevan, Roberto; Machicoane, Guillaume

    2014-01-01

    The Facility for Rare Isotope Beams (FRIB) is a superconducting heavy-ion linear accelerator that is to produce rare isotopes far from stability for low energy nuclear science. In order to achieve this, its driver linac needs to achieve a very high beam current (up to 400 kW beam power), and this requirement makes vacuum levels of critical importance. Vacuum calculations have been carried out to verify that the vacuum system design meets the requirements. The modeling procedure was benchmarked by comparing models of an existing facility against measurements. In this paper, we present an overview of the methods used for FRIB vacuum calculations and simulation results for some interesting sections of the accelerator. © 2013 Elsevier Ltd. All rights reserved.

  17. NASA Indexing Benchmarks: Evaluating Text Search Engines

    Science.gov (United States)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the requirement for the ability to index collections of information and search and retrieve them in a convenient manner. This study develops criteria for analytically comparing the index and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the focused search engines. This toolkit is highly configurable and has the ability to run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium sized data collections, but show weaknesses when used for collections 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.

  18. Benchmarking organic mixed conductors for transistors

    KAUST Repository

    Inal, Sahika

    2017-11-20

    Organic mixed conductors have garnered significant attention in applications from bioelectronics to energy storage/generation. Their implementation in organic transistors has led to enhanced biosensing, neuromorphic function, and specialized circuits. While a narrow class of conducting polymers continues to excel in these new applications, materials design efforts have accelerated as researchers target new functionality, processability, and improved performance/stability. Materials for organic electrochemical transistors (OECTs) require both efficient electronic transport and facile ion injection in order to sustain high capacity. In this work, we show that the product of the electronic mobility and volumetric charge storage capacity (µC*) is the materials/system figure of merit; we use this framework to benchmark and compare the steady-state OECT performance of ten previously reported materials. This product can be independently verified and decoupled to guide materials design and processing. OECTs can therefore be used as a tool for understanding and designing new organic mixed conductors.
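
    Given the figure of merit µC* named in the abstract, one way to extract it from routine measurements is via the standard OECT saturation-regime relation g_m = (Wd/L) µC* (V_th − V_g); that relation and all numerical values below are stated as assumptions for illustration:

```python
# Extract the OECT figure of merit uC* from a measured transconductance,
# assuming the saturation-regime relation g_m = (W*d/L) * uC* * (Vth - Vg).
W, L, d = 100e-6, 10e-6, 100e-9      # channel width, length, thickness (m)
g_m = 5e-3                           # measured transconductance (S)
V_th, V_g = 0.6, 0.2                 # threshold and gate voltages (V)

uC_star = g_m * L / (W * d * (V_th - V_g))
print(f"uC* = {uC_star:.3e} F m^-1 V^-1 s^-1")   # often quoted per cm
```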

  19. Benchmark Results for Few-Body Hypernuclei

    Science.gov (United States)

    Ferrari Ruffino, F.; Lonardoni, D.; Barnea, N.; Deflorian, S.; Leidemann, W.; Orlandini, G.; Pederiva, F.

    2017-05-01

    The Non-Symmetrized Hyperspherical Harmonics method (NSHH) is introduced in the hypernuclear sector and benchmarked with three different ab-initio methods, namely the Auxiliary Field Diffusion Monte Carlo method, the Faddeev-Yakubovsky approach and the Gaussian Expansion Method. Binding energies and hyperon separation energies of three- to five-body hypernuclei are calculated by employing the two-body Λ N component of the phenomenological Bodmer-Usmani potential (Bodmer and Usmani in Nucl Phys A 477:621, 1988; Usmani and Khanna in J Phys G 35:025105, 2008), and a hyperon-nucleon interaction (Hiyama et al. in Phys Rev C 65:011301, 2001) simulating the scattering phase shifts given by NSC97f (Rijken et al. in Phys Rev C 59:21, 1999). The range of applicability of the NSHH method is briefly discussed.

  20. Supply network configuration—A benchmarking problem

    Science.gov (United States)

    Brandenburg, Marcus

    2018-03-01

    Managing supply networks is a highly relevant task that strongly influences the competitiveness of firms from various industries. Designing supply networks is a strategic process that considerably affects the structure of the whole network. In contrast, supply networks for new products are configured without major adaptations of the existing structure, but the network has to be configured before the new product is actually launched in the marketplace. Due to dynamics and uncertainties, the resulting planning problem is highly complex. However, formal models and solution approaches that support supply network configuration decisions for new products are scant. The paper at hand aims at stimulating related model-based research. To formulate mathematical models and solution procedures, a benchmarking problem is introduced which is derived from a case study of a cosmetics manufacturer. Tasks, objectives, and constraints of the problem are described in great detail and numerical values and ranges of all problem parameters are given. In addition, several directions for future research are suggested.

  1. Development of solutions to benchmark piping problems

    Energy Technology Data Exchange (ETDEWEB)

    Reich, M; Chang, T Y; Prachuktam, S; Hartzman, M

    1977-12-01

    Benchmark problems and their solutions are presented. The problems consist in calculating the static and dynamic response of selected piping structures subjected to a variety of loading conditions. The structures range from simple pipe geometries to a representative full scale primary nuclear piping system, which includes the various components and their supports. These structures are assumed to behave in a linear elastic fashion only, i.e., they experience small deformations and small displacements with no existing gaps, and remain elastic through their entire response. The solutions were obtained by using the program EPIPE, which is a modification of the widely available program SAP IV. A brief outline of the theoretical background of this program and its verification is also included.

  2. Shielding integral benchmark archive and database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, B.L.; Grove, R.E. [Radiation Safety Information Computational Center RSICC, Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN 37831-6171 (United States); Kodeli, I. [Josef Stefan Inst., Jamova 39, 1000 Ljubljana (Slovenia); Gulliford, J.; Sartori, E. [OECD NEA Data Bank, Bd des Iles, 92130 Issy-les-Moulineaux (France)

    2011-07-01

    The shielding integral benchmark archive and database (SINBAD) collection of experiment descriptions was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD was designed to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD can serve as a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories - fission, fusion, and accelerator experiments. Many experiments are described and analyzed using deterministic or stochastic (Monte Carlo) radiation transport software. The nuclear cross sections also play an important role, as they are necessary in performing computational analysis. (authors)

  3. Improvements to the APBS biomolecular solvation software suite: Improvements to the APBS Software Suite

    Energy Technology Data Exchange (ETDEWEB)

    Jurrus, Elizabeth [Pacific Northwest National Laboratory, Richland Washington; Engel, Dave [Pacific Northwest National Laboratory, Richland Washington; Star, Keith [Pacific Northwest National Laboratory, Richland Washington; Monson, Kyle [Pacific Northwest National Laboratory, Richland Washington; Brandi, Juan [Pacific Northwest National Laboratory, Richland Washington; Felberg, Lisa E. [University of California, Berkeley California; Brookes, David H. [University of California, Berkeley California; Wilson, Leighton [University of Michigan, Ann Arbor Michigan; Chen, Jiahui [Southern Methodist University, Dallas Texas; Liles, Karina [Pacific Northwest National Laboratory, Richland Washington; Chun, Minju [Pacific Northwest National Laboratory, Richland Washington; Li, Peter [Pacific Northwest National Laboratory, Richland Washington; Gohara, David W. [St. Louis University, St. Louis Missouri; Dolinsky, Todd [FoodLogiQ, Durham North Carolina; Konecny, Robert [University of California San Diego, San Diego California; Koes, David R. [University of Pittsburgh, Pittsburgh Pennsylvania; Nielsen, Jens Erik [Protein Engineering, Novozymes A/S, Copenhagen Denmark; Head-Gordon, Teresa [University of California, Berkeley California; Geng, Weihua [Southern Methodist University, Dallas Texas; Krasny, Robert [University of Michigan, Ann Arbor Michigan; Wei, Guo-Wei [Michigan State University, East Lansing Michigan; Holst, Michael J. [University of California San Diego, San Diego California; McCammon, J. Andrew [University of California San Diego, San Diego California; Baker, Nathan A. [Pacific Northwest National Laboratory, Richland Washington; Brown University, Providence Rhode Island

    2017-10-24

    The Adaptive Poisson-Boltzmann Solver (APBS) software was developed to solve the equations of continuum electrostatics for large biomolecular assemblages, and it has had an impact on the study of a broad range of chemical, biological, and biomedical applications. APBS addresses three key technology challenges for understanding solvation and electrostatics in biomedical applications: accurate and efficient models for biomolecular solvation and electrostatics, robust and scalable software for applying those theories to biomolecular systems, and mechanisms for sharing and analyzing biomolecular electrostatics data in the scientific community. To address new research applications and advancing computational capabilities, we have continually updated APBS and its suite of accompanying software since its release in 2001. In this manuscript, we discuss the models and capabilities that have recently been implemented within the APBS software package, including: a Poisson-Boltzmann analytical and a semi-analytical solver, an optimized boundary element solver, a geometry-based geometric flow solvation model, a graph theory based algorithm for determining pKa values, and an improved web-based visualization tool for viewing electrostatics.

  4. Benchmark calculation of subchannel analysis codes

    International Nuclear Information System (INIS)

    1996-02-01

    In order to evaluate the analysis capabilities of various subchannel codes used in thermal-hydraulic design of light water reactors, benchmark calculations were performed. The selected benchmark problems and major findings obtained by the calculations were as follows: (1) As for single-phase flow mixing experiments between two channels, the calculated results of water temperature distribution along the flow direction agreed with experimental results when turbulent mixing coefficients were tuned properly. However, the effect of gap width observed in the experiments could not be predicted by the subchannel codes. (2) As for two-phase flow mixing experiments between two channels, in high water flow rate cases, the calculated distributions of air and water flows in each channel agreed well with the experimental results. In low water flow cases, on the other hand, the air mixing rates were underestimated. (3) As for two-phase flow mixing experiments among multi-channels, the calculated mass velocities at channel exit under steady-state conditions agreed with experimental values within about 10%. However, the predictive errors of exit qualities were as high as 30%. (4) As for critical heat flux (CHF) experiments, two different results were obtained. One code indicated that the CHFs calculated using the KfK or EPRI correlations agreed well with the experimental results, while another code suggested that the CHFs were well predicted by using the WSC-2 correlation or the Weisman-Pei mechanistic model. (5) As for droplet entrainment and deposition experiments, it was indicated that the predictive capability was significantly increased by improving correlations. On the other hand, a remarkable discrepancy between codes was observed: one code underestimated the droplet flow rate and overestimated the liquid film flow rate in high quality cases, while another code overestimated the droplet flow rate and underestimated the liquid film flow rate in low quality cases. (J.P.N.)

  5. Benchmarking management practices in Australian public healthcare.

    Science.gov (United States)

    Agarwal, Renu; Green, Roy; Agarwal, Neeru; Randhawa, Krithika

    2016-01-01

    The purpose of this paper is to investigate the quality of management practices of public hospitals in the Australian healthcare system, specifically those in the state-managed health systems of Queensland and New South Wales (NSW). Further, the authors assess the management practices of Queensland and NSW public hospitals jointly and globally benchmark against those in the health systems of seven other countries, namely, USA, UK, Sweden, France, Germany, Italy and Canada. In this study, the authors adapt the unique and globally deployed Bloom et al. (2009) survey instrument that uses a "double blind, double scored" methodology and an interview-based scoring grid to measure and internationally benchmark the management practices in Queensland and NSW public hospitals based on 21 management dimensions across four broad areas of management - operations, performance monitoring, targets and people management. The findings reveal the areas of strength and potential areas of improvement in the Queensland and NSW Health hospital management practices when compared with public hospitals in seven countries, namely, USA, UK, Sweden, France, Germany, Italy and Canada. Together, Queensland and NSW Health hospitals perform best in operations management followed by performance monitoring. While target management presents scope for improvement, people management is the sphere where these Australian hospitals lag the most. This paper is of interest to both hospital administrators and health care policy-makers aiming to lift management quality at the hospital level as well as at the institutional level, as a vehicle to consistently deliver sustainable high-quality health services. This study provides the first internationally comparable robust measure of management capability in Australian public hospitals, where hospitals are run independently by the state-run healthcare systems. Additionally, this research study contributes to the empirical evidence base on the quality of

  6. Virtual Suit Fit Assessment Using Body Shape Model

    Data.gov (United States)

    National Aeronautics and Space Administration — Shoulder injury is one of the most serious risks for crewmembers in long-duration spaceflight. While suboptimal suit fit and contact pressures between the shoulder...

  7. Nonventing Thermal and Humidity Control for EVA Suits Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Future manned space exploration missions will require space suits with capabilities beyond the current state of the art. Portable Life Support Systems for these...

  8. Tchaikovsky, P.: Orchestral Suite no. 3 op. 55 / Terry Williams

    Index Scriptorium Estoniae

    Williams, Terry

    1996-01-01

    Uuest heliplaadist "Tchaikovsky, P.: Orchestral Suite no. 3 op. 55. Francesca di Rimini op. 32. Detroit Symphony Orchestra, Neeme Järvi". Chandos CHAN 9 419, distribution Media 7 (CD: 160F). TT: 1h 09'20"

  9. Prokofiev: War and Peace - Symphonic Suite (arr. Palmer) / Ivan March

    Index Scriptorium Estoniae

    March, Ivan

    1993-01-01

    Uuest heliplaadist "Prokofiev: War and Peace - Symphonic Suite (arr. Palmer), Summer Night, Op. 123. Russian Overture, Op. 72. Philharmonia Orchestra / Neeme Järvi. Chandos ABTD 1598 CHAN9096 (64 minutes:DDD) Igor - Polovtsian Dances

  10. Prokofiev: Romeo and Juliet - Suite N1 / Ivan March

    Index Scriptorium Estoniae

    March, Ivan

    1990-01-01

    Uuest heliplaadist "Prokofiev: Romeo and Juliet - Suite N1, Op.64b, N2, Op.64c. Philharmonia Orchestra, Barry Wordsworth" Collins Classics cassette 1116-4. CD. Võrreldud Neeme Järvi plaadistustega 1116-2

  11. Arensky. Silhouettes (Suite N 2), Op. 23 / Jonathan Swain

    Index Scriptorium Estoniae

    Swain, Jonathan

    1991-01-01

    Uuest heliplaadist "Arensky. Silhouettes (Suite N 2), Op. 23. Scrjabin. Symphony N 3 in C minor, Op. 43 "Le divin poeme". Danish National Radio Symphony Orchestra. Neeme Järvi. Chandos cassette ABTD 1509; CD CHAN 8898 (66 minutes)

  12. A Test Suite for Safety-Critical Java using JML

    DEFF Research Database (Denmark)

    Ravn, Anders Peter; Søndergaard, Hans

    2013-01-01

    Development techniques are presented for a test suite for the draft specification of the Java profile for Safety-Critical Systems. Distinguishing features are: specification of conformance constraints in the Java Modeling Language, encoding of infrastructure concepts without implementation bias...

  13. Coupled Human-Space Suit Mobility Studies Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The space suit is arguably the most intimate piece of space flight hardware yet we know surprisingly little about the interactions between the astronaut and this...

  14. U.S. Climate Normals Product Suite (1981-2010)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA Climate Normals are a large suite of data products that provide users with many tools to understand typical climate conditions for thousands of locations...

  15. Advanced Gas Sensing Technology for Space Suits, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Advanced space suits require lightweight, low-power, durable sensors for monitoring critical life support materials. No current compact sensors have the tolerance...

  16. Nonventing Thermal and Humidity Control for EVA Suits, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Future manned space exploration missions will require space suits with capabilities beyond the current state of the art. Portable Life Support Systems for these...

  17. EVA Physiology and Medical Considerations Working in the Suit

    Science.gov (United States)

    Parazynski, Scott

    2012-01-01

    This "EVA Physiology and Medical Considerations Working in the Suit" presentation covers several topics related to the medical implications and physiological effects of suited operations in space from the perspective of a physician with considerable first-hand Extravehicular Activity (EVA) experience. Key themes include EVA physiology working in a pressure suit in the vacuum of space, basic EVA life support and work support, Thermal Protection System (TPS) inspections and repairs, and discussions of the physical challenges of an EVA. Parazynski covers the common injuries and significant risks during EVAs, as well as physical training required to prepare for EVAs. He also shares overall suit physiological and medical knowledge with the next generation of Extravehicular Mobility Unit (EMU) system designers.

  18. 33 CFR 144.20-5 - Exposure suits.

    Science.gov (United States)

    2010-07-01

    ... light that is approved under 46 CFR 161.012. Each light must be securely attached to the front shoulder... lanyard coiled and stopped off. (f) No stowage container for exposure suits may be capable of being locked...

  19. Touring the Tomato: A Suite of Chemistry Laboratory Experiments

    Science.gov (United States)

    Sarkar, Sayantani; Chatterjee, Subhasish; Medina, Nancy; Stark, Ruth E.

    2013-01-01

    An eight-session interdisciplinary laboratory curriculum has been designed using a suite of analytical chemistry techniques to study biomaterials derived from an inexpensive source such as the tomato fruit. A logical

  20. The fractal harmonic law and its application to swimming suit

    Directory of Open Access Journals (Sweden)

    Kong Hai-Yan

    2012-01-01

    Full Text Available Decreasing the friction force between a swimming suit and water is the key factor in designing swimming suits. Water continuum mechanics forbids discontinuous fluids, but on the angstrom scale water is indeed discontinuous. A swimming suit is smooth on a large scale, but it is discontinuous when the scale becomes smaller. If the fractal dimensions of the swimming suit and the water are the same, a minimum of friction force is predicted, which means fractal harmonization. In this paper, the fractal harmonic law is introduced to design a swimsuit whose surface fractal dimensions on a macroscopic scale should be equal or close to the water's fractal dimensions on an angstrom scale. Various possible microstructures of fabric are analyzed and a method to obtain a perfect fractal structure of fabric is proposed by spraying nanofibers onto its surface. The fractal harmonic law can be used to design a moving surface with minimal friction.
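
    Applying the fractal harmonic law presupposes a way to estimate a surface's fractal dimension; a standard estimator is box counting, sketched below on synthetic 2-D points. The method is generic and not taken from the paper:

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Estimate fractal dimension as the slope of log N(eps) vs log(1/eps)."""
    counts = []
    for eps in scales:
        # Count distinct eps-sized boxes occupied by at least one point.
        boxes = {tuple(idx) for idx in np.floor(points / eps).astype(int)}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / scales), np.log(counts), 1)
    return slope

# Sanity check: points on a straight segment should give dimension ~ 1.
t = np.random.rand(5000)
points = np.column_stack([t, 0.5 * t])
scales = np.array([0.2, 0.1, 0.05, 0.025, 0.0125])
print(f"estimated dimension: {box_counting_dimension(points, scales):.2f}")
```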

  1. The University and College Counseling Center and Malpractice Suits.

    Science.gov (United States)

    Slimak, Richard E.; Berkowitz, Stanley R.

    1983-01-01

Provides university and college counseling centers with management suggestions for, and a discussion of, malpractice suits involving sexual abuse, potentially dangerous clients, and suicidal clients. (Author/RC)

  2. The use of underwater dynamometry to evaluate two space suits

    Science.gov (United States)

    Squires, W. G.

    1989-01-01

Four astronauts were instrumented and donned one of three extravehicular activity (EVA) suits: the shuttle suit currently in use (STS), the Mark III (MK3), and the AX5. The STS was used as the comparison suit because of its approved status. Each subject performed ten different exercises in each suit in three different manners (static, dynamic, and fatigue) and in two different environments, the WETF and the KC-135 (the KC-135 portion was not completed as of this report). Data were recorded from a flight-qualified underwater dynamometer (Cybex power head) with a TEAC multichannel recorder/tape and downloaded into the VAX computer system for analysis. Direct hard-copy strip-chart recordings were also made for backup comparisons. Data were analyzed using the ANOVA procedure, and results were graphed and reported without interpretation to the NASA/JSC ABL manager.
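
    The ANOVA comparison reported above can be illustrated with a minimal one-way analysis across the three suits. The sketch below uses synthetic torque samples in place of the actual dynamometer recordings; the means, spreads, and sample sizes are invented for illustration only.

```python
import numpy as np
from scipy.stats import f_oneway

# Synthetic peak-torque samples (N*m) for one exercise performed in each
# suit; the real data came from the underwater dynamometer recordings.
rng = np.random.default_rng(1)
sts = rng.normal(48.0, 5.0, size=10)   # shuttle suit (comparison baseline)
mk3 = rng.normal(52.0, 5.0, size=10)   # Mark III
ax5 = rng.normal(50.0, 5.0, size=10)   # AX5

# One-way ANOVA: does mean performance differ across suits?
f_stat, p_value = f_oneway(sts, mk3, ax5)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # p < 0.05 suggests a suit effect
```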

  3. 28 CFR 36.503 - Suit by the Attorney General.

    Science.gov (United States)

    2010-07-01

    ... BY PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES Enforcement § 36.503 Suit by the Attorney... discretion, the Attorney General may commence a civil action in any appropriate United States district court...

  4. CAMEO (Computer-Aided Management of Emergency Operations) Software Suite

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — CAMEO is the umbrella name for a system of software applications used widely to plan for and respond to chemical emergencies. All of the programs in the suite work...

  5. A Multi-Threaded Cryptographic Pseudorandom Number Generator Test Suite

    Science.gov (United States)

    2016-09-01

Naval Postgraduate School, Monterey, California. Thesis by Zhibin Zhang: a multi-threaded test suite for evaluating cryptographic pseudorandom number generators.

  6. Corrections of the NIST Statistical Test Suite for Randomness

    OpenAIRE

    Kim, Song-Ju; Umeno, Ken; Hasegawa, Akio

    2004-01-01

It is well known that the NIST statistical test suite was used for the evaluation of the AES candidate algorithms. We have found that the test settings of the Discrete Fourier Transform test and the Lempel-Ziv test in this suite are wrong. We give four corrections for these mistakes in the test settings. This suggests that a re-evaluation of earlier test results is needed.
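
    For context, the sketch below implements the spectral (DFT) test with the corrected peak threshold T = sqrt(n·ln(1/0.05)), which replaced the original sqrt(3n) setting; the normalization constants follow the revised NIST specification as best understood here, so treat them as assumptions rather than a verbatim transcription of the paper's four corrections.

```python
import numpy as np
from math import erfc, log, sqrt

def dft_test(bits: np.ndarray) -> float:
    """Spectral (DFT) randomness test with the corrected threshold.

    `bits` is a 0/1 array; returns the p-value.  The corrected setting
    replaces the original T = sqrt(3n) peak threshold with
    T = sqrt(n * ln(1/0.05)), so that 95% of the |DFT| peaks of a truly
    random sequence fall below T.
    """
    n = bits.size
    x = 2 * bits.astype(np.int8) - 1              # map {0,1} -> {-1,+1}
    s = np.abs(np.fft.fft(x))[: n // 2]           # first half of the spectrum
    t = sqrt(n * log(1 / 0.05))                   # corrected threshold
    n1 = np.count_nonzero(s < t)                  # observed peaks below T
    n0 = 0.95 * n / 2                             # expected peaks below T
    d = (n1 - n0) / sqrt(n * 0.95 * 0.05 / 4)     # normalized deviation
    return erfc(abs(d) / sqrt(2))

bits = np.random.default_rng(2).integers(0, 2, size=2**16)
print(dft_test(bits))   # should be well above 0.01 for random input
```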

  7. A Test Suite for Safety-Critical Java using JML

    DEFF Research Database (Denmark)

    Ravn, Anders P.; Søndergaard, Hans

    2013-01-01

Development techniques are presented for a test suite for the draft specification of the Java profile for Safety-Critical Systems. Distinguishing features are: specification of conformance constraints in the Java Modeling Language, encoding of infrastructure concepts without implementation bias, and corresponding specifications of implicitly stated behavioral and real-time properties. The test programs are auto-generated from the specification, while concrete values for test parameters are selected manually. The suite is open source and publicly accessible.

  8. STS-74 M.S. Jerry L. Ross suits up

    Science.gov (United States)

    1995-01-01

    Spaceflight veteran Jerry L. Ross, Mission Specialist 2 on Shuttle Mission STS-74, is assisted by a suit technician as he finishes getting into his launch/entry suit in the Operations and Checkout Building. Ross and four fellow astronauts will depart shortly for Launch Pad 39A, where the Space Shuttle Atlantis awaits a second liftoff attempt during a seven-minute window scheduled to open at approximately 7:30 a.m. EST, Nov. 12.

  9. Small drinking water systems under spatiotemporal water quality variability: a risk-based performance benchmarking framework.

    Science.gov (United States)

    Bereskie, Ty; Haider, Husnain; Rodriguez, Manuel J; Sadiq, Rehan

    2017-08-23

Traditional approaches for benchmarking drinking water systems are binary, based solely on the compliance or non-compliance of one or more water quality performance indicators against defined regulatory guidelines/standards. The consequence of a water quality failure depends on the location within a water supply system as well as the time of year (i.e., season), with varying levels of water consumption. Conventional approaches to water quality comparison fail to incorporate this spatiotemporal variability and the degree of compliance or non-compliance, which can lead to misleading or inaccurate performance data being used in the benchmarking process. In this research, a hierarchical risk-based water quality performance benchmarking framework is proposed to evaluate small drinking water systems (SDWSs) through cross-comparison amongst similar systems. The proposed framework (the RWQI framework) is designed to quantify the consequence associated with seasonal and location-specific water quality issues in a given drinking water supply system, to facilitate more efficient decision-making for SDWSs striving for continuous performance improvement. Fuzzy rule-based modelling is used to address the imprecision of measuring performance against singular water quality guidelines/standards and the uncertainties present in SDWS operations and monitoring. The RWQI framework has been demonstrated using data collected from 16 SDWSs in Newfoundland and Labrador and Quebec, Canada, and compared to the Canadian Council of Ministers of the Environment WQI, a traditional, guidelines/standards-based approach. The study found that the RWQI framework provides a more in-depth picture of the state of water quality and benchmarks SDWSs more rationally, based on the frequency of occurrence and consequence of failure events.
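
    As a toy illustration of the kind of fuzzy rule-based risk scoring the framework employs, the sketch below aggregates a failure-frequency input and a consequence input through a small Mamdani-style rule base. The membership breakpoints, rule weights, and defuzzification are invented for illustration and are not the paper's calibrated model.

```python
import numpy as np

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with breakpoints (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk_index(freq: float, consequence: float) -> float:
    """Mamdani-style risk score from failure frequency and consequence.

    Both inputs are on [0, 1].  Rule firing strengths (min of antecedent
    memberships) weight crisp rule outputs; defuzzification is a
    weighted average.
    """
    low  = lambda x: tri(x, -0.5, 0.0, 0.5)
    med  = lambda x: tri(x,  0.0, 0.5, 1.0)
    high = lambda x: tri(x,  0.5, 1.0, 1.5)
    rules = [
        (min(low(freq),  low(consequence)),  0.1),   # low risk
        (min(med(freq),  med(consequence)),  0.5),   # medium risk
        (min(high(freq), high(consequence)), 0.9),   # high risk
        (min(low(freq),  high(consequence)), 0.6),   # rare but severe
        (min(high(freq), low(consequence)),  0.4),   # frequent but benign
    ]
    w = np.array([r[0] for r in rules])
    v = np.array([r[1] for r in rules])
    return float((w * v).sum() / w.sum()) if w.sum() > 0 else 0.0

print(risk_index(freq=0.2, consequence=0.8))
```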

  10. Benchmarking focused on the satisfaction of bus transit users

    Directory of Open Access Journals (Sweden)

    Mariana Müller Barcelos

    2017-10-01

Investing in the quality of public transportation is fundamental to retaining and attracting users and to fostering more sustainable cities. Benchmarking is a suitable tool for identifying best practices and promoting exchanges of experience that contribute to improving transportation systems. Nevertheless, benchmarking focused on customer satisfaction poses challenges because of the lack of standards in data collection and the socio-cultural biases inherent to opinion surveys. This paper presents a benchmarking analysis based on satisfaction data collected through a standardized survey. We propose a normalization of satisfaction rates that (i) reduces the effect of socio-cultural biases, (ii) enables the comparison of bus systems operating in different Brazilian cities, and (iii) allows the identification of potential benchmarks. The proposed method proved suitable for identifying goals, priorities, and attributes of bus transit systems that can serve as references for other cities.
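
    One simple way to realize such a normalization is to express each attribute's satisfaction rating as a z-score within its own city before comparing across cities, which damps uniformly harsh or lenient response cultures. The pandas sketch below is a minimal illustration with invented ratings; the paper's exact normalization may differ.

```python
import pandas as pd

# Hypothetical survey extract: one mean satisfaction rating (1-5) per
# service attribute, tagged with the city where it was collected.
df = pd.DataFrame({
    "city":      ["A", "A", "A", "B", "B", "B"],
    "attribute": ["comfort", "fares", "frequency"] * 2,
    "rating":    [3.8, 2.9, 3.2, 4.6, 3.9, 4.1],
})

# Normalize within each city: each attribute becomes a deviation from the
# city's own mean, in units of the city's standard deviation.
df["z"] = df.groupby("city")["rating"].transform(
    lambda r: (r - r.mean()) / r.std(ddof=0)
)

# Attributes scoring highest after normalization are benchmark candidates.
print(df.sort_values("z", ascending=False))
```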

  11. A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams (2010) (External Review Draft)

    Science.gov (United States)

    This report adapts the standard U.S. EPA methodology for deriving ambient water quality criteria. Rather than use toxicity test results, the adaptation uses field data to determine the loss of 5% of genera from streams. The method is applied to derive effect benchmarks for disso...

  12. Benchmarking on the management of radioactive waste

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez Gomez, M. a.; Gonzalez Gandal, R.; Gomez Castano, N.

    2013-09-01

In this project, the waste management practices at the Spanish nuclear power plants were evaluated following the benchmarking methodology. This process has allowed the identification of opportunities to improve waste treatment processes, reduce waste volumes, reduce management costs, and establish management routes for waste streams that currently lack them. (Author)

  13. High-Strength Composite Fabric Tested at Structural Benchmark Test Facility

    Science.gov (United States)

    Krause, David L.

    2002-01-01

Large sheets of ultrahigh-strength fabric were put to the test at the NASA Glenn Research Center's Structural Benchmark Test Facility. The material was stretched like a snare drum head until the last ounce of strength was reached, when it burst with a cacophonous release of tension. Along the way, the 3-ft-square samples were also pulled, warped, tweaked, pinched, and yanked to predict the material's physical reactions to the many loads that it will experience during its proposed use. The material tested was a unique multi-ply composite fabric, reinforced with fibers with a tensile strength eight times that of common carbon steel. The fiber plies were oriented at 0° and 90° to provide great membrane stiffness, as well as at 45° to provide an unusually high resistance to shear distortion. The fabric's heritage is in astronaut space suits and other NASA programs.

  14. Availability of Neutronics Benchmarks in the ICSBEP and IRPhEP Handbooks for Computational Tools Testing

    Energy Technology Data Exchange (ETDEWEB)

    Bess, John D.; Briggs, J. Blair; Ivanova, Tatiana; Hill, Ian; Gulliford, Jim

    2017-02-01

In the past several decades, numerous experiments have been performed worldwide to support reactor operations, measurements, design, and nuclear safety. Those experiments represent an extensive international investment in infrastructure, expertise, and cost, and they constitute significantly valuable resources of data supporting past, current, and future research activities. These valuable assets form the basis for recording, developing, and validating our nuclear methods and integral nuclear data [1]. The loss of such experimental data, which has occurred all too often in recent years, is tragic; the high cost to repeat many of these measurements can be prohibitive, if not impossible, to surmount. Two international projects were developed, under the direction of the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD NEA), to address the challenges of not just data preservation but also evaluation of the data to determine its merit for modern and future use. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was established to identify and verify comprehensive critical benchmark data sets; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort into a single source of verified benchmark data [2]. Similarly, the International Reactor Physics Experiment Evaluation Project (IRPhEP) was established to preserve integral reactor physics experimental data, including separate- or special-effects data, for nuclear energy and technology applications [3]. Annually, contributors from around the world continue to collaborate in the evaluation and review of selected benchmark experiments for preservation and dissemination. The extensively peer-reviewed integral benchmark data can then be utilized by nuclear design and safety analysts to validate the analytical tools, methods, and data needed for next-generation nuclear systems.

  15. The fifth Atomic Energy Research dynamic benchmark calculation with HEXTRAN-SMABRE

    International Nuclear Information System (INIS)

Hämäläinen, Anitta

    1998-01-01

The fifth Atomic Energy Research dynamic benchmark is the first Atomic Energy Research benchmark for the coupling of thermohydraulic codes with three-dimensional reactor dynamics core models. At VTT, HEXTRAN 2.7 is used for the core dynamics and SMABRE 4.6 as the thermohydraulic model for the primary and secondary loops. The plant model for SMABRE is based mainly on two input models: the Loviisa model and a standard WWER-440/213 plant model. The primary circuit includes six separate loops, with 505 nodes and 652 junctions in total. The reactor pressure vessel is divided into six parallel channels, and 1/6 symmetry is used in the core in the HEXTRAN calculation. In the transient sequence, a main steam header break at the hot standby state, the liquid temperature at the core inlet decreases symmetrically, which leads to a return to power. In the benchmark, no isolation of the steam generators is assumed, and in the HEXTRAN-SMABRE calculation the maximum core power is about 38% of nominal power four minutes after the break opens. Due to the boric acid in the high-pressure safety injection water, the power finally starts to decrease. In the HEXTRAN-SMABRE calculation the break flow is pure steam during the whole transient, even though the swell levels in the steam generators are very high due to flashing. Because of sudden peaks in the preliminary results for the steam generator heat transfer, the SMABRE drift-flux model was modified; the new model is a simplified version of the EPRI correlation based on test data, and the modified correlation behaves smoothly. In the calculations the nuclear data are based on the ENDF/B-IV library and have been evaluated with the CASMO-HEX code. The importance of the nuclear data was illustrated by repeating the benchmark calculation with three different data sets. Optimal extensive data valid from hot to cold conditions were not available for all the fuel enrichments needed in this benchmark. (Author)

  16. Glassy Chimeras Could Be Blind to Quantum Speedup: Designing Better Benchmarks for Quantum Annealing Machines

    Directory of Open Access Journals (Sweden)

    Helmut G. Katzgraber

    2014-04-01

Recently, a programmable quantum annealing machine has been built that minimizes the cost function of hard optimization problems by, in principle, adiabatically quenching quantum fluctuations. Tests performed by different research teams have shown that, indeed, the machine seems to exploit quantum effects. However, experiments on a class of random-bond instances have not yet demonstrated an advantage over classical optimization algorithms on traditional computer hardware. Here, we present evidence as to why this might be the case. These engineered quantum annealing machines effectively operate coupled to a decohering thermal bath. Therefore, we study the finite-temperature critical behavior of the standard benchmark problem used to assess the computational capabilities of these complex machines. We simulate both random-bond Ising models and spin glasses with bimodal and Gaussian disorder on the D-Wave Chimera topology. Our results show that while the worst-case complexity of finding a ground state of an Ising spin glass on the Chimera graph is not polynomial, the finite-temperature phase space is likely rather simple, because spin glasses on Chimera have only a zero-temperature transition. This means that spin glasses on the Chimera graph might not be the best problems for benchmarking quantum speedup. We propose alternative benchmarks by embedding potentially harder problems on the Chimera topology. Finally, we also study the (reentrant) disorder-temperature phase diagram of the random-bond Ising model on the Chimera graph and show that a finite-temperature ferromagnetic phase is stable for up to 19.85(15)% antiferromagnetic bonds. Beyond this threshold, the system displays only a zero-temperature spin-glass phase. Our results therefore show that a careful design of the hardware architecture and benchmark problems is key when building quantum annealing machines.
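
    To make the benchmark setup concrete, the sketch below samples bimodal (±J) random-bond couplings on a single Chimera unit cell (a K4,4 bipartite graph) and evaluates the Ising energy of a spin configuration. It is a toy single-cell illustration, not the full lattice simulation used in the paper.

```python
import numpy as np

def chimera_cell_couplings(p_afm: float, rng) -> dict:
    """Random-bond couplings on one Chimera unit cell (a K_{4,4} graph).

    Bimodal disorder: J = -1 (antiferromagnetic) with probability p_afm,
    else J = +1 (ferromagnetic).  Keys are (left_spin, right_spin) edges.
    """
    return {
        (i, 4 + j): (-1 if rng.random() < p_afm else +1)
        for i in range(4) for j in range(4)
    }

def energy(spins: np.ndarray, couplings: dict) -> float:
    """Ising energy H = -sum_{<ij>} J_ij s_i s_j over the cell's edges."""
    return -sum(j * spins[a] * spins[b] for (a, b), j in couplings.items())

rng = np.random.default_rng(3)
J = chimera_cell_couplings(p_afm=0.1985, rng=rng)  # near the reported threshold
s = rng.choice([-1, +1], size=8)                   # 8 spins per unit cell
print(energy(s, J))
```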

  17. Benchmarking in pathology: development of a benchmarking complexity unit and associated key performance indicators.

    Science.gov (United States)

    Neil, Amanda; Pfeffer, Sally; Burnett, Leslie

    2013-01-01

    This paper details the development of a new type of pathology laboratory productivity unit, the benchmarking complexity unit (BCU). The BCU provides a comparative index of laboratory efficiency, regardless of test mix. It also enables estimation of a measure of how much complex pathology a laboratory performs, and the identification of peer organisations for the purposes of comparison and benchmarking. The BCU is based on the theory that wage rates reflect productivity at the margin. A weighting factor for the ratio of medical to technical staff time was dynamically calculated based on actual participant site data. Given this weighting, a complexity value for each test, at each site, was calculated. The median complexity value (number of BCUs) for that test across all participating sites was taken as its complexity value for the Benchmarking in Pathology Program. The BCU allowed implementation of an unbiased comparison unit and test listing that was found to be a robust indicator of the relative complexity for each test. Employing the BCU data, a number of Key Performance Indicators (KPIs) were developed, including three that address comparative organisational complexity, analytical depth and performance efficiency, respectively. Peer groups were also established using the BCU combined with simple organisational and environmental metrics. The BCU has enabled productivity statistics to be compared between organisations. The BCU corrects for differences in test mix and workload complexity of different organisations and also allows for objective stratification into peer groups.
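
    A minimal sketch of the aggregation step, assuming the complexity value is a wage-ratio-weighted sum of medical and technical staff time per test (the paper's exact formula is not reproduced here): per-site complexity values are computed, and the cross-site median becomes the test's BCU. All figures below are invented.

```python
import pandas as pd

# Hypothetical per-site data: minutes of medical and technical staff time
# consumed per test, at each participating site.
df = pd.DataFrame({
    "site":     ["S1", "S1", "S2", "S2"],
    "test":     ["FBC", "biopsy", "FBC", "biopsy"],
    "med_min":  [0.2, 12.0, 0.3, 10.0],
    "tech_min": [4.0,  9.0, 5.0,  8.0],
})

# Placeholder medical:technical wage-rate weighting; in the program this
# was derived dynamically from actual participant site data.
W = 3.0

# Site-level complexity value: weighted staff time per test.
df["complexity"] = W * df["med_min"] + df["tech_min"]

# The BCU for a test is the median complexity across all sites.
bcu = df.groupby("test")["complexity"].median().rename("BCU")
print(bcu)
```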

  18. The Learning Organisation: Results of a Benchmarking Study.

    Science.gov (United States)

    Zairi, Mohamed

    1999-01-01

    Learning in corporations was assessed using these benchmarks: core qualities of creative organizations, characteristic of organizational creativity, attributes of flexible organizations, use of diversity and conflict, creative human resource management systems, and effective and successful teams. These benchmarks are key elements of the learning…

  19. 7 CFR 1709.5 - Determination of energy cost benchmarks.

    Science.gov (United States)

    2010-01-01

    ... recent data available, and periodically revise, the home energy cost benchmarks and the high energy cost ... other petroleum products, wood and other biomass fuels, coal, wind and solar energy. ...

  20. Supermarket Refrigeration System - Benchmark for Hybrid System Control

    DEFF Research Database (Denmark)

    Sloth, Lars Finn; Izadi-Zamanabadi, Roozbeh; Wisniewski, Rafal

    2007-01-01

This paper presents a supermarket refrigeration system as a benchmark for the development of new ideas and the comparison of methods for hybrid systems' modeling and control. The benchmark features switched dynamics and discrete-valued inputs, making it a hybrid system; furthermore, the outputs are subject to a number of constraints. The objective is to develop an efficient and optimal control strategy.
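
    A hybrid system of this flavor can be sketched as a toy display-case model with an on/off valve input and hysteresis switching that keeps the temperature inside a constraint band. The dynamics and parameters below are invented for illustration and are unrelated to the benchmark's actual model.

```python
import numpy as np

def simulate(t_end=600.0, dt=1.0):
    """Toy display-case model with an on/off (discrete-valued) valve input.

    Continuous part: dT/dt = a*(T_amb - T) - b*u, with u in {0, 1}.
    Discrete part: hysteresis switching keeps T within [t_low, t_high].
    """
    a, b, t_amb = 0.005, 0.2, 22.0      # illustrative parameters only
    t_low, t_high = 2.0, 5.0            # output constraint band (degrees C)
    temp, u, log = 4.0, 0, []
    for _ in np.arange(0.0, t_end, dt):
        # Hybrid part: switching decided by guard conditions on the output.
        if temp >= t_high:
            u = 1                       # open injection valve (cooling on)
        elif temp <= t_low:
            u = 0                       # close valve (cooling off)
        temp += dt * (a * (t_amb - temp) - b * u)
        log.append((temp, u))
    return log

trace = simulate()
print(trace[:3])
```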