WorldWideScience

Sample records for standard benchmark problem

  1. Piping benchmark problems for the Westinghouse AP600 Standardized Plant

    International Nuclear Information System (INIS)

    Bezler, P.; DeGrassi, G.; Braverman, J.; Wang, Y.K.

    1997-01-01

    To satisfy the need for verification of the computer programs and modeling techniques that will be used to perform the final piping analyses for the Westinghouse AP600 Standardized Plant, three benchmark problems were developed. The problems are representative piping systems subjected to representative dynamic loads, with solutions developed using the methods proposed for analysis of the AP600 standard design. Combined license licensees will be required to demonstrate that their solutions to these problems are in agreement with the benchmark problem set.

  2. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.

    1980-02-01

    Shielding benchmark problems prepared by the Working Group on Assessment of Shielding Experiments of the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by the Shielding Laboratory of the Japan Atomic Energy Research Institute. Fourteen new shielding benchmark problems are presented in addition to the twenty-one problems proposed previously, for evaluating the calculational algorithms and accuracy of computer codes based on the discrete ordinates and Monte Carlo methods and for evaluating the nuclear data used in such codes. The present benchmark problems principally address the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  3. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement of the original benchmark book, which was first published in February 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.

  4. Benchmark problems for numerical implementations of phase field models

    International Nuclear Information System (INIS)

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; Warren, J.; Heinonen, O. G.

    2016-01-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
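
    For a flavor of the physics such benchmarks exercise, the sketch below integrates a minimal one-dimensional Cahn-Hilliard model of spinodal decomposition with an explicit finite-difference scheme. It is an illustrative toy under assumed parameters (grid, mobility, gradient energy, time step), not one of the CHiMaD/NIST benchmark specifications.

```python
# Minimal 1D Cahn-Hilliard spinodal decomposition sketch (illustrative only;
# domain size, mobility, gradient energy, and time step are assumed values).
import numpy as np

N, dx, dt = 128, 1.0, 0.01   # grid points, spacing, explicit time step
M, kappa, W = 1.0, 1.0, 1.0  # mobility, gradient energy coefficient, barrier height

rng = np.random.default_rng(0)
c = 0.5 + 0.05 * rng.standard_normal(N)  # near-critical composition plus noise

def lap(a):
    # periodic second difference
    return (np.roll(a, 1) + np.roll(a, -1) - 2.0 * a) / dx**2

for step in range(20000):
    dfdc = 2.0 * W * c * (1.0 - c) * (1.0 - 2.0 * c)  # d/dc of W*c^2*(1-c)^2
    mu = dfdc - kappa * lap(c)                        # chemical potential
    c += dt * M * lap(mu)                             # Cahn-Hilliard update

# the mean composition is conserved; the variance grows as domains coarsen
print(f"mean c = {c.mean():.4f}, std c = {c.std():.4f}")
```

    Domains of the two phases emerge and slowly coarsen, which is the qualitative behavior a spinodal decomposition benchmark is designed to probe; the actual benchmark specifications pin down geometry, free energy, and reported metrics far more tightly.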

  5. Resolution for the Loviisa benchmark problem

    International Nuclear Information System (INIS)

    Garcia, C.R.; Quintero, R.; Milian, D.

    1992-01-01

    In the present paper, the Loviisa benchmark problem for cycles 11 and 8 of reactor blocks 1 and 2 of the Loviisa NPP is calculated. This problem uses low-leakage reload patterns and was posed at the second thematic group of the TIC meeting held in Rheinsberg, GDR, in March 1989. The SPPS-1 coarse-mesh code has been used for the calculations.

  6. Benchmark problems for repository siting models

    International Nuclear Information System (INIS)

    Ross, B.; Mercer, J.W.; Thomas, S.D.; Lester, B.H.

    1982-12-01

    This report describes benchmark problems to test computer codes used in siting nuclear waste repositories. Analytical solutions, field problems, and hypothetical problems are included. Problems are included for the following types of codes: ground-water flow in saturated porous media, heat transport in saturated media, ground-water flow in saturated fractured media, heat and solute transport in saturated porous media, solute transport in saturated porous media, solute transport in saturated fractured media, and solute transport in unsaturated porous media.

  7. Comparison of Standard Light Water Reactor Cross-Section Libraries using the United States Nuclear Regulatory Commission Boiling Water Reactor Benchmark Problem

    Directory of Open Access Journals (Sweden)

    Kulesza Joel A.

    2016-01-01

    This paper describes a comparison of contemporary and historical light water reactor shielding and pressure vessel dosimetry cross-section libraries for a boiling water reactor calculational benchmark problem. The calculational benchmark problem was developed at Brookhaven National Laboratory at the request of the U.S. Nuclear Regulatory Commission. The benchmark problem was originally evaluated by Brookhaven National Laboratory using the Oak Ridge National Laboratory discrete ordinates code DORT and the BUGLE-93 cross-section library. In this paper, the Westinghouse RAPTOR-M3G three-dimensional discrete ordinates code was used. A variety of cross-section libraries were used with RAPTOR-M3G, including the BUGLE-93, BUGLE-96, and BUGLE-B7 cross-section libraries developed at Oak Ridge National Laboratory and ALPAN-VII.0 developed at Westinghouse. In comparing the fast reaction rates calculated with the four aforementioned cross-section libraries in the pressure vessel capsule, for six dosimetry reaction rates, a maximum relative difference of 8% was observed. As such, it is concluded that the results calculated by RAPTOR-M3G are consistent with the benchmark and, further, that the different vintage BUGLE cross-section libraries investigated are largely self-consistent.

  8. Assessment of Usability Benchmarks: Combining Standardized Scales with Specific Questions

    Directory of Open Access Journals (Sweden)

    Stephanie Bettina Linek

    2011-12-01

    The usability of Web sites and online services is of rising importance. When creating a completely new Web site, qualitative data are adequate for identifying the major usability problems. However, changes to an existing Web site should be evaluated by a quantitative benchmarking process. This paper describes the creation of a questionnaire that allows quantitative usability benchmarking, i.e., a direct comparison of the different versions of a Web site and an orientation toward general standards of usability. The questionnaire is also open for qualitative data. The methodology is explained using the digital library services of the ZBW.

  9. Supply network configuration—A benchmarking problem

    Science.gov (United States)

    Brandenburg, Marcus

    2018-03-01

    Managing supply networks is a highly relevant task that strongly influences the competitiveness of firms from various industries. Designing supply networks is a strategic process that considerably affects the structure of the whole network. In contrast, supply networks for new products are configured without major adaptations of the existing structure, but the network has to be configured before the new product is actually launched in the marketplace. Due to dynamics and uncertainties, the resulting planning problem is highly complex. However, formal models and solution approaches that support supply network configuration decisions for new products are scant. The paper at hand aims at stimulating related model-based research. To formulate mathematical models and solution procedures, a benchmarking problem is introduced which is derived from a case study of a cosmetics manufacturer. Tasks, objectives, and constraints of the problem are described in great detail and numerical values and ranges of all problem parameters are given. In addition, several directions for future research are suggested.

  10. Statistical benchmarking in utility regulation: Role, standards and methods

    International Nuclear Information System (INIS)

    Newton Lowry, Mark; Getachew, Lullit

    2009-01-01

    Statistical benchmarking is being used with increasing frequency around the world in utility rate regulation. We discuss how and where benchmarking is in use for this purpose and the pros and cons of regulatory benchmarking. We then discuss alternative performance standards and benchmarking methods in regulatory applications. We use these to propose guidelines for the appropriate use of benchmarking in the rate-setting process. The standards, which we term the competitive market and frontier paradigms, have a bearing on method selection. These, along with regulatory experience, suggest that benchmarking can either be used for prudence review in regulation or to establish rates or rate-setting mechanisms directly.

  11. Development of solutions to benchmark piping problems

    Energy Technology Data Exchange (ETDEWEB)

    Reich, M; Chang, T Y; Prachuktam, S; Hartzman, M

    1977-12-01

    Benchmark problems and their solutions are presented. The problems consist of calculating the static and dynamic response of selected piping structures subjected to a variety of loading conditions. The structures range from simple pipe geometries to a representative full-scale primary nuclear piping system, which includes the various components and their supports. These structures are assumed to behave in a linear elastic fashion only, i.e., they experience small deformations and small displacements with no existing gaps, and remain elastic through their entire response. The solutions were obtained by using the program EPIPE, which is a modification of the widely available program SAP IV. A brief outline of the theoretical background of this program and its verification is also included.

  12. Computational benchmark problem for deep penetration in iron

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Carter, L.L.

    1980-01-01

    A calculational benchmark problem which is simple to model and easy to interpret is described. The benchmark consists of monoenergetic 2-, 4-, or 40-MeV neutrons normally incident upon a 3-m-thick pure iron slab. Currents, fluxes, and radiation doses are tabulated throughout the slab.
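
    For orientation on what "deep penetration" means numerically, the uncollided component of a normally incident beam decays as exp(-Σt·x); the sketch below evaluates this over the 3-m slab. The cross-section value used is a rough assumed order of magnitude for iron at a few MeV, not data from the benchmark.

```python
# Uncollided flux attenuation through a thick iron slab (illustrative numbers:
# Sigma_t ~ 0.25 /cm is an assumed order-of-magnitude value, not benchmark data).
import math

sigma_t = 0.25          # assumed macroscopic total cross section of iron [1/cm]
phi0 = 1.0              # incident current [n/cm^2/s]

for depth_cm in (0, 50, 100, 200, 300):
    phi = phi0 * math.exp(-sigma_t * depth_cm)
    print(f"{depth_cm:4d} cm: uncollided flux = {phi:.3e}")
```

    At 300 cm the uncollided flux has fallen by roughly 30 orders of magnitude, which is why such problems stress variance reduction in Monte Carlo codes and angular quadrature in discrete ordinates codes.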

  13. Simplified two and three dimensional HTTR benchmark problems

    International Nuclear Information System (INIS)

    Zhang Zhan; Rahnema, Farzad; Zhang Dingkang; Pounders, Justin M.; Ougouag, Abderrafi M.

    2011-01-01

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of whole-core configurations. In this paper we have created two- and three-dimensional numerical benchmark problems typical of high temperature gas cooled prismatic cores. Additionally, single-cell and single-block benchmark problems are included. These problems were derived from the HTTR start-up experiment. Since the primary utility of the benchmark problems is in code-to-code verification, minor details regarding geometry and material specification of the original experiment have been simplified while retaining the heterogeneity and the major physics properties of the core from a neutronics viewpoint. A six-group material (macroscopic) cross-section library has been generated for the benchmark problems using the lattice depletion code HELIOS. Using this library, Monte Carlo solutions are presented for three configurations (all-rods-in, partially-controlled, and all-rods-out) for both the 2D and 3D problems. These solutions include the core eigenvalues, the block (assembly) averaged fission densities, local peaking factors, the absorption densities in the burnable poison and control rods, and the pin fission density distribution for selected blocks. Also included are the solutions for the single-cell and single-block problems.

  14. The rotating movement of three immiscible fluids - A benchmark problem

    NARCIS (Netherlands)

    Bakker, Mark; Oude Essink, Gualbert; Langevin, Christian D.

    2004-01-01

    A benchmark problem involving the rotating movement of three immiscible fluids is proposed for verifying the density-dependent flow component of groundwater flow codes. The problem consists of a two-dimensional strip in the vertical plane filled with three fluids of different densities separated by interfaces.

  15. Three anisotropic benchmark problems for adaptive finite element methods

    Czech Academy of Sciences Publication Activity Database

    Šolín, Pavel; Čertík, O.; Korous, L.

    2013-01-01

    Roč. 219, č. 13 (2013), s. 7286-7295 ISSN 0096-3003 R&D Projects: GA AV ČR IAA100760702 Institutional support: RVO:61388998 Keywords : benchmark problem * anisotropic solution * boundary layer Subject RIV: BA - General Mathematics Impact factor: 1.600, year: 2013

  16. Validation of NESTLE against static reactor benchmark problems

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    1996-01-01

    The NESTLE advanced nodal code was developed at North Carolina State University with support from Los Alamos National Laboratory and Idaho National Engineering Laboratory. It recently has been benchmarked successfully against measured data from pressurized water reactors (PWRs). However, NESTLE's geometric capabilities are very flexible, and it can be applied to a variety of other types of reactors. This study presents comparisons of NESTLE results with those from other codes for static benchmark problems for PWRs, boiling water reactors (BWRs), high-temperature gas-cooled reactors (HTGRs) and CANDU heavy-water reactors (HWRs)

  17. Validation of NESTLE against static reactor benchmark problems

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    1996-01-01

    The NESTLE advanced nodal code was developed at North Carolina State University with support from Los Alamos National Laboratory and Idaho National Engineering Laboratory. It recently has been benchmarked successfully against measured data from pressurized water reactors (PWRs). However, NESTLE's geometric capabilities are very flexible, and it can be applied to a variety of other types of reactors. This study presents comparisons of NESTLE results with those from other codes for static benchmark problems for PWRs, boiling water reactors (BWRs), high-temperature gas-cooled reactors (HTGRs), and Canada deuterium uranium (CANDU) heavy-water reactors (HWRs)

  18. Implementation and verification of global optimization benchmark problems

    Science.gov (United States)

    Posypkin, Mikhail; Usov, Alexander

    2017-12-01

    The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the process of generating the value of a function and its gradient at a given point, and the interval estimates of a function and its gradient on a given box, using a single description. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification has shown that literary sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.
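
    A single expression description yielding both a function value and its gradient, as the paper's C++ library provides, is commonly realized with forward-mode automatic differentiation. The Python sketch below illustrates the idea on the Rosenbrock test function; it is a minimal stand-in and does not reproduce the library's actual API.

```python
# Minimal forward-mode automatic differentiation: one expression yields both the
# function value and its gradient (illustrative; not the paper's C++ library API).
import numpy as np

class Dual:
    def __init__(self, val, grad):
        self.val, self.grad = float(val), np.asarray(grad, dtype=float)
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o, np.zeros_like(self.grad))
    def __add__(self, o):
        o = self._lift(o); return Dual(self.val + o.val, self.grad + o.grad)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._lift(o); return Dual(self.val - o.val, self.grad - o.grad)
    def __rsub__(self, o):
        return self._lift(o).__sub__(self)
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val, self.val * o.grad + o.val * self.grad)
    __rmul__ = __mul__
    def __pow__(self, n):  # integer powers only; enough for polynomial tests
        return Dual(self.val**n, n * self.val**(n - 1) * self.grad)

def rosenbrock(x, y):  # a classic box-constrained optimization benchmark
    return (1 - x)**2 + 100 * (y - x**2)**2

x, y = Dual(-1.2, [1, 0]), Dual(1.0, [0, 1])
f = rosenbrock(x, y)
print(f"f = {f.val:.4f}, grad = {f.grad}")  # value and exact gradient together
```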

  19. Implementation and verification of global optimization benchmark problems

    Directory of Open Access Journals (Sweden)

    Posypkin Mikhail

    2017-12-01

    The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the process of generating the value of a function and its gradient at a given point, and the interval estimates of a function and its gradient on a given box, using a single description. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification has shown that literary sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.

  20. A proposed benchmark problem for cargo nuclear threat monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Holmes, Thomas Wesley, E-mail: twholmes@ncsu.edu [Center for Engineering Applications of Radioisotopes, Nuclear Engineering Department, North Carolina State University, Raleigh, NC 27695-7909 (United States); Calderon, Adan; Peeples, Cody R.; Gardner, Robin P. [Center for Engineering Applications of Radioisotopes, Nuclear Engineering Department, North Carolina State University, Raleigh, NC 27695-7909 (United States)

    2011-10-01

    There is currently a great deal of technical and political effort focused on reducing the risk of potential attacks on the United States involving radiological dispersal devices or nuclear weapons. This paper proposes a benchmark problem for gamma-ray and X-ray cargo monitoring with results calculated using MCNP5, v1.51. The primary goal is to provide a benchmark problem that will allow researchers in this area to evaluate Monte Carlo models for both speed and accuracy in both forward and inverse calculational codes and approaches for nuclear security applications. A previous benchmark problem was developed by one of the authors (RPG) for two similar oil well logging problems (Gardner and Verghese, 1991). One of those benchmarks has recently been used by at least two researchers in the nuclear threat area to evaluate the speed and accuracy of Monte Carlo codes combined with variance reduction techniques. This apparent need has prompted us to design this benchmark problem specifically for the nuclear threat researcher. The benchmark consists of a conceptual design and preliminary calculational results using gamma-ray interactions in a system containing three thicknesses of three different shielding materials. A point source is placed inside three materials: lead, aluminum, and plywood. The first two materials are in right circular cylindrical form while the third is a cube. The entire system rests on a sufficiently thick lead base so as to reduce undesired scattering events. The configuration is arranged in such a manner that as a gamma ray moves from the source outward it first passes through the lead circular cylinder, then the aluminum circular cylinder, and finally the wooden cube before reaching the detector. A 2 in. x 4 in. x 16 in. box-style NaI(Tl) detector was placed 1 m from the point source located in the center, with the 4 in. x 16 in. side facing the system. The two sources used in the benchmark are Cs-137 and U-235.

  1. Benchmark Problems of the Geothermal Technologies Office Code Comparison Study

    Energy Technology Data Exchange (ETDEWEB)

    White, Mark D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)]; Podgorney, Robert; Kelkar, Sharad M.; McClure, Mark W.; Danko, George; Ghassemi, Ahmad; Fu, Pengcheng; Bahrami, Davood; Barbier, Charlotte; Cheng, Qinglu; Chiu, Kit-Kwan; Detournay, Christine; Elsworth, Derek; Fang, Yi; Furtney, Jason K.; Gan, Quan; Gao, Qian; Guo, Bin; Hao, Yue; Horne, Roland N.; Huang, Kai; Im, Kyungjae; Norbeck, Jack; Rutqvist, Jonny; Safari, M. R.; Sesetty, Varahanaresh; Sonnenthal, Eric; Tao, Qingfeng; White, Signe K.; Wong, Yang; Xia, Yidong

    2016-12-02

    A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office has sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study, benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. Study participants submitted solutions to problems for which their simulation tools were deemed capable or nearly capable. Some participating codes were originally developed for EGS applications whereas some others were designed for different applications but can simulate processes similar to those in EGS. Solution submissions from both were encouraged. In some cases, participants made small incremental changes to their numerical simulation codes to address specific elements of the problem, and in other cases participants submitted solutions with existing simulation tools, acknowledging the limitations of the code. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research: stimulation, development, and circulation in two separate reservoirs.

  2. Using PISA as an International Benchmark in Standard Setting.

    Science.gov (United States)

    Phillips, Gary W; Jiang, Tao

    2015-01-01

    This study describes how the Programme for International Student Assessment (PISA) can be used to internationally benchmark state performance standards. The process is accomplished in three steps. First, PISA items are embedded in the administration of the state assessment and calibrated on the state scale. Second, the international item calibrations are then used to link the state scale to the PISA scale through common-item linking. Third, the statistical linking results are used as part of the state standard-setting process to help standard-setting panelists determine how high their state standards need to be in order to be internationally competitive. This process was carried out in Delaware, Hawaii, and Oregon, in three subjects (science, mathematics, and reading), with initial results reported by Phillips and Jiang (2011). An in-depth discussion of methods and results is reported in this article for one subject (mathematics) and one state (Hawaii).
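
    Step two, placing state item calibrations onto the PISA scale through common items, can be illustrated with a linear (mean-sigma) linking; the item difficulties below are invented for the example and are not actual PISA or state calibrations.

```python
# Mean-sigma common-item linking sketch (made-up difficulties, not real data).
import numpy as np

b_state = np.array([-1.10, -0.35, 0.20, 0.80, 1.45])  # common items, state scale
b_pisa  = np.array([-0.90, -0.20, 0.35, 1.05, 1.60])  # same items, PISA scale

A = b_pisa.std(ddof=1) / b_state.std(ddof=1)   # slope of the linear linking
B = b_pisa.mean() - A * b_state.mean()         # intercept

theta_state = 0.50                              # a state-scale ability estimate
theta_pisa = A * theta_state + B                # the same ability on the PISA scale
print(f"A = {A:.3f}, B = {B:.3f}, theta on PISA scale = {theta_pisa:.3f}")
```

    With the link in hand, candidate state cut scores can be mapped onto the PISA metric so panelists can see how proposed standards compare internationally.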

  3. The rotating movement of three immiscible fluids - A benchmark problem

    Science.gov (United States)

    Bakker, M.; Oude, Essink G.H.P.; Langevin, C.D.

    2004-01-01

    A benchmark problem involving the rotating movement of three immiscible fluids is proposed for verifying the density-dependent flow component of groundwater flow codes. The problem consists of a two-dimensional strip in the vertical plane filled with three fluids of different densities separated by interfaces. Initially, the interfaces between the fluids make a 45° angle with the horizontal. Over time, the fluids rotate to the stable position whereby the interfaces are horizontal; all flow is caused by density differences. Two cases of the problem are presented, one resulting in a symmetric flow field and one resulting in an asymmetric flow field. An exact analytical solution for the initial flow field is presented by application of the vortex theory and complex variables. Numerical results are obtained using three variable-density groundwater flow codes (SWI, MOCDENS3D, and SEAWAT). Initial horizontal velocities of the interfaces, as simulated by the three codes, compare well with the exact solution. The three codes are used to simulate the positions of the interfaces at two times; the three codes produce nearly identical results. The agreement between the results is evidence that the specific rotational behavior predicted by the models is correct. It also shows that the proposed problem may be used to benchmark variable-density codes. It is concluded that the three models can be used to accurately model the movement of interfaces between immiscible fluids, and have little or no numerical dispersion. © 2003 Elsevier B.V. All rights reserved.

  4. A highly simplified 3D BWR benchmark problem

    International Nuclear Information System (INIS)

    Douglass, Steven; Rahnema, Farzad

    2010-01-01

    The resurgent interest in reactor development associated with the nuclear renaissance has paralleled significant advancements in computer technology, and allowed for unprecedented computational power to be applied to the numerical solution of neutron transport problems. The current generation of core-level solvers relies on a variety of approximate methods (e.g. nodal diffusion theory, spatial homogenization) to efficiently solve reactor problems with limited computer power; however, in recent years, the increased availability of high-performance computer systems has created an interest in the development of new methods and codes (deterministic and Monte Carlo) to directly solve whole-core reactor problems with full heterogeneity (lattice and core level). This paper presents the development of a highly simplified heterogeneous 3D benchmark problem with physics characteristic of boiling water reactors. The aim of this work is to provide a problem for developers to use to validate new whole-core methods and codes which take advantage of the advanced computational capabilities that are now available. Additionally, eigenvalues and an overview of the pin fission density distribution are provided for the benefit of the reader. (author)

  5. PID controller tuning using metaheuristic optimization algorithms for benchmark problems

    Science.gov (United States)

    Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.

    2017-11-01

    This paper addresses finding optimal PID controller parameters using particle swarm optimization (PSO), the genetic algorithm (GA), and the simulated annealing (SA) algorithm. The algorithms are exercised through simulation of a chemical process and an electrical system, and the PID controller is tuned for each. Two different fitness functions, integral time absolute error (ITAE) and time-domain specifications, were chosen and applied with PSO, GA, and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled tank system and a DC motor. Finally, a comparative study has been carried out across the algorithms based on best cost, number of iterations, and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
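
    A minimal sketch of the PSO-with-ITAE ingredient: tune PID gains for the step response of a simple first-order plant by minimizing the integral of time-weighted absolute error. The plant model, gain bounds, actuator limits, and swarm settings are illustrative assumptions, not the paper's coupled-tank or DC-motor models.

```python
# PID tuning by particle swarm optimization with an ITAE fitness function.
# Illustrative sketch: the first-order plant and all settings are assumed values.
import numpy as np

def itae(gains, dt=0.01, t_end=5.0, tau=1.0):
    kp, ki, kd = gains
    y = integ = e_prev = 0.0
    cost = t = 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y                     # unit step setpoint
        integ += e * dt
        deriv = (e - e_prev) / dt
        u = np.clip(kp * e + ki * integ + kd * deriv, -10.0, 10.0)  # saturation
        e_prev = e
        y += dt * (-y + u) / tau        # first-order plant: dy/dt = (u - y)/tau
        t += dt
        cost += t * abs(e) * dt         # ITAE accumulation
    return cost

rng = np.random.default_rng(1)
lo, hi = np.zeros(3), np.array([10.0, 10.0, 1.0])   # assumed gain bounds
x = rng.uniform(lo, hi, size=(30, 3))               # particle positions (Kp,Ki,Kd)
v = np.zeros_like(x)
pbest, pcost = x.copy(), np.array([itae(p) for p in x])

for _ in range(40):
    gidx = pcost.argmin()                           # swarm-best particle
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (pbest[gidx] - x)
    x = np.clip(x + v, lo, hi)
    cost = np.array([itae(p) for p in x])
    better = cost < pcost
    pbest[better], pcost[better] = x[better], cost[better]

kp, ki, kd = pbest[pcost.argmin()]
print(f"tuned gains: Kp={kp:.2f}, Ki={ki:.2f}, Kd={kd:.2f}, ITAE={pcost.min():.4f}")
```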

  6. Investigations of the BSS-6 problem from the ANL benchmark problem book

    International Nuclear Information System (INIS)

    Babanakov, D.M.; Suslov, I.R.

    1996-01-01

    Results of extended numerical investigations of solutions to the BSS-6 problems from the ANL Benchmark Problem Book are presented. The influence of the space discretization error is evaluated for different space finite-difference schemes and for all of the BSS-6 problems; asymptotic (mesh-size-independent) solutions to the problems are obtained. On the basis of an analytical solution technique, a comparative analysis of the time calculational schemes used in the BSS-6 problems is carried out. A modification of the Newton method for ill-conditioned systems of nonlinear algebraic equations arising within the framework of the analytical solution technique is outlined. (author)

  7. Benchmarking with the BLASST Sessional Staff Standards Framework

    Science.gov (United States)

    Luzia, Karina; Harvey, Marina; Parker, Nicola; McCormack, Coralie; Brown, Natalie R.

    2013-01-01

    Benchmarking as a type of knowledge-sharing around good practice within and between institutions is increasingly common in the higher education sector. More recently, benchmarking as a process that can contribute to quality enhancement has been deployed across numerous institutions with a view to systematising frameworks to assure and enhance the…

  8. Standard Guide for Benchmark Testing of Light Water Reactor Calculations

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...

  9. A novel hybrid meta-heuristic technique applied to the well-known benchmark optimization problems

    Science.gov (United States)

    Abtahi, Amir-Reza; Bijari, Afsane

    2017-09-01

    In this paper, a hybrid meta-heuristic algorithm based on the imperialist competitive algorithm (ICA), harmony search (HS), and simulated annealing (SA) is presented. The body of the proposed hybrid algorithm is based on ICA. The proposed hybrid algorithm inherits the advantages of the harmony-creation process of the HS algorithm to improve the exploitation phase of ICA. In addition, the proposed hybrid algorithm uses SA to strike a balance between the exploration and exploitation phases. The proposed hybrid algorithm is compared with several meta-heuristic methods, including the genetic algorithm (GA), HS, and ICA, on several well-known benchmark instances. The comprehensive experiments and statistical analysis on standard benchmark functions certify the superiority of the proposed method over the other algorithms. The efficacy of the proposed hybrid algorithm is promising, and it can be used in several real-life engineering and management problems.
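
    Of the three ingredients, the SA component is the simplest to sketch; below it is applied on its own to the Rastrigin benchmark function. This is a bare-bones stand-in, not the authors' full ICA/HS/SA hybrid, and the cooling schedule and step size are assumed values.

```python
# Bare simulated annealing on the Rastrigin benchmark (an illustrative stand-in
# for the SA component of the hybrid; schedule and step size are assumed).
import numpy as np

def rastrigin(x):
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

rng = np.random.default_rng(7)
x = rng.uniform(-5.12, 5.12, size=5)   # the standard Rastrigin search box
fx, T = rastrigin(x), 10.0
xbest, fbest = x.copy(), fx

for _ in range(20000):
    cand = np.clip(x + rng.normal(scale=0.3, size=x.size), -5.12, 5.12)
    fc = rastrigin(cand)
    # accept improvements always; accept uphill moves with Boltzmann probability
    if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
        x, fx = cand, fc
    if fc < fbest:
        xbest, fbest = cand.copy(), fc
    T *= 0.9995                         # geometric cooling

print(f"best found: f = {fbest:.4f} at x = {np.round(xbest, 3)}")
```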

  10. Development of common user data model for APOLLO3 and MARBLE and application to benchmark problems

    International Nuclear Information System (INIS)

    Yokoyama, Kenji

    2009-07-01

    A Common User Data Model, CUDM, has been developed for the purpose of benchmark calculations between the APOLLO3 and MARBLE code systems. The current version of CUDM was designed for core calculation benchmark problems with three-dimensional Cartesian (3-D XYZ) geometry. CUDM is able to manage all input/output data such as 3-D XYZ geometry, effective macroscopic cross sections, the effective multiplication factor, and neutron flux. In addition, visualization tools for geometry and neutron flux were included. CUDM was designed using object-oriented techniques and implemented in the Python programming language. Based on CUDM, a prototype system for benchmark calculations, CUDM-benchmark, was also developed. The CUDM-benchmark supports input/output data conversion for the IDT solver in APOLLO3, and the TRITAC and SNT solvers in MARBLE. In order to evaluate the pertinence of CUDM, the CUDM-benchmark was applied to benchmark problems proposed by T. Takeda, G. Chiba and I. Zmijarevic. It was verified that the CUDM-benchmark successfully reproduced the results calculated with reference input data files, and provided consistent results among all the solvers by using one common input data set defined by CUDM. In addition, a detailed benchmark calculation for the Chiba benchmark was performed using the CUDM-benchmark. The Chiba benchmark is a neutron transport benchmark problem for a fast criticality assembly without homogenization. This benchmark problem consists of 4 core configurations which have different sodium void regions, and each core configuration is defined by more than 5,000 fuel/material cells. In this application, it was found that the results by the IDT and SNT solvers agreed well with the reference results by a Monte Carlo code. In addition, model effects such as the quadrature set effect, Sn order effect, and mesh size effect were systematically evaluated and summarized in this report. (author)
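
    The abstract notes that CUDM is object-oriented, written in Python, and manages 3-D XYZ geometry, macroscopic cross sections, the multiplication factor, and flux. A toy illustration of such a common data model follows; all class and field names here are invented for the example and do not reproduce CUDM's actual interface.

```python
# Toy "common user data model" for 3-D XYZ core benchmark data (names invented;
# an illustration of the concept, not CUDM's real API).
from dataclasses import dataclass, field
from typing import Optional
import numpy as np

@dataclass
class CoreModel3D:
    nx: int
    ny: int
    nz: int
    ngroups: int
    material_map: np.ndarray                      # (nx, ny, nz) material ids
    sigma_t: dict = field(default_factory=dict)   # material id -> (ngroups,) totals
    keff: Optional[float] = None
    flux: Optional[np.ndarray] = None             # (ngroups, nx, ny, nz) when solved

    def validate(self):
        assert self.material_map.shape == (self.nx, self.ny, self.nz)
        for m, xs in self.sigma_t.items():
            assert xs.shape == (self.ngroups,), f"material {m}: wrong group count"
        if self.flux is not None:
            assert self.flux.shape == (self.ngroups, self.nx, self.ny, self.nz)

# a 2x2x1 two-group example: one fuel material (id 1) among moderator (id 0)
core = CoreModel3D(2, 2, 1, 2,
                   material_map=np.array([[[0], [1]], [[1], [0]]]),
                   sigma_t={0: np.array([0.2, 1.1]), 1: np.array([0.3, 0.9])})
core.validate()
print("model OK:", core.material_map.shape, "cells,", core.ngroups, "groups")
```

    The benefit of such a shared model is that each solver only needs one converter to and from the common format, rather than pairwise converters between every pair of codes.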

  11. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  12. Within-Group Effect-Size Benchmarks for Problem-Solving Therapy for Depression in Adults

    Science.gov (United States)

    Rubin, Allen; Yu, Miao

    2017-01-01

    This article provides benchmark data on within-group effect sizes from published randomized clinical trials that supported the efficacy of problem-solving therapy (PST) for depression among adults. Benchmarks are broken down by type of depression (major or minor), type of outcome measure (interview or self-report scale), whether PST was provided…

  13. A Comparative Study of Differential Evolution, Particle Swarm Optimization, and Evolutionary Algorithms on Numerical Benchmark Problems

    DEFF Research Database (Denmark)

    Vesterstrøm, Jacob Svaneborg; Thomsen, Rene

    2004-01-01

    Several extensions to evolutionary algorithms (EAs) and particle swarm optimization (PSO) have been suggested during the last decades offering improved performance on selected benchmark problems. Recently, another search heuristic termed differential evolution (DE) has shown superior performance...

  14. Authentication: A Standard Problem or a Problem of Standards?

    Directory of Open Access Journals (Sweden)

    Amanda Capes-Davis

    2016-06-01

    Reproducibility and transparency in biomedical sciences have been called into question, and scientists have been found wanting as a result. Putting aside deliberate fraud, there is evidence that a major contributor to lack of reproducibility is insufficient quality assurance of reagents used in preclinical research. Cell lines are widely used in biomedical research to understand fundamental biological processes and disease states, yet most researchers do not perform a simple, affordable test to authenticate these key resources. Here, we provide a synopsis of the problems we face and how standards can contribute to an achievable solution.

  15. Benchmarking Problems Used in Second Year Level Organic Chemistry Instruction

    Science.gov (United States)

    Raker, Jeffrey R.; Towns, Marcy H.

    2010-01-01

    Investigations of the problem types used in college-level general chemistry examinations have been reported in this Journal and were first reported in the "Journal of Chemical Education" in 1924. This study extends the findings from general chemistry to the problems of four college-level organic chemistry courses. Three problem…

  16. Multiphysics field analysis and multiobjective design optimization: a benchmark problem

    Czech Academy of Sciences Publication Activity Database

    di Barba, P.; Doležel, Ivo; Karban, P.; Kůs, P.; Mach, F.; Mognaschi, M. E.; Savini, A.

    2014-01-01

    Roč. 22, č. 7 (2014), s. 1214-1225 ISSN 1741-5977 R&D Projects: GA ČR(CZ) GAP102/11/0498 Institutional support: RVO:61388998 Keywords : coupled-field problems * finite-element analysis * hp-FEM adaptation Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 0.868, year: 2014

  17. A suite of benchmark and challenge problems for enhanced geothermal systems

    Energy Technology Data Exchange (ETDEWEB)

    White, Mark; Fu, Pengcheng; McClure, Mark; Danko, George; Elsworth, Derek; Sonnenthal, Eric; Kelkar, Sharad; Podgorney, Robert

    2017-11-06

    A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study, benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research: stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems was designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of

  18. MHD and heat transfer benchmark problems for liquid metal flow in rectangular ducts. Final paper

    International Nuclear Information System (INIS)

    Sidorenkov, S.I.; Hua, T.Q.; Araseki, Hideo

    1994-07-01

    Liquid metal cooling systems of a self-cooled blanket in a tokamak reactor will likely include channels of rectangular cross section in which liquid metal is circulated in the presence of strong magnetic fields. MHD pressure drop, velocity distribution, and heat transfer characteristics are important issues in the engineering design considerations. Computer codes for the reliable solution of three-dimensional MHD flow problems are needed for fusion-relevant conditions. This paper describes four benchmark problems for validating magnetohydrodynamic (MHD) and heat transfer computer codes. The problems include rectangular duct geometry with uniform and nonuniform magnetic fields, with and without surface heat flux, and various rectangular cross sections. Two of the problems are based on experiments. Participants in this benchmarking activity come from three countries: the Russian Federation, the United States, and Japan. The solution methods for the problems are described. Results from the different computer codes are presented and compared.

  19. Safety, codes and standards for hydrogen installations. Metrics development and benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Harris, Aaron P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dedrick, Daniel E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); LaFleur, Angela Christine [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); San Marchi, Christopher W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-04-01

    Automakers and fuel providers have made public commitments to commercialize light duty fuel cell electric vehicles and fueling infrastructure in select US regions beginning in 2014. The development, implementation, and advancement of meaningful codes and standards is critical to enable the effective deployment of clean and efficient fuel cell and hydrogen solutions in the energy technology marketplace. Metrics pertaining to the development and implementation of safety knowledge, codes, and standards are important to communicate progress and inform future R&D investments. This document describes the development and benchmarking of metrics specific to the development of hydrogen specific codes relevant for hydrogen refueling stations. These metrics will be most useful as the hydrogen fuel market transitions from pre-commercial to early-commercial phases. The target regions in California will serve as benchmarking case studies to quantify the success of past investments in research and development supporting safety codes and standards R&D.

  20. Merton's problem for an investor with a benchmark in a Barndorff-Nielsen and Shephard market.

    Science.gov (United States)

    Lennartsson, Jan; Lindberg, Carl

    2015-01-01

    To try to outperform an externally given benchmark with known weights is the most common equity mandate in the financial industry. For quantitative investors, this task is predominantly approached by optimizing their portfolios consecutively over short time horizons with one-period models. We seek in this paper to provide a theoretical justification for this practice when the underlying market is of Barndorff-Nielsen and Shephard type. This is done by verifying that an investor who seeks to maximize her expected terminal exponential utility of wealth in excess of her benchmark will in fact use an optimal portfolio equivalent to the one-period Markowitz mean-variance solution, applied in continuum, under the corresponding Black-Scholes market. Further, we can represent the solution to the optimization problem in Feynman-Kac form. Hence, the problem, and its solution, is analogous to Merton's classical portfolio problem, with the main difference that Merton maximizes expected utility of terminal wealth, not wealth in excess of a benchmark.
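
    The one-period problem the authors recover has the textbook closed form: maximizing expected excess return over the benchmark minus a quadratic risk penalty, max over w of w'μ - (γ/2)·w'Σw, gives active weights w* = (1/γ)·Σ⁻¹μ. A small numerical sketch with invented inputs:

```python
# Closed-form one-period mean-variance (Markowitz) active weights:
# w* = (1/gamma) * inv(Sigma) @ mu.  All numbers are invented for illustration.
import numpy as np

mu = np.array([0.03, 0.05, 0.02])            # expected excess returns vs benchmark
Sigma = np.array([[0.040, 0.006, 0.004],
                  [0.006, 0.090, 0.010],
                  [0.004, 0.010, 0.060]])    # return covariance matrix
gamma = 4.0                                  # risk-aversion coefficient

w = np.linalg.solve(gamma * Sigma, mu)       # optimal deviations from benchmark
print("active weights:", np.round(w, 4))
print("expected active return:", round(float(mu @ w), 5))
```

    Repeating this one-period optimization period after period is exactly the industry practice whose optimality the paper establishes for Barndorff-Nielsen and Shephard markets.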

  1. Fault estimation - A standard problem approach

    DEFF Research Database (Denmark)

    Stoustrup, J.; Niemann, Hans Henrik

    2002-01-01

    This paper presents a range of optimization-based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem set-up introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis problems can be solved by standard optimization techniques.

  2. Piping benchmark problems. Volume 1. Dynamic analysis uniform support motion response spectrum method

    Energy Technology Data Exchange (ETDEWEB)

    Bezler, P.; Hartzman, M.; Reich, M.

    1980-08-01

    A set of benchmark problems and solutions have been developed for verifying the adequacy of computer programs used for dynamic analysis and design of nuclear piping systems by the Response Spectrum Method. The problems range from simple to complex configurations which are assumed to experience linear elastic behavior. The dynamic loading is represented by uniform support motion, assumed to be induced by seismic excitation in three spatial directions. The solutions consist of frequencies, participation factors, nodal displacement components and internal force and moment components. Solutions to associated anchor point motion static problems are not included.

  3. The Problem of National Standards.

    Science.gov (United States)

    Brannon, Lil

    1995-01-01

    Argues that the development of national standards is another way in which the literacy crisis is being managed and maintained, a crisis arising from the tension between America's promise to the individual that he or she will have full access to intellectual resources and the needs of capitalism to have a differentiated, stratified workforce. (TB)

  4. Benchmark results for the critical slab and sphere problem in one-speed neutron transport theory

    International Nuclear Information System (INIS)

    Rawat, Ajay; Mohankumar, N.

    2011-01-01

    Research highlights: The critical slab and sphere problem in neutron transport under the Case eigenfunction formalism is considered. These equations reduce to integral expressions involving X functions. Gauss quadrature is not ideal, but DE quadrature is well suited. A several-fold decrease in computational effort with improved accuracy is realisable. - Abstract: In this paper benchmark numerical results for the one-speed criticality problem with isotropic scattering for the slab and sphere are reported. The Fredholm integral equations of the second kind based on the Case eigenfunction formalism are numerically solved by Neumann iterations with double exponential (DE) quadrature.
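
    The highlights single out double exponential (DE, tanh-sinh) quadrature. The sketch below shows the rule in its generic form applied to a smooth test integral over (-1, 1); it does not reproduce the transport X-function kernels themselves, and the step size and truncation are assumed values.

```python
# Generic double exponential (tanh-sinh) quadrature on (-1, 1), applied to a
# smooth test integrand (illustrative; not the transport X-function integrals).
import math

def tanh_sinh(f, h=0.05, N=100):
    total = 0.0
    for k in range(-N, N + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(u)                                    # node in (-1, 1)
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(u)**2  # Jacobian dx/dt
        total += f(x) * w
    return h * total

approx = tanh_sinh(lambda x: 1.0 / (1.0 + x * x))  # exact value is pi/2
print(f"tanh-sinh: {approx:.12f}   exact: {math.pi / 2:.12f}")
```

    The double exponential decay of the transformed integrand toward the interval endpoints is what lets the trapezoidal sum converge so quickly, even for integrands with endpoint singularities.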

  5. Piping benchmark problems: dynamic analysis independent support motion response spectrum method

    International Nuclear Information System (INIS)

    Bezler, P.; Subudhi, M.; Hartzman, M.

    1985-08-01

    Four benchmark problems and solutions were developed for verifying the adequacy of computer programs used for the dynamic analysis and design of elastic piping systems by the independent support motion, response spectrum method. The dynamic loading is represented by distinct sets of support excitation spectra assumed to be induced by non-uniform excitation in three spatial directions. Complete input descriptions for each problem are provided, and the solutions include predicted natural frequencies, participation factors, nodal displacements, and element forces for independent support excitation and also for uniform envelope spectrum excitation. Solutions to the associated anchor point pseudo-static displacements are not included.

  6. LHC benchmark scenarios for the real Higgs singlet extension of the standard model

    International Nuclear Information System (INIS)

    Robens, Tania; Stefaniak, Tim

    2016-01-01

    We present benchmark scenarios for searches for an additional Higgs state in the real Higgs singlet extension of the Standard Model in Run 2 of the LHC. The scenarios are selected such that they fulfill all relevant current theoretical and experimental constraints, but can potentially be discovered at the current LHC run. We take into account the results presented in earlier work and update the experimental constraints from relevant LHC Higgs searches and signal rate measurements. The benchmark scenarios are given separately for the low-mass and high-mass region, i.e. the mass range where the additional Higgs state is lighter or heavier than the discovered Higgs state at around 125 GeV. They have also been presented in the framework of the LHC Higgs Cross Section Working Group. (orig.)

  7. Verification of cardiac mechanics software: benchmark problems and solutions for testing active and passive material behaviour.

    Science.gov (United States)

    Land, Sander; Gurev, Viatcheslav; Arens, Sander; Augustin, Christoph M; Baron, Lukas; Blake, Robert; Bradley, Chris; Castro, Sebastian; Crozier, Andrew; Favino, Marco; Fastl, Thomas E; Fritz, Thomas; Gao, Hao; Gizzi, Alessio; Griffith, Boyce E; Hurtado, Daniel E; Krause, Rolf; Luo, Xiaoyu; Nash, Martyn P; Pezzuto, Simone; Plank, Gernot; Rossi, Simone; Ruprecht, Daniel; Seemann, Gunnar; Smith, Nicolas P; Sundnes, Joakim; Rice, J Jeremy; Trayanova, Natalia; Wang, Dafang; Jenny Wang, Zhinuo; Niederer, Steven A

    2015-12-08

    Models of cardiac mechanics are increasingly used to investigate cardiac physiology. These models are characterized by a high level of complexity, including the particular anisotropic material properties of biological tissue and the actively contracting material. A large number of independent simulation codes have been developed, but a consistent way of verifying the accuracy and replicability of simulations is lacking. To aid in the verification of current and future cardiac mechanics solvers, this study provides three benchmark problems for cardiac mechanics. These benchmark problems test the ability to accurately simulate pressure-type forces that depend on the deformed object's geometry, anisotropic and spatially varying material properties similar to those seen in the left ventricle, and active contractile forces. The benchmark was solved by 11 different groups to generate consensus solutions, with typical differences in higher-resolution solutions of approximately 0.5%, and consistent results between linear, quadratic and cubic finite elements as well as different approaches to simulating incompressible materials. Online tools and solutions are made available to allow these tests to be used effectively in the verification of future cardiac mechanics software.

  8. Fault estimation - A standard problem approach

    DEFF Research Database (Denmark)

    Stoustrup, J.; Niemann, Hans Henrik

    2002-01-01

    This paper presents a range of optimization-based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem set-up introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis problems can be solved by standard optimization techniques. The proposed methods include (1) fault diagnosis (fault estimation (FE)) for systems with model uncertainties, (2) FE for systems with parametric faults, and (3) FE for a class of nonlinear systems.

  9. Standardized Definitions for Code Verification Test Problems

    Energy Technology Data Exchange (ETDEWEB)

    Doebling, Scott William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-14

    This document contains standardized definitions for several commonly used code verification test problems. These definitions are intended to contain sufficient information to set up the test problem in a computational physics code. These definitions are intended to be used in conjunction with exact solutions to these problems generated using ExactPack, www.github.com/lanl/exactpack.

  10. Comparison of three-dimensional ocean general circulation models on a benchmark problem

    International Nuclear Information System (INIS)

    Chartier, M.

    1990-12-01

    A French and an American ocean general circulation model for deep-sea disposal of radioactive wastes are compared on a benchmark test problem. Both models are three-dimensional. They solve the hydrostatic primitive equations of the ocean with two different finite-difference techniques. Results show that the dynamics simulated by the two models are consistent. Several methods for starting a model run from a known state are tested with the French model: the diagnostic method, the prognostic method, acceleration of convergence, and the robust-diagnostic method.

  11. Comparison of typical inelastic analysis predictions with benchmark problem experimental results

    International Nuclear Information System (INIS)

    Clinard, J.A.; Corum, J.M.; Sartory, W.K.

    1975-01-01

    The results of exemplary inelastic analyses for experimental benchmark problems on reactor components are presented. Consistent analytical procedures and constitutive relations were used in each of the analyses, and the material behavior data presented in the Appendix were used in all cases. Two finite-element inelastic computer programs were employed. These programs implement the analysis procedures and constitutive equations for type 304 stainless steel that are currently used in many analyses of elevated-temperature nuclear reactor system components. The analysis procedures and constitutive relations are briefly discussed, and representative analytical results are presented and compared to the test data. The results that are presented demonstrate the feasibility of performing inelastic analyses for the types of problems discussed, and they are indicative of the general level of agreement that the analyst might expect when using conventional inelastic analysis procedures. (U.S.)

  12. Standardizing Benchmark Dose Calculations to Improve Science-Based Decisions in Human Health Assessments

    Science.gov (United States)

    Wignall, Jessica A.; Shapiro, Andrew J.; Wright, Fred A.; Woodruff, Tracey J.; Chiu, Weihsueh A.; Guyton, Kathryn Z.

    2014-01-01

    Background: Benchmark dose (BMD) modeling computes the dose associated with a prespecified response level. While offering advantages over traditional points of departure (PODs), such as no-observed-adverse-effect-levels (NOAELs), BMD methods have lacked consistency and transparency in application, interpretation, and reporting in human health assessments of chemicals. Objectives: We aimed to apply a standardized process for conducting BMD modeling to reduce inconsistencies in model fitting and selection. Methods: We evaluated 880 dose–response data sets for 352 environmental chemicals with existing human health assessments. We calculated benchmark doses and their lower limits [10% extra risk, or change in the mean equal to 1 SD (BMD/L10/1SD)] for each chemical in a standardized way with prespecified criteria for model fit acceptance. We identified study design features associated with acceptable model fits. Results: We derived values for 255 (72%) of the chemicals. Batch-calculated BMD/L10/1SD values were significantly and highly correlated (R2 of 0.95 and 0.83, respectively, n = 42) with PODs previously used in human health assessments, with values similar to reported NOAELs. Specifically, the median ratio of BMDs10/1SD:NOAELs was 1.96, and the median ratio of BMDLs10/1SD:NOAELs was 0.89. We also observed a significant trend of increasing model viability with increasing number of dose groups. Conclusions: BMD/L10/1SD values can be calculated in a standardized way for use in health assessments on a large number of chemicals and critical effects. This facilitates the exploration of health effects across multiple studies of a given chemical or, when chemicals need to be compared, providing greater transparency and efficiency than current approaches. Citation: Wignall JA, Shapiro AJ, Wright FA, Woodruff TJ, Chiu WA, Guyton KZ, Rusyn I. 2014. Standardizing benchmark dose calculations to improve science-based decisions in human health assessments. Environ Health
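
    A minimal illustration of the BMD concept used in the paper: fit a dose-response model and solve for the dose whose predicted response departs from background by one standard deviation. The linear model, synthetic data, and use of the residual SD are simplifications; real BMD workflows fit several candidate models and also report the lower confidence limit (BMDL).

```python
# Benchmark dose (BMD) sketch: the dose giving a change in mean response equal
# to 1 SD, from a straight-line fit to synthetic data (illustrative only; real
# assessments fit multiple models and compute BMDL confidence limits too).
import numpy as np

dose = np.array([0.0, 0.0, 10.0, 10.0, 30.0, 30.0, 100.0, 100.0])
resp = np.array([5.1, 4.9, 5.6, 5.8, 6.9, 6.6, 10.2, 9.8])   # made-up responses

slope, intercept = np.polyfit(dose, resp, 1)                  # linear fit
resid_sd = np.std(resp - (slope * dose + intercept), ddof=2)  # SD stand-in

bmr = resid_sd                        # benchmark response: a 1 SD shift in mean
bmd_1sd = bmr / abs(slope)            # dose at which the shift equals 1 SD
print(f"slope = {slope:.4f} per unit dose, BMD(1 SD) = {bmd_1sd:.2f} dose units")
```

    Standardizing exactly these choices (model family, benchmark response definition, and fit acceptance criteria) across hundreds of data sets is what the paper's batch procedure accomplishes.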

  13. A comparison of global optimization algorithms with standard benchmark functions and real-world applications using Energy Plus

    Energy Technology Data Exchange (ETDEWEB)

    Kamph, Jerome Henri; Robinson, Darren; Wetter, Michael

    2009-09-01

    There is an increasing interest in the use of computer algorithms to identify combinations of parameters which optimise the energy performance of buildings. For such problems, the objective function can be multi-modal and needs to be approximated numerically using building energy simulation programs. As these programs contain iterative solution algorithms, they introduce discontinuities in the numerical approximation to the objective function. Metaheuristics often work well for such problems, but their convergence to a global optimum cannot be established formally. Moreover, different algorithms tend to be suited to particular classes of optimization problems. To shed light on this issue we compared the performance of two metaheuristics, the hybrid CMA-ES/HDE and the hybrid PSO/HJ, in minimizing standard benchmark functions and real-world building energy optimization problems of varying complexity. From this we find that the CMA-ES/HDE performs well on more complex objective functions, but that the PSO/HJ more consistently identifies the global minimum for simpler objective functions. Both identified similar values in the objective functions arising from energy simulations, but with different combinations of model parameters. This may suggest that the objective function is multi-modal. The algorithms also correctly identified some non-intuitive parameter combinations that were caused by a simplified control sequence of the building energy system that does not represent actual practice, further reinforcing their utility.
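    As a concrete miniature of "minimizing standard benchmark functions", the sketch below runs SciPy's differential evolution (a stand-in metaheuristic, not the paper's hybrid CMA-ES/HDE or PSO/HJ) on the multi-modal Rastrigin function:

```python
# Benchmark a metaheuristic on a standard multi-modal test function.
import numpy as np
from scipy.optimize import differential_evolution

def rastrigin(x):
    # Global minimum f(0, ..., 0) = 0; many surrounding local minima.
    x = np.asarray(x)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

bounds = [(-5.12, 5.12)] * 5                  # 5-dimensional search space
result = differential_evolution(rastrigin, bounds, seed=1, tol=1e-8)
print(result.x, result.fun)                   # should approach the origin
```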

  14. BIGHORN Computational Fluid Dynamics Theory, Methodology, and Code Verification & Validation Benchmark Problems

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Yidong [Idaho National Lab. (INL), Idaho Falls, ID (United States); Andrs, David [Idaho National Lab. (INL), Idaho Falls, ID (United States); Martineau, Richard Charles [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-08-01

    This document presents the theoretical background for a hybrid finite-element / finite-volume fluid flow solver, namely BIGHORN, based on the Multiphysics Object Oriented Simulation Environment (MOOSE) computational framework developed at the Idaho National Laboratory (INL). An overview of the numerical methods used in BIGHORN is given, followed by a presentation of the formulation details. The document begins with the governing equations for the compressible fluid flow, with an outline of the requisite constitutive relations. A second-order finite volume method used for solving the compressible fluid flow problems is presented next. A Pressure-Corrected Implicit Continuous-fluid Eulerian (PCICE) formulation for time integration is also presented. A multi-fluid formulation is under development; although it is not yet complete, BIGHORN has been designed to handle multi-fluid problems. Due to the flexibility in the underlying MOOSE framework, BIGHORN is quite extensible, and can accommodate both multi-species and multi-phase formulations. This document also presents a suite of verification & validation benchmark test problems for BIGHORN. The intent for this suite of problems is to provide baseline comparison data that demonstrate the performance of the BIGHORN solution methods on problems that vary in complexity from laminar to turbulent flows. Wherever possible, some form of solution verification has been attempted to identify sensitivities in the solution methods, and suggest best practices when using BIGHORN.
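    For readers unfamiliar with the finite-volume bookkeeping that such solvers build on, here is a generic one-dimensional sketch (first-order upwind for linear advection); it illustrates only the cell-average/face-flux update, not BIGHORN's second-order compressible-flow discretization or the PCICE time integration:

```python
# Generic finite-volume update: cell averages change by the net face flux.
# First-order upwind for u_t + a u_x = 0 on a periodic domain.
import numpy as np

a, nx, L = 1.0, 200, 1.0
dx = L / nx
dt = 0.5 * dx / a                        # CFL number 0.5
x = (np.arange(nx) + 0.5) * dx
u = np.exp(-200.0 * (x - 0.25) ** 2)     # initial cell averages

for _ in range(200):
    f_left = a * np.roll(u, 1)           # upwind flux at each cell's left face
    f_right = a * u                      # upwind flux at each cell's right face
    u -= dt / dx * (f_right - f_left)
```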

  15. Dependability of technical items: Problems of standardization

    Science.gov (United States)

    Fedotova, G. A.; Voropai, N. I.; Kovalev, G. F.

    2016-12-01

    This paper is concerned with problems that arose in the development of a new version of the Interstate Standard GOST 27.002 "Industrial product dependability. Terms and definitions". This Standard covers a wide range of technical items and is used in numerous regulations, specifications, and standard and technical documentation. The currently effective State Standard GOST 27.002-89 was introduced in 1990. Its development involved the participation of scientists and experts from different technical areas; its draft was debated before different audiences and constantly refined, so it was a high-quality document. However, after 25 years of its application it has become necessary to develop a new version of the Standard that reflects the current understanding of industrial dependability, accounting for the changes taking place in Russia in the production, management and development of various technical systems and facilities. The development of a new version of the Standard makes it possible to generalize, on a terminological level, the knowledge and experience in the area of reliability of technical items accumulated over a quarter of a century in different industries and reliability research schools, and to account for domestic and foreign experience of standardization. Working on the new version of the Standard, we faced a number of issues and problems in harmonizing with the International Standard IEC 60050-192, caused first of all by different approaches to the use of terms and by differences in the mentalities of experts from different countries. The paper focuses on the problems related to the chapter "Maintenance, restoration and repair", whose term definitions were difficult for the developers to harmonize both with experts and with the International Standard, mainly because the Russian concept and practice of maintenance and repair differ from foreign ones.

  16. Comparison of typical inelastic analysis predictions with benchmark problem experimental results

    International Nuclear Information System (INIS)

    Clinard, J.A.; Corum, J.M.; Sartory, W.K.

    1975-01-01

    The results of exemplary inelastic analyses are presented for a series of experimental benchmark problems. Consistent analytical procedures and constitutive relations were used in each of the analyses, and published material behavior data were used in all cases. Two finite-element inelastic computer programs were employed. These programs implement the analysis procedures and constitutive equations for Type 304 stainless steel that are currently used in many analyses of elevated-temperature nuclear reactor system components. The analysis procedures and constitutive relations are briefly discussed, and representative analytical results are presented and compared to the test data. The results that are presented demonstrate the feasibility of performing inelastic analyses, and they are indicative of the general level of agreement that the analyst might expect when using conventional inelastic analysis procedures. (U.S.)

  17. Non-standard and improperly posed problems

    CERN Document Server

    Straughan, Brian; Ames, William F

    1997-01-01

    Written by two international experts in the field, this book is the first unified survey of the advances made in the last 15 years on key non-standard and improperly posed problems for partial differential equations. This reference for mathematicians, scientists, and engineers provides an overview of the methodology typically used to study improperly posed problems. It focuses on structural stability--the continuous dependence of solutions on the initial conditions and the modeling equations--and on problems for which data are only prescribed on part of the boundary. The book addresses continuou

  18. Benchmarking LES with wall-functions and RANS for fatigue problems in thermal–hydraulics systems

    Energy Technology Data Exchange (ETDEWEB)

    Tunstall, R., E-mail: ryan.tunstall@manchester.ac.uk [School of MACE, The University of Manchester, Manchester M13 9PL (United Kingdom); Laurence, D.; Prosser, R. [School of MACE, The University of Manchester, Manchester M13 9PL (United Kingdom); Skillen, A. [Scientific Computing Department, STFC Daresbury Laboratory, Warrington WA4 4AD (United Kingdom)

    2016-11-15

    Highlights: • We benchmark LES with blended wall-functions and low-Re RANS for a pipe bend and T-Junction. • Blended wall-laws allow the first cell from the wall to be placed anywhere in the boundary layer. • In both cases LES predictions improve as the first cell wall spacing is reduced. • Near-wall temperature fluctuations in the T-Junction are overpredicted by wall-modelled LES. • The EBRSM outperforms other RANS models for the pipe bend. - Abstract: In assessing whether nuclear plant components such as T-Junctions are likely to suffer thermal fatigue problems in service, CFD techniques need to provide accurate predictions for wall temperature fluctuations. Though it has been established that this is within the capabilities of wall-resolved LES, its high computational cost has prevented widespread usage in industry. In the present paper the suitability of LES with blended wall-functions, that allow the first cell to be placed in any part of the boundary layer, is assessed. Numerical results for the flows through a 90° pipe bend and a T-Junction are compared against experimental data. Both test cases contain areas where equilibrium laws are violated in practice. It is shown that reducing the first cell wall spacing improves agreement with experimental data by limiting the extent from the wall in which the solution is constrained to an equilibrium law. The LES with wall-function approach consistently overpredicts the near-wall temperature fluctuations in the T-Junction, suggesting that it can be considered as a conservative approach. We also benchmark a range of low-Re RANS models. EBRSM predictions for the 90° pipe bend are in significantly better agreement with experimental data than those from the other models. There are discrepancies from all RANS models in the case of the T-Junction.
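    One widely used blended wall-law of the kind the highlights refer to is Reichardt's formula, which returns u+ continuously from the viscous sublayer through the log layer, so the first cell can legitimately sit anywhere in the boundary layer; whether the paper uses this exact blending is not stated, so treat the sketch as illustrative:

```python
# Reichardt's blended law of the wall: u+ as a smooth function of y+.
import numpy as np

KAPPA = 0.41  # von Karman constant

def u_plus(y_plus):
    return (np.log(1.0 + KAPPA * y_plus) / KAPPA
            + 7.8 * (1.0 - np.exp(-y_plus / 11.0)
                     - (y_plus / 11.0) * np.exp(-y_plus / 3.0)))

y_plus = np.array([1.0, 5.0, 30.0, 100.0])   # candidate first-cell positions
print(u_plus(y_plus))   # ~y+ very near the wall, approaches the log law beyond
```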

  19. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Directory of Open Access Journals (Sweden)

    Shane Ó Conchúir

    Full Text Available The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  20. Singlet extensions of the standard model at LHC Run 2: benchmarks and comparison with the NMSSM

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Raul [Centro de Física Teórica e Computacional, Faculdade de Ciências,Universidade de Lisboa, Campo Grande, Edifício C8 1749-016 Lisboa (Portugal); Departamento de Física da Universidade de Aveiro,Campus de Santiago, 3810-183 Aveiro (Portugal); Mühlleitner, Margarete [Institute for Theoretical Physics, Karlsruhe Institute of Technology,76128 Karlsruhe (Germany); Sampaio, Marco O.P. [Departamento de Física da Universidade de Aveiro,Campus de Santiago, 3810-183 Aveiro (Portugal); CIDMA - Center for Research Development in Mathematics and Applications,Campus de Santiago, 3810-183 Aveiro (Portugal); Santos, Rui [Centro de Física Teórica e Computacional, Faculdade de Ciências,Universidade de Lisboa, Campo Grande, Edifício C8 1749-016 Lisboa (Portugal); ISEL - Instituto Superior de Engenharia de Lisboa,Instituto Politécnico de Lisboa, 1959-007 Lisboa (Portugal)

    2016-06-07

    The Complex singlet extension of the Standard Model (CxSM) is the simplest extension that provides scenarios for Higgs pair production with different masses. The model has two interesting phases: the dark matter phase, with a Standard Model-like Higgs boson, a new scalar and a dark matter candidate; and the broken phase, with all three neutral scalars mixing. In the latter phase Higgs decays into a pair of two different Higgs bosons are possible. In this study we analyse Higgs-to-Higgs decays in the framework of singlet extensions of the Standard Model (SM), with focus on the CxSM. After demonstrating that scenarios with large rates for such chain decays are possible we perform a comparison between the NMSSM and the CxSM. We find that, based on Higgs-to-Higgs decays, the only possibility to distinguish the two models at the LHC run 2 is through final states with two different scalars. This conclusion builds a strong case for searches for final states with two different scalars at the LHC run 2. Finally, we propose a set of benchmark points for the real and complex singlet extensions to be tested at the LHC run 2. They have been chosen such that the discovery prospects of the involved scalars are maximised and they fulfil the dark matter constraints. Furthermore, for some of the points the theory is stable up to high energy scales. For the computation of the decay widths and branching ratios we developed the Fortran code sHDECAY, which is based on the implementation of the real and complex singlet extensions of the SM in HDECAY.

  1. Stakeholder insights on the planning and development of an independent benchmark standard for responsible food marketing.

    Science.gov (United States)

    Cairns, Georgina; Macdonald, Laura

    2016-06-01

    A mixed methods qualitative survey investigated stakeholder responses to the proposal to develop an independently defined, audited and certifiable set of benchmark standards for responsible food marketing. Its purpose was to inform the policy planning and development process. A majority of respondents were supportive of the proposal. A majority also viewed the engagement and collaboration of a broad base of stakeholders in its planning and development as potentially beneficial. Positive responses were associated with views that policy controls can and should be extended to include all form of marketing, that obesity and non-communicable diseases prevention and control was a shared responsibility and an urgent policy priority and prior experience of independent standardisation as a policy lever for good practice. Strong policy leadership, demonstrable utilisation of the evidence base in its development and deployment and a conceptually clear communications plan were identified as priority targets for future policy planning. Future research priorities include generating more evidence on the feasibility of developing an effective community of practice and theory of change, the strengths and limitations of these and developing an evidence-based step-wise communications strategy. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Smallest-Small-World Cellular Harmony Search for Optimization of Unconstrained Benchmark Problems

    Directory of Open Access Journals (Sweden)

    Sung Soo Im

    2013-01-01

    Full Text Available We present a new hybrid method that combines the cellular harmony search algorithm with the Smallest-Small-World theory. A harmony search (HS) algorithm is based on musical performance processes that occur when a musician searches for a better state of harmony. Harmony search has successfully been applied to a wide variety of practical optimization problems. Most previous research has sought to improve the performance of the HS algorithm by changing the pitch adjusting rate and the harmony memory considering rate. However, there has been a lack of studies that improve the performance of the algorithm through the formation of population structures. Therefore, we propose an improved HS algorithm that uses a cellular automata formation and the topological structure of a Smallest-Small-World network. The improved HS algorithm has a high clustering coefficient and a short characteristic path length, giving it good exploration and exploitation efficiencies. Nine benchmark functions were applied to evaluate the performance of the proposed algorithm. Unlike existing improved HS algorithms, the proposed algorithm is expected to gain algorithmic efficiency from the formation of its population structure.
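    The core improvisation loop of harmony search is compact enough to sketch; the version below keeps only the standard harmony-memory-considering (HMCR) and pitch-adjusting (PAR) mechanics on a toy objective, omitting the paper's cellular automata and Smallest-Small-World population structure:

```python
# Minimal harmony search on the sphere function (toy objective).
import random

def sphere(x):
    return sum(v * v for v in x)

DIM, HMS, HMCR, PAR, BW, LO, HI = 5, 10, 0.9, 0.3, 0.05, -5.0, 5.0
memory = [[random.uniform(LO, HI) for _ in range(DIM)] for _ in range(HMS)]

for _ in range(5000):
    new = []
    for d in range(DIM):
        if random.random() < HMCR:              # draw from harmony memory
            v = random.choice(memory)[d]
            if random.random() < PAR:           # pitch adjustment
                v = min(HI, max(LO, v + random.uniform(-BW, BW)))
        else:                                   # random re-initialization
            v = random.uniform(LO, HI)
        new.append(v)
    worst = max(range(HMS), key=lambda i: sphere(memory[i]))
    if sphere(new) < sphere(memory[worst]):     # replace the worst harmony
        memory[worst] = new

print(min(sphere(h) for h in memory))           # approaches 0
```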

  3. Validation of the AZTRAN 1.1 code with problems Benchmark of LWR reactors; Validacion del codigo AZTRAN 1.1 con problemas Benchmark de reactores LWR

    Energy Technology Data Exchange (ETDEWEB)

    Vallejo Q, J. A.; Bastida O, G. E.; Francois L, J. L. [UNAM, Facultad de Ingenieria, Departamento de Sistemas Energeticos, Ciudad Universitaria, 04510 Ciudad de Mexico (Mexico); Xolocostli M, J. V.; Gomez T, A. M., E-mail: amhed.jvq@gmail.com [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)

    2016-09-15

    The AZTRAN module is a computational program that is part of the AZTLAN platform (Mexican modeling platform for the analysis and design of nuclear reactors) and that solves the neutron transport equation in three dimensions using the discrete ordinates method S{sub N}, in steady state and Cartesian geometry. As part of the activities of Working Group 4 (users group) of the AZTLAN project, this work validates the AZTRAN code using the 2002 Yamamoto Benchmark for LWR reactors. For comparison, the commercial code CASMO-4 and the free code Serpent-2 are used; in addition, the results are compared with the data obtained from an article of the PHYSOR 2002 conference. The benchmark consists of a fuel pin, two UO{sub 2} cells and two MOX cells; each cell problem is specified for both PWR and BWR reactor types. Although the AZTRAN code is at an early stage of development, the results obtained are encouraging and close to those reported with other internationally accepted codes and methodologies. (Author)

  4. Standard problems for structural computer codes

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Miller, C.A.; Costantino, C.J.

    1985-01-01

    BNL is investigating the ranges of validity of the analytical methods used to predict the behavior of nuclear safety related structures under accidental and extreme environmental loadings. During FY 85, the investigations were concentrated on special problems that can significantly influence the outcome of the soil structure interaction evaluation process. Specially, limitations and applicability of the standard interaction methods when dealing with lift-off, layering and water table effects, were investigated. This paper describes the work and the results obtained during FY 85 from the studies on lift-off, layering and water-table effects in soil-structure interaction

  5. Computational benchmark problems: a review of recent work within the American Nuclear Society Mathematics and Computation Division

    International Nuclear Information System (INIS)

    Dodds, H.L. Jr.

    1977-01-01

    An overview of the recent accomplishments of the Computational Benchmark Problems Committee of the American Nuclear Society Mathematics and Computation Division is presented. Solutions of computational benchmark problems in the following eight areas are presented and discussed: (a) high-temperature gas-cooled reactor neutronics, (b) pressurized water reactor (PWR) thermal hydraulics, (c) PWR neutronics, (d) neutron transport in a cylindrical ''black'' rod, (e) neutron transport in a boiling water reactor (BWR) rod bundle, (f) BWR transient neutronics with thermal feedback, (g) neutron depletion in a heavy water reactor, and (h) heavy water reactor transient neutronics. It is concluded that these problems and solutions are of considerable value to the nuclear industry because they have been and will continue to be useful in the development, evaluation, and verification of computer codes and numerical-solution methods

  6. Generalizable open source urban water portfolio simulation framework demonstrated using a multi-objective risk-based planning benchmark problem.

    Science.gov (United States)

    Trindade, B. C.; Reed, P. M.

    2017-12-01

    The growing access and reduced cost for computing power in recent years have promoted rapid development and application of multi-objective water supply portfolio planning. As this trend continues there is a pressing need for flexible risk-based simulation frameworks and improved algorithm benchmarking for emerging classes of water supply planning and management problems. This work contributes the Water Utilities Management and Planning (WUMP) model: a generalizable and open source simulation framework designed to capture how water utilities can minimize operational and financial risks by regionally coordinating planning and management choices, i.e. making more efficient and coordinated use of restrictions, water transfers and financial hedging combined with possible construction of new infrastructure. We introduce the WUMP simulation framework as part of a new multi-objective benchmark problem for planning and management of regionally integrated water utility companies. In this problem, a group of fictitious water utilities seek to balance the use of the aforementioned reliability-driven actions (e.g., restrictions, water transfers and infrastructure pathways) and their inherent financial risks. Several traits of this problem make it ideal for a benchmark problem, namely the presence of (1) strong non-linearities and discontinuities in the Pareto front caused by the step-wise nature of the decision making formulation and by the abrupt addition of storage through infrastructure construction, (2) noise due to the stochastic nature of the streamflows and water demands, and (3) non-separability resulting from the cooperative formulation of the problem, in which decisions made by one stakeholder may substantially impact others. Both the open source WUMP simulation framework and its demonstration in a challenging benchmarking example hold value for promoting broader advances in urban water supply portfolio planning for regions confronting change.

  7. Reference and standard benchmark field consensus fission yields for U.S. reactor dosimetry programs

    International Nuclear Information System (INIS)

    Gilliam, D.M.; Helmer, R.G.; Greenwood, R.C.; Rogers, J.W.; Heinrich, R.R.; Popek, R.J.; Kellogg, L.S.; Lippincott, E.P.; Hansen, G.E.; Zimmer, W.H.

    1977-01-01

    Measured fission product yields are reported for three benchmark neutron fields--the BIG-10 fast critical assembly at Los Alamos, the CFRMF fast neutron cavity at INEL, and the thermal column of the NBS Research Reactor. These measurements were carried out by participants in the Interlaboratory LMFBR Reaction Rates (ILRR) program. Fission product generation rates were determined by post-irradiation analysis of gamma-ray emission from fission activation foils. The gamma counting was performed by Ge(Li) spectrometry at INEL, ANL, and HEDL; the sample sent to INEL was also analyzed by NaI(Tl) spectrometry for Ba-140 content. The fission rates were determined by means of the NBS Double Fission Ionization Chamber using thin deposits of each of the fissionable isotopes. Four fissionable isotopes were included in the fast neutron field measurements; these were U-235, U-238, Pu-239, and Np-237. Only U-235 was included in the thermal neutron yield measurements. For the fast neutron fields, consensus yields were determined for three fission product isotopes--Zr-95, Ru-103, and Ba-140. For these fission product isotopes, a separately activated foil was analyzed by each of the three gamma counting laboratories. The experimental standard deviation of the three independent results was typically +- 1.5%. For the thermal neutron field, a consensus value for the Cs-137 yield was also obtained. Subsidiary fission yields are also reported for other isotopes which were studied less intensively (usually by only one of the participating laboratories). Comparisons with EBR-II fast reactor yields from destructive analysis and with ENDF/B recommended values are given

  8. Neutron transmission benchmark problems for iron and concrete shields in low, intermediate and high energy proton accelerator facilities

    International Nuclear Information System (INIS)

    Nakane, Yoshihiro; Sakamoto, Yukio; Hayashi, Katsumi

    1996-09-01

    Benchmark problems were prepared for evaluating the calculation codes and the nuclear data for accelerator shielding design by the Accelerator Shielding Working Group of the Research Committee on Reactor Physics in JAERI. Four benchmark problems: transmission of quasi-monoenergetic neutrons generated by 43 MeV and 68 MeV protons through iron and concrete shields at TIARA of JAERI, neutron fluxes in and around an iron beam stop irradiated by 500 MeV protons at KEK, reaction rate distributions inside a thick concrete shield irradiated by 6.2 GeV protons at LBL, and neutron and hadron fluxes inside an iron beam stop irradiated by 24 GeV protons at CERN are compiled in this document. Calculational configurations and neutron reaction cross section data up to 500 MeV are provided. (author)

  9. PEPSI deep spectra. II. Gaia benchmark stars and other M-K standards

    Science.gov (United States)

    Strassmeier, K. G.; Ilyin, I.; Weber, M.

    2018-04-01

    Context. High-resolution échelle spectra confine many essential stellar parameters once the data reach a quality appropriate to constrain the various physical processes that form these spectra. Aim. We provide a homogeneous library of high-resolution, high-S/N spectra for 48 bright AFGKM stars, some of them approaching the quality of solar-flux spectra. Our sample includes the northern Gaia benchmark stars, some solar analogs, and some other bright Morgan-Keenan (M-K) spectral standards. Methods: Well-exposed deep spectra were created by average-combining individual exposures. The data-reduction process relies on adaptive selection of parameters by using statistical inference and robust estimators. We employed spectrum synthesis techniques and statistics tools in order to characterize the spectra and give a first quick look at some of the science cases possible. Results: With an average spectral resolution of R ≈ 220 000 (1.36 km s-1), a continuous wavelength coverage from 383 nm to 912 nm, and S/N of between 70:1 for the faintest star in the extreme blue and 6000:1 for the brightest star in the red, these spectra are now made public for further data mining and analysis. Preliminary results include new stellar parameters for 70 Vir and α Tau, the detection of the rare-earth element dysprosium and the heavy elements uranium, thorium and neodymium in several RGB stars, and the use of the 12C to 13C isotope ratio for age-related determinations. We also found Arcturus to exhibit few-percent Ca II H&K and Hα residual profile changes with respect to the KPNO atlas taken in 1999. Based on data acquired with PEPSI using the Large Binocular Telescope (LBT) and the Vatican Advanced Technology Telescope (VATT). The LBT is an international collaboration among institutions in the United States, Italy, and Germany. LBT Corporation partners are the University of Arizona on behalf of the Arizona university system; Istituto Nazionale di Astrofisica, Italy; LBT

  10. Can consistent benchmarking within a standardized pain management concept decrease postoperative pain after total hip arthroplasty? A prospective cohort study including 367 patients

    Directory of Open Access Journals (Sweden)

    Benditz A

    2016-12-01

    Full Text Available Achim Benditz,1 Felix Greimel,1 Patrick Auer,2 Florian Zeman,3 Antje Göttermann,4 Joachim Grifka,1 Winfried Meissner,4 Frederik von Kunow1 1Department of Orthopedics, University Medical Center Regensburg, 2Clinic for anesthesia, Asklepios Klinikum Bad Abbach, Bad Abbach, 3Centre for Clinical Studies, University Medical Center Regensburg, Regensburg, 4Department of Anesthesiology and Intensive Care, Jena University Hospital, Jena, Germany Background: The number of total hip replacement surgeries has steadily increased over recent years. Reduction in postoperative pain increases patient satisfaction and enables better mobilization. Thus, pain management needs to be continuously improved. Problems are often caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. Methods: All patients included in the study had undergone total hip arthroplasty (THA). Outcome parameters were analyzed 24 hours after surgery by means of the questionnaires from the German-wide project "Quality Improvement in Postoperative Pain Management" (QUIPS). A pain nurse interviewed patients and continuously assessed outcome quality parameters. A multidisciplinary team of anesthetists, orthopedic surgeons, and nurses implemented a regular procedure of data analysis and internal benchmarking. The health care team was informed of any results, and suggested improvements. Every staff member involved in pain management participated in educational lessons, and a special pain nurse was trained in each ward. Results: From 2014 to 2015, 367 patients were included. The mean maximal pain score 24 hours after surgery was 4.0 (±3.0) on an 11-point numeric rating scale, and patient satisfaction was 9.0 (±1.2). Over time, the maximum pain score decreased (mean 3.0, ±2.0), whereas patient satisfaction

  11. Comparative analysis of results between CASMO, MCNP and Serpent for a suite of Benchmark problems on BWR reactors

    International Nuclear Information System (INIS)

    Xolocostli M, J. V.; Vargas E, S.; Gomez T, A. M.; Reyes F, M. del C.; Del Valle G, E.

    2014-10-01

    In this paper the CASMO-4, MCNP6 and Serpent codes are compared in analyzing a suite of benchmark problems for BWR reactors. The benchmark consists of two different geometries: a fuel pin cell and a BWR assembly. To facilitate the study of reactor physics in the fuel pin, its nuclear characteristics are provided in detail, such as burnup dependence and the reactivity of selected nuclides. With respect to the fuel assembly, the results presented concern the infinite multiplication factor at different burnup steps and different void conditions. The analysis of this set of benchmark problems provides comprehensive test problems for the next generation of BWR fuels with highly extended burnup. It is important to note that the purpose of this comparison is to validate the methodologies used in modeling different operating conditions, as would be the case for another BWR assembly. The results will lie within a range with some uncertainty, regardless of the code that is used. The Escuela Superior de Fisica y Matematicas of the Instituto Politecnico Nacional (IPN, Mexico) has accumulated some experience in using Serpent, due to the potential of this code over other commercial codes such as CASMO and MCNP. The results obtained for the infinite multiplication factor are encouraging and motivate continuing these studies with the generation of the cross sections of a core, so that in a next step a corresponding nuclear data library can be constructed and used by the codes developed as part of the development project of the Mexican nuclear reactor analysis platform AZTLAN. (Author)

  12. MC21/CTF and VERA multiphysics solutions to VERA core physics benchmark progression problems 6 and 7

    Directory of Open Access Journals (Sweden)

    Daniel J. Kelly, III

    2017-09-01

    Full Text Available The continuous energy Monte Carlo neutron transport code, MC21, was coupled to the CTF subchannel thermal-hydraulics code using a combination of Consortium for Advanced Simulation of Light Water Reactors (CASL) tools and in-house Python scripts. An MC21/CTF solution for VERA Core Physics Benchmark Progression Problem 6 demonstrated good agreement with MC21/COBRA-IE and VERA solutions. The MC21/CTF solution for VERA Core Physics Benchmark Progression Problem 7, Watts Bar Unit 1 at beginning of cycle hot full power equilibrium xenon conditions, is the first published coupled Monte Carlo neutronics/subchannel T-H solution for this problem. MC21/CTF predicted a critical boron concentration of 854.5 ppm, yielding a critical eigenvalue of 0.99994 ± 6.8E-6 (95% confidence interval). Excellent agreement with a VERA solution of Problem 7 was also demonstrated for integral and local power and temperature parameters.
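    The operator-split coupling described here (neutronics supplying a power distribution, the subchannel code returning temperatures) can be pictured as a Picard iteration; the two solver functions below are hypothetical placeholders with toy physics, not the MC21 or CTF interfaces:

```python
# Schematic Picard-style neutronics / thermal-hydraulics coupling loop.
import numpy as np

def solve_neutronics(fuel_temp):
    # Placeholder: power falls mildly as fuel temperature rises (Doppler-like).
    return 1.0 / (1.0 + 1e-4 * (fuel_temp - 600.0))

def solve_thermal_hydraulics(power):
    # Placeholder: fuel temperature rises with local power.
    return 600.0 + 400.0 * power

power = np.ones(10)                        # initial guess over 10 nodes
for it in range(50):
    temp = solve_thermal_hydraulics(power)
    new_power = solve_neutronics(temp)
    if np.max(np.abs(new_power - power)) < 1e-6:   # converged fixed point
        break
    power = 0.5 * power + 0.5 * new_power          # under-relaxation
```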

  13. VALIDATION OF FULL CORE GEOMETRY MODEL OF THE NODAL3 CODE IN THE PWR TRANSIENT BENCHMARK PROBLEMS

    Directory of Open Access Journals (Sweden)

    Tagor Malem Sembiring

    2015-10-01

    Full Text Available The coupled neutronic and thermal-hydraulic (T/H) code NODAL3 has been validated against several PWR static benchmarks and the NEACRP PWR transient benchmark cases. However, the code had not yet been validated for the transient benchmark cases of a control rod (CR) assembly ejection at the core periphery using a full core geometry model, the C1 and C2 cases. This research work validates the accuracy of the NODAL3 code for the case of one CR ejection, or the ejection of an unsymmetrical group of CRs. The calculations with the NODAL3 code were carried out by the adiabatic method (AM) and the improved quasistatic method (IQS). All transient parameters calculated by the NODAL3 code were compared with the reference results from the PANTHER code. The maximum relative difference, 16%, occurs in the calculated time of maximum power when using the IQS method, while the relative difference of the AM method is 4% for the C2 case. The calculation results show no systematic difference, indicating that the neutronic and T/H modules adopted in the code are correct. Therefore, all results obtained with the NODAL3 code are in very good agreement with the reference results. Keywords: nodal method, coupled neutronic and thermal-hydraulic code, PWR, transient case, control rod ejection.
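    For context, the improved quasistatic method rests on the standard factorization of the flux into a rapidly varying amplitude and a slowly varying shape, made unique by a normalization constraint (textbook form, not specific to NODAL3); the adiabatic method additionally drops the time derivative of the shape:

```latex
\phi(\mathbf{r},E,t) = p(t)\,\psi(\mathbf{r},E,t),
\qquad
\frac{d}{dt}\int \frac{\phi_0^{\dagger}(\mathbf{r},E)\,\psi(\mathbf{r},E,t)}{v(E)}\,d\mathbf{r}\,dE = 0,
```

    where p(t) obeys point-kinetics-like amplitude equations solved on a fine time grid and the shape ψ is recomputed on a coarser one.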

  14. Computational shielding benchmarks

    International Nuclear Information System (INIS)

    The American Nuclear Society Standards Committee 6.2.1 is engaged in the documentation of radiation transport problems and their solutions. The primary objective of this effort is to test computational methods used within the international shielding community. Dissemination of benchmarks will, it is hoped, accomplish several goals: (1) Focus attention on problems whose solutions represent state-of-the-art methodology for representative transport problems of generic interest; (2) Specification of standard problems makes comparisons of alternate computational methods, including use of approximate vs. ''exact'' computer codes, more meaningful; (3) Comparison with experimental data may suggest improvements in computer codes and/or associated data sets; (4) Test reliability of new methods as they are introduced for the solution of specific problems; (5) Verify user ability to apply a given computational method; and (6) Verify status of a computer program being converted for use on a different computer (e.g., CDC vs IBM) or facility

  15. Validation and Comparison of 2D and 3D Codes for Nearshore Motion of Long Waves Using Benchmark Problems

    Science.gov (United States)

    Velioǧlu, Deniz; Cevdet Yalçıner, Ahmet; Zaytsev, Andrey

    2016-04-01

    Tsunamis are huge waves with long wave periods and wave lengths that can cause great devastation and loss of life when they strike a coast. The interest in experimental and numerical modeling of tsunami propagation and inundation increased considerably after the 2011 Great East Japan earthquake. In this study, two numerical codes, FLOW 3D and NAMI DANCE, that analyze tsunami propagation and inundation patterns are considered. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving three-dimensional Navier-Stokes (3D-NS) equations. NAMI DANCE uses a finite difference computational method to solve 2D depth-averaged linear and nonlinear forms of shallow water equations (NSWE) in long wave problems, specifically tsunamis. In order to validate these two codes and analyze the differences between 3D-NS and 2D depth-averaged NSWE equations, two benchmark problems are applied. One benchmark problem investigates the runup of long waves over a complex 3D beach. The experimental setup is a 1:400 scale model of Monai Valley located on the west coast of Okushiri Island, Japan. The other benchmark problem was discussed at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) annual meeting in Portland, USA. It is a field dataset, recording the Japan 2011 tsunami in Hilo Harbor, Hawaii. The computed water surface elevation and velocity data are compared with the measured data. The comparisons showed that both codes are in fairly good agreement with each other and the benchmark data. The differences between 3D-NS and 2D depth-averaged NSWE equations are highlighted. All results are presented with discussions and comparisons. Acknowledgements: Partial support by Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in Marmara Region (JICA SATREPS - MarDiM Project), 603839 ASTARTE Project of EU, UDAP-C-12-14 project of AFAD Turkey, 108Y227, 113M556 and 213M534 projects of TUBITAK Turkey, RAPSODI (CONCERT_Dis-021) of CONCERT
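    For reference, the 2D depth-averaged nonlinear shallow water equations of the kind solved by NAMI DANCE take the following form (bottom friction and Coriolis terms omitted in this sketch), with η the free-surface elevation, h the total water depth, g gravity, and (u, v) the depth-averaged velocities; FLOW 3D instead retains the full vertical structure of the 3D-NS equations:

```latex
\frac{\partial \eta}{\partial t}
  + \frac{\partial (hu)}{\partial x}
  + \frac{\partial (hv)}{\partial y} = 0, \qquad
\frac{\partial u}{\partial t}
  + u\frac{\partial u}{\partial x}
  + v\frac{\partial u}{\partial y}
  + g\frac{\partial \eta}{\partial x} = 0, \qquad
\frac{\partial v}{\partial t}
  + u\frac{\partial v}{\partial x}
  + v\frac{\partial v}{\partial y}
  + g\frac{\partial \eta}{\partial y} = 0.
```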

  16. Variation in assessment and standard setting practices across UK undergraduate medicine and the need for a benchmark.

    Science.gov (United States)

    MacDougall, Margaret

    2015-10-31

    The principal aim of this study is to provide an account of variation in UK undergraduate medical assessment styles and corresponding standard setting approaches with a view to highlighting the importance of a UK national licensing exam in recognizing a common standard. Using a secure online survey system, response data were collected during the period 13 - 30 January 2014 from selected specialists in medical education assessment, who served as representatives for their respective medical schools. Assessment styles and corresponding choices of standard setting methods vary markedly across UK medical schools. While there is considerable consensus on the application of compensatory approaches, individual schools display their own nuances through use of hybrid assessment and standard setting styles, uptake of less popular standard setting techniques and divided views on norm referencing. The extent of variation in assessment and standard setting practices across UK medical schools validates the concern that there is a lack of evidence that UK medical students achieve a common standard on graduation. A national licensing exam is therefore a viable option for benchmarking the performance of all UK undergraduate medical students.

  17. Mesoscale Benchmark Demonstration Problem 1: Mesoscale Simulations of Intra-granular Fission Gas Bubbles in UO2 under Post-irradiation Thermal Annealing

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yulan; Hu, Shenyang Y.; Montgomery, Robert; Gao, Fei; Sun, Xin; Tonks, Michael; Biner, Bullent; Millet, Paul; Tikare, Veena; Radhakrishnan, Balasubramaniam; Andersson , David

    2012-04-11

    A study was conducted to evaluate the capabilities of different numerical methods used to represent microstructure behavior at the mesoscale for irradiated material using an idealized benchmark problem. The purpose of the mesoscale benchmark problem was to provide a common basis to assess several mesoscale methods with the objective of identifying the strengths and areas of improvement in the predictive modeling of microstructure evolution. In this work, mesoscale models (phase-field, Potts, and kinetic Monte Carlo) developed by PNNL, INL, SNL, and ORNL were used to calculate the evolution kinetics of intra-granular fission gas bubbles in UO2 fuel under post-irradiation thermal annealing conditions. The benchmark problem was constructed to include important microstructural evolution mechanisms on the kinetics of intra-granular fission gas bubble behavior such as the atomic diffusion of Xe atoms, U vacancies, and O vacancies, the effect of vacancy capture and emission from defects, and the elastic interaction of non-equilibrium gas bubbles. An idealized set of assumptions was imposed on the benchmark problem to simplify the mechanisms considered. The capability and numerical efficiency of different models are compared against selected experimental and simulation results. These comparisons find that the phase-field methods, by the nature of the free energy formulation, are able to represent a larger subset of the mechanisms influencing the intra-granular bubble growth and coarsening mechanisms in the idealized benchmark problem as compared to the Potts and kinetic Monte Carlo methods. It is recognized that the mesoscale benchmark problem as formulated does not specifically highlight the strengths of the discrete particle modeling used in the Potts and kinetic Monte Carlo methods. Future efforts are recommended to construct increasingly more complex mesoscale benchmark problems to further verify and validate the predictive capabilities of the mesoscale modeling
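    A minimal phase-field flavor of this class of model is the Cahn-Hilliard equation for a single conserved field; the sketch below advances it in one dimension with a semi-implicit spectral step, and is meant only to illustrate the kind of evolution equation involved, not the benchmark's multi-species Xe/vacancy formulation:

```python
# 1-D Cahn-Hilliard: dc/dt = M * laplacian( f'(c) - kappa * laplacian(c) ),
# with double-well f'(c) = c^3 - c, advanced by a semi-implicit spectral step.
import numpy as np

N, dx, dt, M, kappa = 256, 1.0, 0.1, 1.0, 1.0
rng = np.random.default_rng(0)
c = 0.05 * (rng.random(N) - 0.5)           # small fluctuations about c = 0
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
k2, k4 = k**2, k**4

for _ in range(2000):
    mu_hat = np.fft.fft(c**3 - c)          # explicit part of the potential
    c_hat = (np.fft.fft(c) - dt * M * k2 * mu_hat) / (1.0 + dt * M * kappa * k4)
    c = np.real(np.fft.ifft(c_hat))        # field coarsens into domains
```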

  18. Validations of BWR nuclear design code using ABWR MOX numerical benchmark problems

    International Nuclear Information System (INIS)

    Takano, Shou; Sasagawa, Masaru; Yamana, Teppei; Ikehara, Tadashi; Yanagisawa, Naoki

    2017-01-01

    BWR core design code package (the HINES assembly code and the PANACH core simulator), being used for full MOX-ABWR core design, has been benchmarked against the high-fidelity numerical solutions as references, for the purpose of validating its capability of predicting the BWR core design parameters systematically from UO2 to 100% MOX cores. The reference solutions were created by whole core critical calculations using MCNPs with the precisely modeled ABWR cores both in hot and cold conditions at BOC and EOC of the equilibrium cycle. A Doppler-Broadening Rejection Correction (DCRB) implemented MCNP5-1.4 with ENDF/B-VII.0 was mainly used to evaluate the core design parameters, except for effective delayed neutron fraction (βeff) and prompt neutron lifetime (l) with MCNP6.1. The discrepancies in the results between the design codes HINES-PANACH and MCNPs for the core design parameters such as the bundle powers, hot pin powers, control rod worth, boron worth, void reactivity, Doppler reactivity, βeff and l are almost within target accuracy, leading to the conclusion that HINES-PANACH has sufficient fidelity for application to full MOX-ABWR core design. (author)

  19. Complexity evaluation of benchmark instances for the p-median problem

    NARCIS (Netherlands)

    Goldengorin, B.; Krushinsky, D.

    The paper is aimed at experimental evaluation of the complexity of the p-Median problem instances, defined by m x n cost matrices, from several of the most widely used libraries. The complexity is considered in terms of possible problem size reduction and preprocessing, irrespective of the solution
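    To make the objective concrete: a p-median instance asks for the p facility columns of the m x n cost matrix that minimize the sum of row-wise minima (each client served by its cheapest open facility). A brute-force sketch, workable only at toy sizes:

```python
# Exhaustive p-median solve on a tiny hypothetical cost matrix.
from itertools import combinations
import numpy as np

costs = np.array([[4, 2, 9],    # costs[i][j]: serving client i from site j
                  [1, 7, 3],
                  [6, 5, 2],
                  [8, 1, 4]])
p = 2

best = min(combinations(range(costs.shape[1]), p),
           key=lambda s: costs[:, list(s)].min(axis=1).sum())
print(best, costs[:, list(best)].min(axis=1).sum())   # -> (1, 2) with cost 8
```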

  20. Benchmarking the Naval Systems Engineering Guide Against the Industry Standards for Systems Engineering

    Science.gov (United States)

    2012-09-01

    are all using the same standard or comparable ones. While one would think that there is coherency among those contractors and others building and...American Scientists, 2011). Adaptation and modification are significant parts of SE thinking and methodology that need to be well used and available to... (The remainder of this record is a flattened coverage-matrix fragment rating criteria such as obsolescence, disposal standards, data gathering, and environmental concerns as "None", "Partial", or "Cover" across the compared standards.)

  1. Benchmarking the SPHINX and CTH shock physics codes for three problems in ballistics

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, L.T. [Naval Surface Warfare Center, Dahlgren, VA (United States); Hertel, E. [Sandia National Labs., Albuquerque, NM (United States); Schwalbe, L.; Wingate, C. [Los Alamos National Lab., NM (United States)

    1998-02-01

    The CTH Eulerian hydrocode and the SPHINX smooth particle hydrodynamics (SPH) code were used to model a shock tube, two long rod penetrations into semi-infinite steel targets, and a long rod penetration into a spaced plate array. The results were then compared to experimental data. Both SPHINX and CTH modeled the one-dimensional shock tube problem well. Both codes did a reasonable job in modeling the outcome of the axisymmetric rod impact problem. Neither code correctly reproduced the depth of penetration in both experiments. In the 3-D problem, both codes reasonably replicated the penetration of the rod through the first plate. After this, however, the predictions of both codes began to diverge from the results seen in the experiment. In terms of computer resources, the run times are problem dependent, and are discussed in the text.

  2. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  3. Benchmarking quantitative label-free LC-MS data processing workflows using a complex spiked proteomic standard dataset.

    Science.gov (United States)

    Ramus, Claire; Hovasse, Agnès; Marcellin, Marlène; Hesse, Anne-Marie; Mouton-Barbosa, Emmanuelle; Bouyssié, David; Vaca, Sebastian; Carapito, Christine; Chaoui, Karima; Bruley, Christophe; Garin, Jérôme; Cianférani, Sarah; Ferro, Myriam; Van Dorssaeler, Alain; Burlet-Schiltz, Odile; Schaeffer, Christine; Couté, Yohann; Gonzalez de Peredo, Anne

    2016-01-30

    Proteomic workflows based on nanoLC-MS/MS data-dependent-acquisition analysis have progressed tremendously in recent years. High-resolution and fast sequencing instruments have enabled the use of label-free quantitative methods, based either on spectral counting or on MS signal analysis, which appear as an attractive way to analyze differential protein expression in complex biological samples. However, the computational processing of the data for label-free quantification still remains a challenge. Here, we used a proteomic standard composed of an equimolar mixture of 48 human proteins (Sigma UPS1) spiked at different concentrations into a background of yeast cell lysate to benchmark several label-free quantitative workflows, involving different software packages developed in recent years. This experimental design allowed to finely assess their performances in terms of sensitivity and false discovery rate, by measuring the number of true and false-positive (respectively UPS1 or yeast background proteins found as differential). The spiked standard dataset has been deposited to the ProteomeXchange repository with the identifier PXD001819 and can be used to benchmark other label-free workflows, adjust software parameter settings, improve algorithms for extraction of the quantitative metrics from raw MS data, or evaluate downstream statistical methods. Bioinformatic pipelines for label-free quantitative analysis must be objectively evaluated in their ability to detect variant proteins with good sensitivity and low false discovery rate in large-scale proteomic studies. This can be done through the use of complex spiked samples, for which the "ground truth" of variant proteins is known, allowing a statistical evaluation of the performances of the data processing workflow. We provide here such a controlled standard dataset and used it to evaluate the performances of several label-free bioinformatics tools (including MaxQuant, Skyline, MFPaQ, IRMa-hEIDI and Scaffold) in
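    With a spiked ground truth, scoring a workflow reduces to set arithmetic over the proteins called differential; a sketch with hypothetical accession identifiers:

```python
# Score a label-free workflow against known spike-ins (hypothetical IDs).
def benchmark_calls(called_differential, spiked_proteins):
    tp = len(called_differential & spiked_proteins)   # spike-ins recovered
    fp = len(called_differential - spiked_proteins)   # background false hits
    sensitivity = tp / len(spiked_proteins)
    fdr = fp / max(len(called_differential), 1)
    return sensitivity, fdr

ups1 = {"P01112", "P06396", "P02768"}        # hypothetical subset of UPS1
calls = {"P01112", "P06396", "YGR192C"}      # one yeast false positive
print(benchmark_calls(calls, ups1))          # -> (0.666..., 0.333...)
```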

  4. Benchmarking the invariant embedding method against analytical solutions in model transport problems

    International Nuclear Information System (INIS)

    Malin, Wahlberg; Imre, Pazsit

    2005-01-01

    The purpose of this paper is to demonstrate the use of the invariant embedding method in a series of model transport problems, for which it is also possible to obtain an analytical solution. Due to the non-linear character of the embedding equations, their solution can only be obtained numerically. However, this can be done via a robust and effective iteration scheme. In return, the domain of applicability is far wider than the model problems investigated in this paper. The use of the invariant embedding method is demonstrated in three different areas. The first is the calculation of the energy spectrum of reflected (sputtered) particles from a multiplying medium, where the multiplication arises from recoil production. Both constant and energy dependent cross sections with a power law dependence were used in the calculations. The second application concerns the calculation of the path length distribution of reflected particles from a medium without multiplication. This is a relatively novel and unexpected application, since the embedding equations do not resolve the depth variable. The third application concerns the demonstration that solutions in an infinite medium and a half-space are interrelated through embedding-like integral equations, by the solution of which the reflected flux from a half-space can be reconstructed from solutions in an infinite medium or vice versa. In all cases the invariant embedding method proved to be robust, fast and monotonically converging to the exact solutions. (authors)

  5. Performance of MPI parallel processing implemented by MCNP5/ MCNPX for criticality benchmark problems

    International Nuclear Information System (INIS)

    Mark Dennis Usang; Mohd Hairie Rabir; Mohd Amin Sharifuldin Salleh; Mohamad Puad Abu

    2012-01-01

    MPI parallelism is implemented on a SUN workstation for running MCNPX and on the High Performance Computing Facility (HPC) for running MCNP5. 23 inputs obtained from the MCNP Criticality Validation Suite are utilized for the purpose of evaluating the amount of speed-up achievable by using the parallel capabilities of MPI. More importantly, we will study the economics of using more processors and the types of problem where the performance gains are obvious. This is important to enable better practices of resource sharing, especially of the HPC facility's processing time. Future endeavours in this direction might even reveal clues for best MCNP5/MCNPX coding practices for optimum performance of MPI parallelism. (author)
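    The figures of merit implied here are the usual speed-up and parallel efficiency; a small sketch with hypothetical wall times shows the diminishing returns behind the "economics" of adding processors:

```python
# Speed-up S = T(1)/T(n) and efficiency E = S/n for hypothetical run times.
def speedup_and_efficiency(t_serial, t_parallel, n_procs):
    s = t_serial / t_parallel
    return s, s / n_procs

# hypothetical wall times for one criticality input (serial time 1000 s)
for n, t in [(8, 160.0), (32, 55.0)]:
    s, e = speedup_and_efficiency(1000.0, t, n)
    print(f"n={n:2d}: speed-up {s:5.1f}, efficiency {e:.2f}")
```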

  6. Toward the rational use of standardized infection ratios to benchmark surgical site infections.

    Science.gov (United States)

    Fukuda, Haruhisa; Morikane, Keita; Kuroki, Manabu; Taniguchi, Shinichiro; Shinzato, Takashi; Sakamoto, Fumie; Okada, Kunihiko; Matsukawa, Hiroshi; Ieiri, Yuko; Hayashi, Kouji; Kawai, Shin

    2013-09-01

    The National Healthcare Safety Network transitioned from surgical site infection (SSI) rates to the standardized infection ratio (SIR), calculated by statistical models that included perioperative factors (surgical approach and surgery duration). Rationally, however, only patient-related variables should be included in the SIR model. Logistic regression was performed to predict the expected SSI rate in 2 models that included or excluded perioperative factors. Observed and expected SSI rates were used to calculate the SIR for each participating hospital. The difference in SIR between the models was then evaluated. Surveillance data were collected from a total of 1,530 colon surgery patients and 185 SSIs. The C-index in the model with perioperative factors was statistically greater than that in the model including patient-related factors only (0.701 vs 0.621, respectively). Because perioperative factors reflect the operative process or the competence of surgical teams, these factors should not be considered predictive variables. Copyright © 2013 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
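    The SIR arithmetic itself is simple once a model is fitted: the expected SSI count is the sum of per-patient predicted risks, and the SIR is observed over expected. A sketch with hypothetical risks from a patient-factors-only model:

```python
# Standardized infection ratio: observed SSIs over model-expected SSIs.
import numpy as np

def sir(observed_ssi, predicted_risks):
    expected = np.sum(predicted_risks)   # sum of per-patient probabilities
    return observed_ssi / expected

# hypothetical per-patient risks from a patient-factors-only logistic model
risks = np.array([0.05, 0.12, 0.08, 0.20, 0.10])
print(sir(observed_ssi=1, predicted_risks=risks))   # > 1: more SSIs than expected
```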

  7. Benchmarking the invariant embedding method against analytical solutions in model transport problems

    Directory of Open Access Journals (Sweden)

    Wahlberg Malin

    2006-01-01

    Full Text Available The purpose of this paper is to demonstrate the use of the invariant embedding method in a few model transport problems for which it is also possible to obtain an analytical solution. The use of the method is demonstrated in three different areas. The first is the calculation of the energy spectrum of sputtered particles from a scattering medium without absorption, where the multiplication (particle cascade) is generated by recoil production. Both constant and energy dependent cross-sections with a power law dependence were treated. The second application concerns the calculation of the path length distribution of reflected particles from a medium without multiplication. This is a relatively novel application, since the embedding equations do not resolve the depth variable. The third application concerns the demonstration that solutions in an infinite medium and in a half-space are interrelated through embedding-like integral equations, by the solution of which the flux reflected from a half-space can be reconstructed from solutions in an infinite medium or vice versa. In all cases, the invariant embedding method proved to be robust, fast, and monotonically converging to the exact solutions.

  8. Hospital website rankings in the United States: expanding benchmarks and standards for effective consumer engagement.

    Science.gov (United States)

    Huerta, Timothy R; Hefner, Jennifer L; Ford, Eric W; McAlearney, Ann Scheck; Menachemi, Nir

    2014-02-25

    From these scores, rank order calculations for the top 100 websites are presented. Additionally, a link to raw data, including AHA ID, is provided to enable researchers and practitioners to further explore relationships to other dynamics in health care. This census assessment of US hospitals and their health systems provides a clear indication of the state of the sector. While stakeholder engagement is core to most discussions of the role that hospitals must play in relation to communities, management of an online presence has not been recognized as a core competency fundamental to care delivery. Yet, social media management and network engagement are skills that exist at the confluence of marketing and technical prowess. This paper presents performance guidelines evaluated against best-demonstrated practice or independent standards to facilitate improvement of the sector's use of websites and social media.

  9. An XML format for benchmarks in High School Timetabling

    NARCIS (Netherlands)

    Post, Gerhard F.; Ahmadi, Samad; Daskalaki, Sophia; Kingston, Jeffrey H.; Kyngas, Jari; Nurmi, Cimmo; Ranson, David

    2012-01-01

    The High School Timetabling Problem is amongst the most widely used timetabling problems. This problem has varying structures in different high schools, even within the same country or educational system. Due to the lack of standard benchmarks and data formats, this problem has been studied less than other timetabling problems.
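
    The record describes an XML exchange format for timetabling benchmarks. As an illustration, a deliberately simplified, hypothetical instance file (element names loosely modelled on XHSTT conventions, not the actual schema) can be handled with the Python standard library:

```python
import xml.etree.ElementTree as ET

# A simplified, hypothetical instance; the real format is richer
# (Times, Resources, Events, Constraints, Solutions, ...).
doc = """<HighSchoolTimetableArchive>
  <Instances>
    <Instance Id="Toy">
      <Times><Time Id="Mon_1"/><Time Id="Mon_2"/></Times>
      <Events>
        <Event Id="Math" Duration="2"/>
        <Event Id="History" Duration="1"/>
      </Events>
    </Instance>
  </Instances>
</HighSchoolTimetableArchive>"""

root = ET.fromstring(doc)
for event in root.iter("Event"):
    print(event.get("Id"), "needs", event.get("Duration"), "time slot(s)")
```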

  10. Reactor dosimetry integral reaction rate data in LMFBR Benchmark and standard neutron fields: status, accuracy and implications

    International Nuclear Information System (INIS)

    Fabry, A.; Ceulemans, H.; Vandeplas, P.; McElroy, W.N.; Lippincott, E.P.

    1977-01-01

    This paper provides conclusions that may be drawn regarding the consistency and accuracy of dosimetry cross-section files on the basis of integral reaction rate data measured in U.S. and European benchmark and standard neutron fields. In a discussion of the major experimental facilities CFRMF (Idaho Falls), BIGTEN (Los Alamos), ΣΣ (Mol, Bucharest), NISUS (London), TAPIRO (Roma) and FISSION SPECTRA (NBS, Mol, PTB), attention is paid to quantifying the sensitivity of computed integral data relative to the presently evaluated accuracy of the various neutron spectral distributions. The status of available integral data is reviewed and the assigned uncertainties are appraised, including experience gained from interlaboratory comparisons. For all reactions studied and for the various neutron fields, the measured integral data are compared to those computed from the ENDF/B-IV and SAND-II dosimetry cross-section libraries, as well as to some other differential data in relevant cases. This comparison, together with the proposed sensitivity and accuracy assessments, is used, whenever possible, to establish how reliably the best cross sections evaluated on the basis of differential measurements (category I dosimetry reactions) predict integral reaction rates and, for those reactions for which discrepancies are indicated, in which energy range additional differential measurements might help. For the other reactions (category II), the inconsistencies and trends are examined. The need for further integral measurements and interlaboratory comparisons is also considered.
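
    The integral reaction rates discussed here are spectrum-averaged cross sections, and library consistency is usually quoted as a calculated-to-experimental (C/E) ratio. A minimal sketch follows, with every number purely illustrative:

```python
import numpy as np

# Synthetic data: a dosimetry cross section sigma(E) folded with a
# benchmark-field spectrum phi(E); all numbers are illustrative only.
E     = np.logspace(-3, 1, 50)              # energy grid [MeV]
sigma = 0.1 * np.sqrt(E)                    # cross section [barn]
phi   = E * np.exp(-E / 1.3)                # fission-like spectrum [a.u.]

def trapezoid(f, x):
    """Plain trapezoidal rule, avoiding version-specific numpy helpers."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

# Spectrum-averaged cross section = integral reaction rate per unit fluence.
calc = trapezoid(sigma * phi, E) / trapezoid(phi, E)

measured = 0.105                            # hypothetical measured value [barn]
print(f"C/E = {calc / measured:.3f}")       # library consistency indicator
```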

  11. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.

    1991-01-01

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: these calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems.

  12. Comparative analysis of results between CASMO, MCNP and Serpent for a suite of Benchmark problems on BWR reactors; Analisis comparativo de resultados entre CASMO, MCNP y SERPENT para una suite de problemas Benchmark en reactores BWR

    Energy Technology Data Exchange (ETDEWEB)

    Xolocostli M, J. V.; Vargas E, S.; Gomez T, A. M. [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Reyes F, M. del C.; Del Valle G, E., E-mail: vicente.xolocostli@inin.gob.mx [IPN, Escuela Superior de Fisica y Matematicas, UP - Adolfo Lopez Mateos, Edif. 9, 07738 Mexico D. F. (Mexico)

    2014-10-15

    This paper compares results from the CASMO-4, MCNP6 and Serpent codes for a suite of benchmark problems for BWR-type reactors. The benchmark suite comprises two different geometries: a single fuel pin cell and a BWR-type fuel assembly. To facilitate the study of reactor physics, the nuclear characteristics of the fuel pin are given in detail, such as the burnup dependence of the reactivity of selected nuclides. For the fuel assembly, results are reported for the infinite multiplication factor at different burnup steps and different void conditions. Analysing this set of benchmark problems provides comprehensive test problems for the next generation of BWR fuels with extended burnup. It is important to note that the purpose of this comparison is to validate the methodologies used in modelling different operating conditions, including the case of other BWR assemblies; the results will fall within a range with some uncertainty regardless of the code used. The Escuela Superior de Fisica y Matematicas of the Instituto Politecnico Nacional (IPN, Mexico) has accumulated experience in using Serpent, owing to the potential of this code over commercial codes such as CASMO and MCNP. The results obtained for the infinite multiplication factor are encouraging and motivate continued studies on the generation of cross sections for a full core, so that in a subsequent step a nuclear data library can be constructed for use by codes developed within the Mexican Platform for Analysis of Nuclear Reactors project, AZTLAN. (Author)
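
    Code-to-code differences in the infinite multiplication factor are conventionally quoted as reactivity differences in pcm. A minimal sketch with placeholder k-infinity values (not results from the paper):

```python
# Reactivity difference (in pcm) between two codes' infinite multiplication
# factors at each burnup step; the k-inf values below are placeholders.
def delta_rho_pcm(k_ref, k_test):
    return 1e5 * (1.0 / k_ref - 1.0 / k_test)

casmo   = [1.1321, 1.1150, 1.0894]   # hypothetical k-inf vs burnup
serpent = [1.1335, 1.1162, 1.0881]

for step, (k1, k2) in enumerate(zip(casmo, serpent)):
    print(f"burnup step {step}: {delta_rho_pcm(k1, k2):+.0f} pcm")
```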

  13. CSNI International Standard Problems (ISP). Brief descriptions (1975-1994)

    International Nuclear Information System (INIS)

    1994-07-01

    Between 1975 and 1994 the NEA Committee on the Safety of Nuclear Installations (CSNI) sponsored some forty International Standard Problems (ISPs) in the fields of in-vessel thermal-hydraulic behaviour, fuel behaviour under accident conditions, fission product release and transport, core/concrete interactions, hydrogen distribution and mixing, and containment thermal-hydraulics. ISPs are comparative exercises in which predictions of different computer codes for a given physical problem are compared with each other or with the results of a carefully controlled experimental study. The main goal of ISP exercises is to increase confidence in the validity and accuracy of tools which are used in assessing the safety of nuclear installations. Moreover, they enable code users to gain experience and demonstrate their competence. ISPs are performed as 'open' or 'blind' problems. In an open Standard Problem the results of the experiment are available to the participants before performing the calculations, while in a blind Standard Problem the results are locked until the calculational results are made available for comparison. Experiments selected to support ISP exercises are exceptionally well documented; they provide the framework for several code validation matrices. This report briefly describes 36 ISPs and 3 containment analysis standard problems (CASPs).

  14. Features of energy efficiency benchmarking implementation as tools of DSTU ISO 50001: 2014 for Ukrainian industrial enterprises

    Directory of Open Access Journals (Sweden)

    Анастасія Юріївна Данілкова

    2015-12-01

    The essence, types and stages of energy efficiency benchmarking in industrial enterprises are considered. Features, advantages, disadvantages and limitations on its use are defined, and the underlying problems that could affect the successful conduct of energy efficiency benchmarking in Ukrainian industrial enterprises are specified. Energy efficiency benchmarking is proposed as a tool for implementing the national standard DSTU ISO 50001:2014.

  15. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This data compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings.

  16. The hierarchy problem and Physics Beyond the Standard Model

    Indian Academy of Sciences (India)

    Fine-tuning has to be done order by order in perturbation theory. The hierarchy problem: what guarantees the stability of v against quantum fluctuations? ⇒ Physics Beyond the Standard Model. On the experimental side: dark matter, neutrino mass, matter-antimatter asymmetry, ... (Gautam Bhattacharyya, IASc Annual Meeting, IISER, ...)
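
    The quadratic cutoff sensitivity behind this fine-tuning argument is commonly summarised by the one-loop estimate (a standard textbook expression, not taken from this record):

$$
\delta m_H^2 \;\simeq\; \frac{3\Lambda^2}{8\pi^2 v^2}\left(m_H^2 + 2m_W^2 + m_Z^2 - 4m_t^2\right)
$$

    For a cutoff $\Lambda$ far above the electroweak scale $v$, keeping $m_H$ light then requires delicate order-by-order cancellations, which is the fine-tuning referred to above.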

  17. Intercomparison of the finite difference and nodal discrete ordinates and surface flux transport methods for a LWR pool-reactor benchmark problem in X-Y geometry

    International Nuclear Information System (INIS)

    O'Dell, R.D.; Stepanek, J.; Wagner, M.R.

    1983-01-01

    The aim of the present work is to compare and discuss three of the most advanced two-dimensional transport methods: the finite difference and nodal discrete ordinates methods and the surface flux method, incorporated into the transport codes TWODANT, TWOTRAN-NODAL, MULTIMEDIUM and SURCU. For the intercomparison, the eigenvalue and the neutron flux distribution are calculated with these codes for the LWR pool-reactor benchmark problem. Additionally, the results are compared with some results obtained by the French collision probability transport codes MARSYAS and TRIDENT. Because the transport solution of this benchmark problem is close to its diffusion solution, some results obtained with the finite element diffusion code FINELM and the finite difference diffusion code DIFF-2D are included.

  18. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    The term benchmarking is encountered in the implementation of total quality management (TQM), termed holistic quality management in Indonesian, because benchmarking is a tool for finding ideas or learning from other libraries. Benchmarking is a process of systematic and continuous measurement: measuring and comparing an organization's business processes in order to obtain information that can help the organization improve its performance.

  19. CSNI International standard problems (ISP): brief descriptions (1975-1997)

    International Nuclear Information System (INIS)

    1997-07-01

    Between 1975 and 1997 the NEA Committee on the Safety of Nuclear Installations (CSNI) sponsored more than forty International Standard Problems (ISPs) in the fields of in-vessel thermal-hydraulic behaviour, fuel behaviour under accident conditions, fission product release and transport, core/concrete interactions, hydrogen distribution and mixing, containment thermal-hydraulics, and iodine behaviour in the containment. ISPs are comparative exercises in which predictions of different computer codes for a given physical problem are compared with each other or with the results of a carefully controlled experimental study. The main goal of ISP exercises is to increase confidence in the validity and accuracy of the analytical tools or testing procedures which are needed in assessing the safety of nuclear installations, and to demonstrate the competence of the involved institutions. ISP exercises are performed as 'open' or 'blind' problems. The main characteristics of 41 ISPs completed between 1975 and 1997, and 3 containment analysis standard problems (CASPs), are briefly presented.

  20. CSNI International standard problems (ISP). Brief descriptions (1975-1999)

    International Nuclear Information System (INIS)

    2000-03-01

    Over the last twenty-five years the NEA Committee on the Safety of Nuclear Installations (CSNI) has sponsored a considerable number of international activities to promote the exchange of experience between its Member countries in the use of nuclear safety codes and testing materials. A primary goal of these activities is to increase confidence in the validity and accuracy of the analytical tools or testing procedures which are needed in assessing the safety of nuclear installations, and to demonstrate the competence of the involved institutions. International Standard Problem (ISP) exercises are comparative exercises in which predictions or recalculations of a given physical problem with different best-estimate computer codes are compared with each other and, above all, with the results of a carefully specified experimental study. ISP exercises are performed as 'open' or 'blind' problems. In an open Standard Problem exercise the results of the experiment are available to the participants before performing the calculations, while in a blind Standard Problem exercise the experimental results are locked until the calculation results are made available for comparison. The CSNI-promoted ISP activity started in the early 1970s and is still underway. Parallel to other national and international programs, the CSNI has sponsored forty-seven International Standard Problem exercises over more than 25 years. This program has been focused mainly on the applicability of large thermal-hydraulic code systems simulating the behaviour of nuclear coolant and containment systems, fuel behaviour under accident conditions, hydrogen distribution, core-concrete interactions, and fission product release and transport. One ISP exercise was organised in connection with a seismic ultimate dynamic response test. ISP exercises have proven to be very valuable to participating countries. They have been fruitful in identifying code application problems and in amplifying the contacts between the experimental and

  1. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 2B. General material: codes, standards, criteria. Working material

    International Nuclear Information System (INIS)

    1999-01-01

    In August 1991, following the SMiRT-11 Conference in Tokyo, a Technical Committee Meeting was held on 'Seismic safety issues relating to existing NPPs'. The proceedings of this TCM were subsequently compiled in an IAEA Working Material. One of the main recommendations of this TCM called for the harmonization of criteria and methods used in Member States in the seismic reassessment and upgrading of existing NPPs. Twenty-four institutions from thirteen countries participated in the CRP named 'Benchmark study for the seismic analysis and testing of WWER type NPPs'. Two types of WWER reactors (WWER-1000 and WWER-440/213) were selected for benchmarking, with Kozloduy NPP Units 5/6 and Paks NPP, respectively, serving as prototypes. Consistent with the recommendations of the TCM and the working paper prepared by the subsequent Consultants' Meeting, the focal activity of the CRP was the benchmarking exercises. A similar methodology was followed both for Paks NPP and for Kozloduy NPP Unit 5. Firstly, the NPP (mainly the reactor building) was tested using blast loading generated by a series of explosions from buried TNT charges. Records from this test were obtained at several free-field locations (both downhole and surface), the foundation mat, various elevations of structures, as well as some tanks and the stack. The benchmark participants were then provided with structural drawings, soil data and the free-field record of the blast experiment. Their task was to make a blind prediction of the response at preselected locations. The analytical results from these participants were then compared with the results from the test. Although the benchmarking exercises constituted the focus of the CRP, there were many other interesting problems related to the seismic safety of WWER type NPPs which were addressed by the participants. These involved generic studies, i.e. codes and standards used in original WWER designs and their comparison with current international practice; seismic analysis

  2. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added in order to obtain a unique selection.

  3. Control volume method for the thermal convection problem in a rotating spherical shell: test on the benchmark solution

    Czech Academy of Sciences Publication Activity Database

    Hejda, Pavel; Reshetnyak, M.

    2004-01-01

    Vol. 48, No. 4 (2004), p. 741-746. ISSN 0039-3169. R&D Projects: GA AV ČR KSK3012103. Grants - others: RFFR (RU) 03-05-64074; EC (XE) HPRI-CT-1999-00026. Institutional research plan: CEZ:AV0Z3012916. Keywords: liquid core; dynamo benchmark; finite volume method. Subject RIV: DE - Earth Magnetism, Geodesy, Geography. Impact factor: 0.447, year: 2004

  4. Standard Problems for CFD Validation for NGNP - Status Report

    International Nuclear Information System (INIS)

    Johnson, Richard W.; Schultz, Richard R.

    2010-01-01

    The U.S. Department of Energy (DOE) is conducting research and development to support the resurgence of nuclear power in the United States, both for electrical power generation and for the production of process heat required for industrial processes such as the manufacture of hydrogen for use as a fuel in automobiles. The project is called the Next Generation Nuclear Plant (NGNP) Project, which is based on a Generation IV reactor concept called the very high temperature reactor (VHTR). The VHTR will be of the prismatic or pebble bed type; the former is considered herein. The VHTR will use helium as the coolant at temperatures ranging from 250 C to perhaps 1000 C. While computational fluid dynamics (CFD) has not previously been used for the safety analysis of nuclear reactors in the United States, it is being considered for existing and future reactors. It is fully recognized that CFD simulation codes will have to be validated for flow physics reasonably close to the actual fluid dynamic conditions expected in normal operational and accident situations. A 'Standard Problem' is an experimental data set that represents an important physical phenomenon or phenomena, whose selection is based on a phenomena identification and ranking table (PIRT) for the reactor in question. It will be necessary to build a database containing a number of standard problems for use in validating CFD and systems analysis codes for the many physical problems that will need to be analyzed. The first two standard problems developed for CFD validation consider flow in the lower plenum of the VHTR and bypass flow in the prismatic core. Both involve scaled models built from quartz and designed to be installed in the INL's matched index of refraction (MIR) test facility. The MIR facility employs mineral oil as the working fluid at a constant temperature. At this temperature, the index of refraction of the mineral oil is the same as that of the quartz. This provides an advantage to the

  5. Standard physics solution to the solar neutrino problem?

    Energy Technology Data Exchange (ETDEWEB)

    Dar, A. [Technion-Israel Inst. of Tech., Haifa (Israel). Dept. of Physics

    1996-11-01

    The {sup 8}B solar neutrino flux predicted by the standard solar model (SSM) is consistent, within the theoretical and experimental uncertainties, with that measured at Kamiokande. The Gallium and Chlorine solar neutrino experiments, however, seem to imply that the {sup 7}Be solar neutrino flux is strongly suppressed compared with that predicted by the SSM. If the {sup 7}Be solar neutrino flux is suppressed, it can still be due to astrophysical effects not included in the simplistic SSM. Such effects include short-term fluctuations or periodic variation of the temperature in the solar core, rotational mixing of {sup 3}He in the solar core, and dense plasma effects which may strongly enhance p-capture by {sup 7}Be relative to e-capture. The new generation of solar observations which already look non-stop deep into the Sun, like Superkamiokande through neutrinos, and SOHO and GONG through acoustic waves, may point at the correct solution. Only Superkamiokande and/or future solar neutrino experiments, such as SNO, BOREXINO and HELLAZ, will be able to find out whether the solar neutrino problem is caused by neutrino properties beyond the minimal standard electroweak model or whether it is just a problem of the too simplistic standard solar model. (author) 1 fig., 3 tabs., refs.

  6. PMLB: a large benchmark suite for machine learning evaluation and comparison.

    Science.gov (United States)

    Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H

    2017-01-01

    The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
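
    PMLB is distributed as a Python package; below is a minimal sketch of the intended workflow, assuming the pmlb package (with its fetch_data helper) and scikit-learn are installed. The dataset name 'mushroom' is one example from the curated collection.

```python
from pmlb import fetch_data
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Fetch one curated benchmark dataset by name (features X, labels y).
X, y = fetch_data('mushroom', return_X_y=True)

# Score any scikit-learn estimator on the benchmark in the usual way;
# looping over pmlb.classification_dataset_names covers the whole suite.
scores = cross_val_score(RandomForestClassifier(n_estimators=100), X, y, cv=5)
print(f"mushroom: accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```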

  7. SSI and structural benchmarks

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Miller, C.A.; Costantino, C.J.; Graves, H.

    1987-01-01

    This paper presents the latest results of the ongoing program entitled Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC Office of Nuclear Regulatory Research. During FY 1986, efforts were focussed on three tasks, namely, (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it was concluded that the pore water can significantly influence the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was extended to include cases with variable water table depths. In this paper, results related to cut-off depths, beyond which the pore water effects can be ignored in seismic calculations, are addressed. Comprehensive numerical data are given for soil configurations typical of those encountered at nuclear plant sites. These data were generated using a modified version of the SLAM code, which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054), which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on structural benchmarks are described.

  8. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators...
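
    The workhorse DEA model can be written as a small linear program per unit. Below is a minimal sketch of the input-oriented CCR efficiency score using SciPy's linprog; the three-operator data set is invented for illustration and is not tied to any actual regulatory case.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0 (1.0 = on the frontier).
    X: inputs (units x m), Y: outputs (units x s), both nonnegative."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                    # minimise theta
    A_in  = np.hstack([-X[j0][:, None], X.T])      # sum lam*x <= theta*x0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])    # sum lam*y >= y0
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[j0]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

# Three hypothetical operators: inputs [opex, capex] -> output [energy delivered]
X = np.array([[10.0, 5.0], [12.0, 4.0], [20.0, 9.0]])
Y = np.array([[100.0], [110.0], [120.0]])
for j in range(3):
    print(f"DSO {j}: efficiency {ccr_efficiency(X, Y, j):.3f}")
```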

  9. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...

  10. Using standardized patients versus video cases for representing clinical problems in problem-based learning

    Directory of Open Access Journals (Sweden)

    Bo Young Yoon

    2016-06-01

    Purpose: The quality of problem representation is critical for developing students' problem-solving abilities in problem-based learning (PBL). This study investigates preclinical students' experience with standardized patients (SPs) as a problem representation method compared to using video cases in PBL. Methods: A cohort of 99 second-year preclinical students from Inje University College of Medicine (IUCM) responded to a Likert scale questionnaire on their learning experiences after they had experienced both video cases and SPs in PBL. The questionnaire consisted of 14 items with eight subcategories: problem identification, hypothesis generation, motivation, collaborative learning, reflective thinking, authenticity, patient-doctor communication, and attitude toward patients. Results: The results reveal that using SPs led to the preclinical students having significantly positive experiences in boosting patient-doctor communication skills; the perceived authenticity of their clinical situations; development of proper attitudes toward patients; and motivation, reflective thinking, and collaborative learning when compared to using video cases. The SPs also provided more challenges than the video cases during problem identification and hypothesis generation. Conclusion: SPs are more effective than video cases in delivering higher levels of authenticity in clinical problems for PBL. The interaction with SPs engages preclinical students in deeper thinking and discussion; growth of communication skills; development of proper attitudes toward patients; and motivation. Considering the higher cost of SPs compared with video cases, SPs could be used most advantageously during the preclinical period in the IUCM curriculum.

  11. Using standardized patients versus video cases for representing clinical problems in problem-based learning.

    Science.gov (United States)

    Yoon, Bo Young; Choi, Ikseon; Choi, Seokjin; Kim, Tae-Hee; Roh, Hyerin; Rhee, Byoung Doo; Lee, Jong-Tae

    2016-06-01

    The quality of problem representation is critical for developing students' problem-solving abilities in problem-based learning (PBL). This study investigates preclinical students' experience with standardized patients (SPs) as a problem representation method compared to using video cases in PBL. A cohort of 99 second-year preclinical students from Inje University College of Medicine (IUCM) responded to a Likert scale questionnaire on their learning experiences after they had experienced both video cases and SPs in PBL. The questionnaire consisted of 14 items with eight subcategories: problem identification, hypothesis generation, motivation, collaborative learning, reflective thinking, authenticity, patient-doctor communication, and attitude toward patients. The results reveal that using SPs led to the preclinical students having significantly positive experiences in boosting patient-doctor communication skills; the perceived authenticity of their clinical situations; development of proper attitudes toward patients; and motivation, reflective thinking, and collaborative learning when compared to using video cases. The SPs also provided more challenges than the video cases during problem identification and hypothesis generation. SPs are more effective than video cases in delivering higher levels of authenticity in clinical problems for PBL. The interaction with SPs engages preclinical students in deeper thinking and discussion; growth of communication skills; development of proper attitudes toward patients; and motivation. Considering the higher cost of SPs compared with video cases, SPs could be used most advantageously during the preclinical period in the IUCM curriculum.

  12. Benchmarking B-Cell Epitope Prediction for the Design of Peptide-Based Vaccines: Problems and Prospects

    Directory of Open Access Journals (Sweden)

    Salvador Eugenio C. Caoili

    2010-01-01

    To better support the design of peptide-based vaccines, refinement of methods to predict B-cell epitopes necessitates meaningful benchmarking against empirical data on the cross-reactivity of polyclonal antipeptide antibodies with proteins, such that the positive data reflect functionally relevant cross-reactivity (which is consistent with antibody-mediated change in protein function) and the negative data reflect genuine absence of cross-reactivity (rather than apparent absence of cross-reactivity due to artifactual masking of B-cell epitopes in immunoassays). These data are heterogeneous in view of multiple factors that complicate B-cell epitope prediction, notably physicochemical factors that define key structural differences between immunizing peptides and their cognate proteins (e.g., unmatched electrical charges along the peptide-protein sequence alignments). If the data are partitioned with respect to these factors, iterative parallel benchmarking against the resulting subsets of data provides a basis for systematically identifying and addressing the limitations of methods for B-cell epitope prediction as applied to vaccine design.

  13. Algebraic Multigrid Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    2017-08-01

    AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the BoomerAMG solver in the hypre library, a large linear solvers library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL, and is very similar to the AMG2013 benchmark with additional optimizations. The driver provided in the benchmark can build various test problems. The default problem is a Laplace-type problem with a 27-point stencil, which can be scaled up and is designed to solve a very large problem. A second problem simulates a time-dependent problem, in which various successively smaller systems are solved.
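
    For orientation, a minimal serial sketch of a 27-point Laplace-type test matrix is given below, assuming the common form with a centre coefficient of 26 and -1 on all 26 neighbours; the benchmark itself builds such problems in parallel and solves them with BoomerAMG, whereas this sketch merely assembles the matrix with SciPy and solves it with conjugate gradients.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 20                                          # grid points per dimension
B = sp.diags([1.0, 1.0, 1.0], [-1, 0, 1], shape=(n, n))

# kron(B, B, B) places a 1 on the centre point and all 26 neighbours, so
# 27*I - kron(B, B, B) is the 27-point operator: centre 26, neighbours -1.
A = 27.0 * sp.identity(n**3) - sp.kron(sp.kron(B, B), B)

b = np.ones(n**3)
x, info = cg(A.tocsr(), b, atol=1e-8)           # info == 0 means converged
print("cg info:", info, "residual norm:", np.linalg.norm(A @ x - b))
```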

  14. Benchmarking Complications Associated with Esophagectomy

    NARCIS (Netherlands)

    Low, Donald E.; Kuppusamy, Madhan Kumar; Alderson, Derek; Cecconello, Ivan; Chang, Andrew C.; Darling, Gail; Davies, Andrew; D'journo, Xavier Benoit; Gisbertz, Suzanne S.; Griffin, S. Michael; Hardwick, Richard; Hoelscher, Arnulf; Hofstetter, Wayne; Jobe, Blair; Kitagawa, Yuko; Law, Simon; Mariette, Christophe; Maynard, Nick; Morse, Christopher R.; Nafteux, Philippe; Pera, Manuel; Pramesh, C. S.; Puig, Sonia; Reynolds, John V.; Schroeder, Wolfgang; Smithers, Mark; Wijnhoven, B. P. L.

    2017-01-01

    Utilizing a standardized dataset with specific definitions to prospectively collect international data to provide a benchmark for complications and outcomes associated with esophagectomy. Outcome reporting in oncologic surgery has suffered from the lack of a standardized system for reporting

  15. The hierarchy problem of the electroweak standard model revisited

    Energy Technology Data Exchange (ETDEWEB)

    Jegerlehner, Fred [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]

    2013-05-15

    A careful renormalization group analysis of the electroweak Standard Model reveals that there is no hierarchy problem in the SM. In the broken phase a light Higgs turns out to be natural, as it is self-protected and self-tuned by the Higgs mechanism. This means that the scalar Higgs need not be protected by any extra symmetry, specifically supersymmetry, in order not to be much heavier than the other SM particles, which are protected by gauge or chiral symmetry. Thus the existence of quadratic cutoff effects in the SM cannot motivate the need for supersymmetric extensions of the SM, but in contrast plays an important role in triggering the electroweak phase transition and in shaping the Higgs potential in the early universe to drive inflation, as supported by observation.

  16. Contribution from twenty two years of CSNI International Standard Problems

    International Nuclear Information System (INIS)

    1998-03-01

    This report provides a brief overview of the contribution of some CSNI International Standard Problems (ISPs) to nuclear reactor safety issues (41 ISPs performed over the last 22 years). This CSNI activity on ISPs has been one of the major activities of the Principal Working Group No. 2 on Coolant System Behaviour. Its domain extended from thermal-hydraulics to several other accident domains following the main concerns of nuclear reactor safety, e.g., LOCA predictions, fuel behaviour, operator procedures, containment thermal-hydraulics, severe accidents, VVERs, etc. ISPs provide unique material and benefits for some safety-related issues. Clearly, all the technical findings and benefits provided by ISPs are still needed and contribute to the advancement of nuclear safety. The report provides an overview of the general objectives of ISPs, the content and types of ISPs, and the technical domains covered by ISPs, followed by a synthesis of technical findings and benefits to the scientific community.

  17. Transmission investment problems in Europe: Going beyond standard solutions

    International Nuclear Information System (INIS)

    Buijs, Patrik; Bekaert, David; Cole, Stijn; Van Hertem, Dirk; Belmans, Ronnie

    2011-01-01

    The European transmission grid is facing an investment challenge. There is a strong call for more transmission capacity. At the same time, the investment climate is fierce and troubled by public opposition, a complex regulatory framework, etc. Many transmission capacity expansion projects are delayed or canceled. In this paper different technology options suitable for increasing transmission capacity are discussed. The aim is to provide policy-makers with information on technologies without going too much into technical details. The focus is on opportunities and limitations to implement various technological alternatives in practice, including technical solutions that go beyond constructing new connection lines. The criteria used in this technology assessment are based on the obstacles reported in the European Priority Interconnection Plan. This ensures a realistic approach based on problems encountered in real projects. Although AC overhead lines (OHL) will remain the standard solution for grid expansion, it is argued that different technology options can overcome many obstacles that OHL face. Additionally, it is illustrated that the higher investment costs for some solutions can be offset with an increased benefit, e.g. by accomplishing investments with smaller delays due to fewer obstacles encountered. - Research highlights: → Assessment of real problems encountered in transmission investments. → Comparison of transmission technologies. → Techno-economic evaluation of transmission technologies.

  18. ISP33 standard problem on the PACTEL facility

    Energy Technology Data Exchange (ETDEWEB)

    Purhonen, H.; Kouhia, J. [VTT Energy, Lappeenranta (Finland)]; Kalli, H. [Lappeenranta Univ. of Technology (Finland)]

    1995-09-01

    ISP33 is the first OECD/NEA/CSNI standard problem related to VVER-type pressurized water reactors. The reference reactor of the PACTEL test facility, which was used to carry out the ISP33 experiment, is the VVER-440 reactor, two of which are located near the Finnish city of Loviisa. The objective of the ISP33 test was to study the natural circulation behaviour of VVER-440 reactors at different coolant inventories. Natural circulation was considered a suitable phenomenon for the first VVER-related ISP to focus on, due to its importance in most accidents and transients. The behaviour of the natural circulation was expected to differ from that of Western-type PWRs as a result of the effect of the horizontal steam generators and the hot leg loop seals. This ISP was conducted as a blind problem. The experiment was started at full coolant inventory. Single-phase natural circulation transported the energy from the core to the steam generators. The inventory was then reduced stepwise at about 900 s intervals, draining 60 kg each time from the bottom of the downcomer. The core power was about 3.7% of the nominal value. The test was terminated after the cladding temperatures began to rise. The ATHLET, CATHARE, RELAP5 (MODs 3, 2.5 and 2), RELAP4/MOD6, DINAMIKA and TECH-M4 codes were used in 21 pre-test and 20 post-test calculations submitted for ISP33.

  19. Enabling benchmarking and improving operational efficiency at nuclear power plants through adoption of a common process model: SNPM (standard nuclear performance model)

    International Nuclear Information System (INIS)

    Pete Karns

    2006-01-01

    To support the projected increase in base-load electricity demand, nuclear operating companies must maintain or improve upon current generation rates, all while their assets continue to age. Certainly new plants are and will be built; however, the bulk of the world's nuclear generation comes from plants constructed in the 1970s and 1980s. The nuclear energy industry in the United States has dramatically increased its electricity production over the past decade, from a 75.1% capacity factor in 1994 to 91.9% by 2002 (source: NEI US Nuclear Industry Net Capacity Factors - 1980 to 2003). This increase, coupled with lowered production costs, from $2.43 in 1994 to $1.71 in 2002 (adjusted for inflation; source: NEI US Nuclear Industry Net Production Costs 1980 to 2002), is due in large part to a focus on operational excellence that is driven by an industry effort to develop and share best practices for the purposes of benchmarking and improving overall performance. These best-practice processes, known as the standard nuclear performance model (SNPM), present an opportunity for European nuclear power generators who are looking to improve current production rates. In essence, the SNPM is a model for safe, reliable, and economically competitive nuclear power generation. The SNPM has been a joint effort of several industry bodies: the Nuclear Energy Institute, the Electric Utility Cost Group, and the Institute of Nuclear Power Operations (INPO). The standard nuclear performance model (see figure 1) comprises eight primary processes, supported by forty-four sub-processes and a number of company-specific activities and tasks. The processes were originally envisioned by INPO in 1994 and evolved into the SNPM that was originally launched in 1998. Since that time, communities of practice (CoPs) have emerged via workshops to further improve the processes and their inter-operability. CoP representatives include people from: nuclear power operating companies, policy bodies, industry suppliers and consultants, and

  20. Applications of the Space-Time Conservation Element and Solution Element (CE/SE) Method to Computational Aeroacoustic Benchmark Problems

    Science.gov (United States)

    Wang, Xiao-Yen; Himansu, Ananda; Chang, Sin-Chung; Jorgenson, Philip C. E.

    2000-01-01

    The internal propagation problems, the fan noise problem, and the turbomachinery noise problems are solved using the space-time conservation element and solution element (CE/SE) method. The internal propagation problems address the propagation of sound waves through a nozzle. Both the nonlinear and linear quasi-1D Euler equations are solved. Numerical solutions are presented and compared with the analytical solution. The fan noise problem concerns the effect of the sweep angle on the acoustic field generated by the interaction of a convected gust with a cascade of 3D flat plates. A parallel version of the 3D CE/SE Euler solver is developed and employed to obtain numerical solutions for a family of swept flat plates. Numerical solutions for sweep angles of 0, 5, 10, and 15 deg are presented. The turbomachinery problems describe the interaction of a 2D vortical gust with a cascade of flat-plate airfoils with/without a downstream moving grid. The 2D nonlinear Euler equations are solved and the converged numerical solutions are presented and compared with the corresponding analytical solution. All the comparisons demonstrate that the CE/SE method is capable of solving aeroacoustic problems with/without shock waves in a simple and efficient manner. Furthermore, the simple non-reflecting boundary condition used in the CE/SE method, which is not based on characteristic theory, works very well in 1D, 2D and 3D problems.

  1. Application of Jacobian-free Newton–Krylov method in implicitly solving two-fluid six-equation two-phase flow problems: Implementation, validation and benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Zou, Ling, E-mail: ling.zou@inl.gov; Zhao, Haihua; Zhang, Hongbin

    2016-04-15

    Highlights: • High-order spatial and fully implicit temporal numerical schemes for solving the two-phase six-equation model. • The Jacobian-free Newton–Krylov method was used to solve the discretized nonlinear equations. • Realistic flow regimes and closure correlations were used. • Extensive code validation using experimental data, and benchmark with RELAP5-3D. - Abstract: This work represents a first-of-its-kind successful application of advanced numerical methods to solving realistic two-phase flow problems with the two-fluid six-equation two-phase flow model. These advanced numerical methods include a high-resolution spatial discretization scheme with staggered grids, high-order fully implicit time integration schemes, and the Jacobian-free Newton–Krylov (JFNK) method as the nonlinear solver. The computer code developed in this work has been extensively validated with existing experimental flow boiling data in vertical pipes and rod bundles, which cover wide ranges of experimental conditions, such as pressure, inlet mass flux, wall heat flux and exit void fraction. An additional code-to-code benchmark with the RELAP5-3D code further verifies the correct code implementation. The combined methods employed in this work exhibit strong robustness in solving two-phase flow problems even when phase appearance (boiling) and realistic discrete flow regimes are considered. Transitional flow regimes used in existing system analysis codes, normally introduced to overcome numerical difficulty, were completely removed in this work. This in turn provides the possibility of utilizing more sophisticated flow regime maps in the future to further improve simulation accuracy.
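
    The key idea of JFNK is that the Krylov solver inside each Newton step only needs Jacobian-vector products, which can be approximated by finite differences of the residual, so the Jacobian is never assembled. Below is a minimal sketch on a 1D nonlinear diffusion-type problem (not the two-fluid six-equation model) using SciPy's newton_krylov:

```python
import numpy as np
from scipy.optimize import newton_krylov

n = 50
h = 1.0 / (n + 1)

def residual(u):
    """F(u) = -u'' + u**3 - 1 on (0,1), u(0)=u(1)=0, central differences."""
    F = np.empty_like(u)
    F[0]    = (2*u[0] - u[1]) / h**2 + u[0]**3 - 1.0
    F[-1]   = (2*u[-1] - u[-2]) / h**2 + u[-1]**3 - 1.0
    F[1:-1] = (2*u[1:-1] - u[:-2] - u[2:]) / h**2 + u[1:-1]**3 - 1.0
    return F

# Jacobian-free: the Krylov method (LGMRES here) only needs J*v products,
# which newton_krylov approximates by finite differences of F.
u = newton_krylov(residual, np.zeros(n), method='lgmres', verbose=False)
print(f"max residual = {np.abs(residual(u)).max():.2e}")
```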

  2. Finite element analyses for seismic shear wall international standard problem

    Energy Technology Data Exchange (ETDEWEB)

    Park, Y.J.; Hofmayer, C.H.

    1998-04-01

    Two identical reinforced concrete (RC) shear walls, which consist of web, flanges and massive top and bottom slabs, were tested up to ultimate failure under earthquake motions at the Nuclear Power Engineering Corporation's (NUPEC) Tadotsu Engineering Laboratory, Japan. NUPEC provided the dynamic test results to the OECD (Organization for Economic Cooperation and Development) Nuclear Energy Agency (NEA) for use as an International Standard Problem (ISP). The shear walls were intended to be part of a typical reactor building. One of the major objectives of the Seismic Shear Wall ISP (SSWISP) was to evaluate various seismic analysis methods for concrete structures used for design and seismic margin assessment. It also offered a unique opportunity to assess the state-of-the-art in nonlinear dynamic analysis of reinforced concrete shear wall structures under severe earthquake loadings. As a participant of the SSWISP workshops, Brookhaven National Laboratory (BNL) performed finite element analyses under the sponsorship of the U.S. Nuclear Regulatory Commission (USNRC). Three types of analysis were performed, i.e., monotonic static (push-over), cyclic static and dynamic analyses. Additional monotonic static analyses were performed by two consultants, F. Vecchio of the University of Toronto (UT) and F. Filippou of the University of California at Berkeley (UCB). The analysis results by BNL and the consultants were presented during the second workshop in Yokohama, Japan in 1996. A total of 55 analyses were presented during the workshop by 30 participants from 11 different countries. The major findings on the presented analysis methods, as well as engineering insights regarding the applicability and reliability of the FEM codes, are described in detail in this report. 16 refs., 60 figs., 16 tabs.

  3. The application of isogeometric analysis to the neutron diffusion equation for a pincell problem with an analytic benchmark

    International Nuclear Information System (INIS)

    Hall, S.K.; Eaton, M.D.; Williams, M.M.R.

    2012-01-01

    Highlights: ► Isogeometric analysis used to obtain solutions to the neutron diffusion equation. ► Exact geometry captured for a circular fuel pin within a square moderator. ► Comparisons are made between the finite element method and isogeometric analysis. ► Error and observed order of convergence found using an analytic solution. -- Abstract: In this paper the neutron diffusion equation is solved using Isogeometric Analysis (IGA), which is an attempt to generalise Finite Element Analysis (FEA) to include exact geometries. In contrast to FEA, the basis functions are rational functions instead of polynomials. These rational functions, called non-uniform rational B-splines, are used to capture both the geometry and approximate the solution. The method of manufactured solutions is used to verify a MATLAB implementation of IGA, which is then applied to a pincell problem. This is a circular uranium fuel pin within a square block of graphite moderator. A new method is used to compute an analytic solution to a simplified version of this problem, and is then used to observe the order of convergence of the numerical scheme. Comparisons are made against quadratic finite elements for the pincell problem, and it is found that the disadvantage factor computed using IGA is less accurate. This is due to a cancellation of errors in the FEA solution. A modified pincell problem with vacuum boundary conditions is then considered. IGA is shown to outperform FEA in this situation.
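
    The rational basis functions mentioned above are weighted, normalised B-splines. Below is a minimal sketch of the Cox-de Boor recursion and the resulting NURBS basis; the quadratic knot vector and the weights (1, 1/sqrt(2), 1), which reproduce a 90-degree circular arc, are chosen purely for illustration and are not taken from the paper.

```python
import numpy as np

def bspline_basis(i, p, t, x):
    """Cox-de Boor recursion for the B-spline basis N_{i,p} on knot vector t."""
    if p == 0:
        return np.where((t[i] <= x) & (x < t[i + 1]), 1.0, 0.0)
    left = right = 0.0
    if t[i + p] > t[i]:
        left = (x - t[i]) / (t[i + p] - t[i]) * bspline_basis(i, p - 1, t, x)
    if t[i + p + 1] > t[i + 1]:
        right = (t[i + p + 1] - x) / (t[i + p + 1] - t[i + 1]) \
                * bspline_basis(i + 1, p - 1, t, x)
    return left + right

def nurbs_basis(p, t, w, x):
    """Rational basis R_i = w_i N_{i,p} / sum_j w_j N_{j,p} used by IGA."""
    n = len(t) - p - 1
    N = np.array([bspline_basis(i, p, t, x) for i in range(n)])
    return (w[:, None] * N) / (w[:, None] * N).sum(axis=0)

# Quadratic example: weights (1, 1/sqrt(2), 1) reproduce a 90-degree circular
# arc exactly -- the kind of geometry polynomials cannot capture.
t = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
w = np.array([1.0, 1.0 / np.sqrt(2.0), 1.0])
x = np.linspace(0.0, 1.0, 5, endpoint=False)   # N_{i,0} uses half-open intervals
print(nurbs_basis(2, t, w, x).round(3))
```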

  4. SafeCare: An Innovative Approach for Improving Quality Through Standards, Benchmarking, and Improvement in Low- and Middle- Income Countries.

    Science.gov (United States)

    Johnson, Michael C; Schellekens, Onno; Stewart, Jacqui; van Ostenberg, Paul; de Wit, Tobias Rinke; Spieker, Nicole

    2016-08-01

    In low- and middle-income countries (LMICs), patients often have limited access to high-quality care because of a shortage of facilities and human resources, inefficiency of resource allocation, and limited health insurance. SafeCare was developed to provide innovative health care standards; surveyor training; a grading system for quality of care; a quality improvement process that is broken down into achievable, measurable steps to facilitate incremental improvement; and a private sector-supported health financing model. Three organizations (PharmAccess Foundation, Joint Commission International, and the Council for Health Service Accreditation of Southern Africa) launched SafeCare in 2011 as a formal partnership. Five SafeCare levels of improvement are allocated on the basis of an algorithm that incorporates both the overall score and weighted criteria, so that certain high-risk criteria need to be in place before a facility can move to the next SafeCare certification level. A customized quality improvement plan based on the SafeCare assessment results lists the specific, measurable activities that should be undertaken to address gaps in quality found during the initial assessment and to meet the next-level SafeCare certificate. The standards have been implemented in more than 800 primary and secondary facilities by qualified local surveyors, in partnership with various local public and private partner organizations, in six sub-Saharan African countries (Ghana, Kenya, Nigeria, Namibia, Tanzania, and Zambia). Expanding access to care and improving health care quality in LMICs will require a coordinated effort between institutions and other stakeholders. SafeCare's standards and assessment methodology can help build trust between stakeholders and lay the foundation for country-led quality monitoring systems.
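
    The allocation algorithm is described only qualitatively here, so the following is a purely hypothetical illustration of a score-plus-high-risk-cap rule; the 0-100 score, the thresholds and the capping logic are all invented for the sketch.

```python
def safecare_level(overall_score, high_risk_ok_up_to, thresholds=(20, 40, 60, 80)):
    """Hypothetical illustration of a SafeCare-style grading rule (the real
    algorithm and thresholds are not given in the record): an overall score
    (0-100) maps to levels 1-5, but certain high-risk criteria must be in
    place before a facility can advance, so the score-based level is capped
    at the highest level whose high-risk criteria are all met."""
    score_level = 1 + sum(overall_score >= cut for cut in thresholds)
    return min(score_level, high_risk_ok_up_to)

# Score alone would justify level 4, but high-risk criteria are only met up to 2.
print(safecare_level(overall_score=72, high_risk_ok_up_to=2))   # -> 2
print(safecare_level(overall_score=72, high_risk_ok_up_to=5))   # -> 4
```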

  5. Drowning - a scientometric analysis and data acquisition of a constant global problem employing density equalizing mapping and scientometric benchmarking procedures

    Directory of Open Access Journals (Sweden)

    Groneberg David A

    2011-10-01

    Background: Drowning is a constant global problem which claims approximately half a million victims worldwide each year, while the number of near-drowning victims is considerably higher. Public health strategies to reduce the burden of death are still limited. While research activities on the subject of drowning grow constantly, there is as yet no scientometric evaluation of the existing literature. Methods: The current study uses classical bibliometric tools and visualizing techniques such as density equalizing mapping to analyse and evaluate the scientific research in the field of drowning. The interpretation of the achieved results is also set in the context of the data collection of the WHO. Results: All studies related to drowning and listed in the ISI Web of Science database since 1900 were identified using the search term "drowning". Applying bibliometric methods, a constant increase in quantitative markers such as the number of publications per state, publication language or collaborations, as well as qualitative markers such as citations, was observed for research in the field of drowning. The combination with density equalizing mapping exposed different global patterns for research productivity and for the total number of drowning deaths and drowning rates, respectively. Chart techniques were used to illustrate bi- and multilateral research cooperation. Conclusions: The present study provides the first scientometric approach that visualizes research activity on the subject of drowning. It can be assumed that the scientific approach to this topic will achieve even greater dimensions because of its continuing actuality.

  6. Drowning - a scientometric analysis and data acquisition of a constant global problem employing density equalizing mapping and scientometric benchmarking procedures

    Science.gov (United States)

    2011-01-01

    Background: Drowning is a constant global problem which claims approximately half a million victims worldwide each year, while the number of near-drowning victims is considerably higher. Public health strategies to reduce the burden of death are still limited. While research activities on the subject of drowning grow constantly, there is as yet no scientometric evaluation of the existing literature. Methods: The current study uses classical bibliometric tools and visualizing techniques such as density equalizing mapping to analyse and evaluate the scientific research in the field of drowning. The interpretation of the achieved results is also set in the context of the data collection of the WHO. Results: All studies related to drowning and listed in the ISI Web of Science database since 1900 were identified using the search term "drowning". Applying bibliometric methods, a constant increase in quantitative markers such as the number of publications per state, publication language or collaborations, as well as qualitative markers such as citations, was observed for research in the field of drowning. The combination with density equalizing mapping exposed different global patterns for research productivity and for the total number of drowning deaths and drowning rates, respectively. Chart techniques were used to illustrate bi- and multilateral research cooperation. Conclusions: The present study provides the first scientometric approach that visualizes research activity on the subject of drowning. It can be assumed that the scientific approach to this topic will achieve even greater dimensions because of its continuing actuality. PMID:21999813

  7. Mathematical Problem Solving Ability of Eleventh Standard Students

    Science.gov (United States)

    Priya, J. Johnsi

    2017-01-01

    There is a general assertion among mathematics instructors that learners need to acquire problem-solving expertise, figure out how to communicate using mathematical knowledge and aptitude, develop numerical reasoning and thinking, and see the interconnectedness between mathematics and other subjects. Based on this perspective, the present study aims…

  8. Solution of a benchmark set problems for BWR and PWR reactors with UO{sub 2} and MOX fuels using CASMO-4; Solucion de un Conjunto de Problemas Benchmark para Reactores BWR y PWR con Combustible UO{sub 2} y MOX Usando CASMO-4

    Energy Technology Data Exchange (ETDEWEB)

    Martinez F, M.A.; Valle G, E. del; Alonso V, G. [IPN, ESFM, 07738 Mexico D.F. (Mexico)]. e-mail: mike_ipn_esfm@hotmail.com

    2007-07-01

    This work presents some of the results for a set of benchmark problems for light water reactors that allow the physics of these reactors' fuels to be studied. These benchmark problems were proposed by Akio Yamamoto and collaborators in 2002 and include two fuel types: uranium dioxide (UO{sub 2}) and mixed oxide (MOX). The problems cover three different configurations: a unit cell for a fuel rod, a PWR fuel assembly and a BWR fuel assembly, which allows a comprehensive analysis of the problems related to the performance of new-generation, high-burnup fuels in light water reactors. These benchmark problems also help in understanding in-core fuel management for both BWRs and PWRs. The calculations were carried out with CMS (Core Management Software), in particular with CASMO-4, a code designed to perform burnup analyses of fuel rod cells as well as fuel assemblies for both PWRs and BWRs, and which is in turn part of the CMS code system. (Author)

  9. Handbook of critical experiments benchmarks

    International Nuclear Information System (INIS)

    Durst, B.M.; Bierman, S.R.; Clayton, E.D.

    1978-03-01

    Data from critical experiments have been collected together for use as benchmarks in evaluating calculational techniques and nuclear data. These benchmarks have been selected from the numerous experiments performed on homogeneous plutonium systems. No attempt has been made to reproduce all of the data that exist. The primary objective in the collection of these data is to present representative experimental data defined in a concise, standardized format that can easily be translated into computer code input.

  10. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy; in other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs: prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth. Throughout

  11. Verification of the TASS/SMR Code using Standard Problems

    Energy Technology Data Exchange (ETDEWEB)

    Kim, See Darl; Kim, S. H.; Lee, G. H.; Chung, Y. J.; Hwang, Y. D

    2008-06-15

    The TASS/SMR (Transient And Setpoint Simulation/System-integrated Modular Reactor) code is a thermal-hydraulic analysis computer program developed for the safety and transient analysis of the integral-type pressurized water reactor SMART (System-integrated Modular Advanced ReacTor) plant. Various models reflecting the design features of SMART were therefore implemented in the TASS/SMR code. Since the technologies used for the integral reactor SMART differ from those of existing reactors, the mathematical models of the TASS/SMR code that describe the thermal-hydraulic processes and phenomena, as well as its numerical solution methods, need to be verified for the various operating, transient and accident conditions of SMART. The reliability of the analysis results of the code also needs to be verified using proper experimental data. This is a basic study on the model verification of the TASS/SMR code and its applicability to the safety analysis of SMART. For this purpose, the basic equation set and several special models of the TASS/SMR code are reviewed, and basic conceptual and analytical problems are selected to assess the fundamental numerical analysis capability. The selected basic problems are analyzed using the TASS/SMR code and the results are evaluated in comparison with known reference solutions or pertinent physical models, to assess the numerical capability of the TASS/SMR code and the reliability of its analysis results. Version 2.0 of the TASS/SMR code was used for this analysis.

  12. Solving the Standard Model Problems in Softened Gravity

    CERN Document Server

    Salvio, Alberto

    2016-11-16

    The Higgs naturalness problem is solved if the growth of Einstein's gravitational interaction is softened at an energy $ \\lesssim 10^{11}\\,$GeV (softened gravity). We work here within an explicit realization where the Einstein-Hilbert Lagrangian is extended to include terms quadratic in the curvature and a non-minimal coupling with the Higgs. We show that this solution is preserved by adding three right-handed neutrinos with masses below the electroweak scale, accounting for neutrino oscillations, dark matter and the baryon asymmetry. The smallness of the right-handed neutrino masses (compared to the Planck scale) and the QCD $\\theta$-term are also shown to be natural. We prove that a possible gravitational source of CP violation cannot spoil the model, thanks to the presence of right-handed neutrinos. Starobinsky inflation can occur in this context, even if we live in a metastable vacuum.

  13. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks, R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem.

  14. Problems with the implementation of international standards for financial reporting and international audit standards

    OpenAIRE

    Dimitrova, Janka

    2012-01-01

    The International Financial Reporting Standards (IFRS) are designed for application in general-purpose financial reports and other financial reporting by all profit-oriented entities. The International Auditing Standards (IAS) set out the framework for carrying out the review of financial reporting of entities subject to audit, in order to verify the authenticity of the information and raise the credibility of financial statements. Quality implementation...

  15. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 2. Generic material: Codes, standards, criteria. Working material

    International Nuclear Information System (INIS)

    1995-01-01

    The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated subsequent to the request from representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consultants' Meeting which followed resulted in producing a working document for the CRP. It was decided that a benchmark study would be the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested on a full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for WWER-1000 and Paks for WWER-440/213 nuclear power plants. This volume of Working material contains reports related to generic material, namely codes, standards and criteria for benchmark analysis

  16. Business transactions and standards. Towards a system of concepts and a method for early problem identification in standard implementation projects

    NARCIS (Netherlands)

    Rukanova, B.D.

    2005-01-01

    To summarize: in answer to research question one we constructed a system of concepts, while in answer to research question two we proposed a method for applying this system of concepts in practice in order to identify potential problems in the early stages of standard implementation projects.

  17. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessment of the operational performance of radiation detection systems. This can, however, result in large and complex scenarios which are time-consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL's ADVANTG) which combine the benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations, with a preference for scenarios which include experimental data or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty, to include gamma transport, neutron transport, or both, and to represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations was assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  18. A proposal for benchmarking learning objects

    OpenAIRE

    Rita Falcão; Alfredo Soeiro

    2007-01-01

    This article proposes a methodology for benchmarking learning objects. It aims to deal with two problems related to e-learning: the validation of learning using this method, and the return on investment of the process of development and use: effectiveness and efficiency. This paper describes a proposal for evaluating learning objects (LOs) through benchmarking, based on the Learning Object Metadata Standard and on an adaptation of the main tools of the BENVIC project. The Benchmarking of Learning O...

  19. Oblique projections and standard-form transformations for discrete inverse problems

    DEFF Research Database (Denmark)

    Hansen, Per Christian

    2013-01-01

    This tutorial paper considers a specific computational tool for the numerical solution of discrete inverse problems, known as the standard-form transformation, by which we can treat general Tikhonov regularization problems efficiently. In the tradition of B. N. Datta's expositions of numerical linear algebra, we use the close relationship between oblique projections, pseudoinverses, and matrix computations to derive a simple geometric motivation and algebraic formulation of the standard-form transformation.
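
    A minimal numerical sketch of the standard-form transformation for general-form Tikhonov regularization, min ||Ax - b||^2 + lam^2 ||Lx||^2, assuming for simplicity that L is square and invertible (the tutorial's general treatment uses oblique projections and pseudoinverses to handle rectangular L):

        import numpy as np

        def tikhonov_general_form(A, L, b, lam):
            # Standard-form transformation under the simplifying assumption
            # that L is square and invertible: substitute x = inv(L) @ xbar,
            # reducing the problem to min ||A_bar xbar - b||^2 + lam^2 ||xbar||^2.
            L_inv = np.linalg.inv(L)
            A_bar = A @ L_inv
            n = A_bar.shape[1]
            # Normal equations of the standard-form problem.
            xbar = np.linalg.solve(A_bar.T @ A_bar + lam**2 * np.eye(n),
                                   A_bar.T @ b)
            return L_inv @ xbar          # transform back to the original variable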

  20. The hierarchy problem and the cosmological constant problem in the Standard Model

    International Nuclear Information System (INIS)

    Jegerlehner, Fred

    2015-03-01

    We argue that the SM in the Higgs phase does not suffer from a ''hierarchy problem'' and that similarly the ''cosmological constant problem'' resolves itself if we understand the SM as a low energy effective theory emerging from a cut-off medium at the Planck scale. We discuss these issues under the condition of a stable Higgs vacuum, which allows the SM to be extended up to the Planck length. The bare Higgs boson mass then changes sign below the Planck scale, such that the SM in the early universe is in the symmetric phase. The cut-off enhanced Higgs mass term as well as the quartically enhanced cosmological constant term trigger the inflation of the early universe. The coefficients of the shift between bare and renormalized Higgs mass, as well as of the shift between bare and renormalized vacuum energy density, exhibit close-by zeros at some point below the Planck scale. The zeros are matching points between short-distance and renormalized low-energy quantities. Since inflation tunes the total energy density to the critical value of a flat universe, Ω{sub tot} = ρ{sub tot}/ρ{sub crit} = Ω{sub Λ} + Ω{sub matter} + Ω{sub radiation} = 1, it is obvious that Ω{sub Λ} today is of the order of Ω{sub tot}, given that 1 > Ω{sub matter}, Ω{sub radiation} > 0 and these saturate the total density to only about 26%, the dominant part being dark matter (21%).

  1. Benchmarking the UAF Tsunami Code

    Science.gov (United States)

    Nicolsky, D.; Suleimani, E.; West, D.; Hansen, R.

    2008-12-01

    We have developed a robust numerical model to simulate propagation and run-up of tsunami waves in the framework of non-linear shallow water theory. A temporal position of the shoreline is calculated using the free-surface moving boundary condition. The numerical code adopts a staggered leapfrog finite-difference scheme to solve the shallow water equations formulated for depth-averaged water fluxes in spherical coordinates. To increase spatial resolution, we construct a series of telescoping embedded grids that focus on areas of interest. For large-scale problems, a parallel version of the algorithm is developed by employing a domain decomposition technique. The developed numerical model is benchmarked in an exhaustive series of tests suggested by NOAA. We conducted analytical and laboratory benchmarking for the cases of solitary wave run-up on simple and composite beaches, run-up of a solitary wave on a conical island, and the extreme run-up in the Monai Valley, Okushiri Island, Japan, during the 1993 Hokkaido-Nansei-Oki tsunami. Additionally, we field-tested the developed model to simulate the November 15, 2006 Kuril Islands tsunami, and compared the simulated water height to observations at several DART buoys. In all conducted tests we calculated a numerical solution with an accuracy recommended by NOAA standards. In this work we summarize the results of numerical benchmarking of the code, its strengths and limits with regard to reproduction of fundamental features of coastal inundation, and also illustrate some possible improvements. We applied the developed model to simulate potential inundation of the city of Seward, located in Resurrection Bay, Alaska. To calculate the areal extent of potential inundation, we take into account available near-shore bathymetry and inland topography on a grid of 15 meter resolution. By choosing several scenarios of potential earthquakes, we calculated the maximal areal extent of Seward inundation. As a test to validate our model, we
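
    As a sketch of the kind of scheme described, the fragment below advances the linearized 1-D shallow water equations one step on a staggered grid with a leapfrog-style update (an assumption-laden toy: the nonlinear terms, spherical coordinates, moving shoreline and nested grids of the actual code are all omitted):

        import numpy as np

        def shallow_water_step(eta, q, h, dx, dt, g=9.81):
            # eta: surface elevation at the N cell centers
            # q:   depth-averaged flux at the N+1 cell faces (staggered grid)
            # h:   still-water depth at the faces
            # Flux update from the elevation gradient (interior faces only).
            q[1:-1] -= g * h[1:-1] * dt / dx * (eta[1:] - eta[:-1])
            # Elevation update from the flux divergence.
            eta -= dt / dx * (q[1:] - q[:-1])
            return eta, q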

  2. Parameter Curation for Benchmark Queries

    NARCIS (Netherlands)

    Gubichev, Andrey; Boncz, Peter

    2014-01-01

    In this paper we consider the problem of generating parameters for benchmark queries so these have stable behavior despite being executed on datasets (real-world or synthetic) with skewed data distributions and value correlations. We show that uniform random sampling of the substitution parameters

  3. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  4. A non-standard optimal control problem arising in an economics application

    Directory of Open Access Journals (Sweden)

    Alan Zinober

    2013-04-01

    A recent optimal control problem in the area of economics has mathematical properties that do not fall into the standard optimal control problem formulation. In our problem the state value at the final time, y(T) = z, is free and unknown, and additionally the Lagrangian integrand in the functional is a piecewise constant function of the unknown value y(T). This is not a standard optimal control problem and cannot be solved using Pontryagin's Minimum Principle with the standard boundary conditions at the final time. In the standard problem a free final state y(T) yields a necessary boundary condition p(T) = 0, where p(t) is the costate. Because the integrand is a function of y(T), the new necessary condition is that y(T) should be equal to a certain integral that is a continuous function of y(T). We introduce a continuous approximation of the piecewise constant integrand function by using a hyperbolic tangent approach and solve an example using a C++ shooting algorithm with Newton iteration for solving the Two Point Boundary Value Problem (TPBVP). The minimising free value y(T) is calculated in an outer loop iteration using the Golden Section or Brent algorithm. Comparative nonlinear programming (NP) discrete-time results are also presented.
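
    The two numerical ingredients named above, smoothing a piecewise constant integrand with a hyperbolic tangent and minimising over the free final state in an outer loop, can be sketched as follows (a toy surrogate, not the paper's economics model: the inner TPBVP shooting solve is replaced by a simple convex stand-in so the example runs end to end):

        import numpy as np
        from scipy.optimize import minimize_scalar

        def c_smooth(z, k=50.0):
            # tanh smoothing of a piecewise constant coefficient that jumps
            # from c_lo to c_hi at z = 1 (plateau values are assumptions).
            c_lo, c_hi = 1.0, 2.0
            return c_lo + 0.5 * (c_hi - c_lo) * (1.0 + np.tanh(k * (z - 1.0)))

        def outer_cost(z):
            # Stand-in for the inner shooting/Newton TPBVP solve at fixed y(T) = z.
            return c_smooth(z) + (z - 0.8) ** 2

        # Outer loop: Golden Section search over the free final state y(T).
        res = minimize_scalar(outer_cost, bracket=(0.0, 2.0), method='golden')
        print(res.x, res.fun)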

  5. A benchmarking study

    Directory of Open Access Journals (Sweden)

    H. Groessing

    2015-02-01

    A benchmark study for permeability measurement is presented. Past studies by other research groups, which focused on the reproducibility of 1D permeability measurements, showed high standard deviations of the obtained permeability values (25%), even though a defined test rig with required specifications was used. Within this study, the reproducibility of capacitive in-plane permeability testing system measurements was benchmarked by comparing results of two research sites using this technology. The reproducibility was compared using a glass fibre woven textile and a carbon fibre non-crimped fabric (NCF). These two material types were taken into consideration due to the different electrical properties of glass and carbon with respect to the dielectric capacitive sensors of the permeability measurement systems. In order to determine the unsaturated permeability characteristics as a function of fibre volume content, the measurements were executed at three different fibre volume contents including five repetitions. It was found that the stability and reproducibility of the presented in-plane permeability measurement system is very good in the case of the glass fibre woven textiles. This is true for the comparison of the repetition measurements as well as for the comparison between the two different permeameters. These positive results were confirmed by a comparison to permeability values of the same textile obtained with an older-generation permeameter applying the same measurement technology. It was also shown that a correct determination of the grammage and the material density is crucial for correct correlation of measured permeability values and fibre volume contents.
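
    For context, 1-D unsaturated permeability is commonly evaluated from flow-front progression via Darcy's law, x_f(t)^2 = (2*K*dP/(phi*mu))*t at constant injection pressure; the sketch below fits that relation to invented front-position data. It illustrates only the underlying relation, not the capacitive-sensor system used in the study:

        import numpy as np

        mu = 0.1        # test fluid viscosity, Pa*s (assumed)
        dP = 1.0e5      # injection pressure difference, Pa (assumed)
        phi = 0.5       # porosity = 1 - fibre volume content (assumed)

        t = np.array([10.0, 20.0, 30.0, 40.0])       # s
        xf = np.array([0.05, 0.071, 0.087, 0.10])    # m, flow-front positions (toy)

        # Fit x_f^2 = slope * t, with slope = 2*K*dP/(phi*mu).
        slope, _ = np.polyfit(t, xf**2, 1)
        K = slope * phi * mu / (2.0 * dP)
        print(f"K = {K:.3e} m^2")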

  6. [Problems in quality standard research of new traditional Chinese medicine compound].

    Science.gov (United States)

    Zhou, Gang; He, Yan-Ping

    2014-09-01

    New traditional Chinese medicine (TCM) compounds are the main subject of new TCM drug research, and new Chinese herbal compounds reflect the characteristics of TCM theory. Quality standard research on new TCM compounds is one of the main components of pharmaceutical research, and is also a focus of the pharmaceutical evaluation of new drugs. Although the research level of new TCM compounds has improved greatly in recent years, the author, during review work, still found some common problems in the quality standard research data of new TCM compounds. This paper analyzes the current quality standards for new TCM compounds and the problems in the research data, covering the selection of content-determination indexes, the determination of content ranges, and the design concept of quality standards, and sets out the issues developers need to consider. Product quality cannot be ensured by the quality standard alone, but the quality standard is a key and important element for ensuring product quality, and improving product quality is inseparable from quality standards. With the development of science and technology, and under the guidance of the quality-by-design concept, the quality standard system for new TCM compounds will become more scientific, systematic and complete.

  7. Preliminary results of the seventh three-dimensional AER dynamic benchmark problem calculation. Solution with DYN3D and RELAP5-3D codes

    International Nuclear Information System (INIS)

    Bencik, M.; Hadek, J.

    2011-01-01

    The paper gives a brief survey of the seventh three-dimensional AER dynamic benchmark calculation results obtained with the codes DYN3D and RELAP5-3D at the Nuclear Research Institute Rez. This benchmark was defined at the twentieth AER Symposium in Hanasaari (Finland). It is focused on the investigation of transient behaviour in a WWER-440 nuclear power plant. Its initiating event is the opening of the main isolation valve and re-connection of the loop with its main circulation pump in operation. The WWER-440 plant is at the end of the first fuel cycle and in hot full power conditions. Stationary and burnup calculations were performed with the code DYN3D. The transient calculation was made with the system code RELAP5-3D. The two-group homogenized cross-section library HELGD05, created by the HELIOS code, was used for the generation of reactor core neutronic parameters. The detailed six-loop model of NPP Dukovany was adapted for the purposes of the seventh AER dynamic benchmark. The RELAP5-3D full core neutronic model was coupled with 49 core thermal-hydraulic channels and 8 reflector channels connected with the three-dimensional model of the reactor vessel. A detailed nodalization of the reactor downcomer, lower and upper plenum was used. Mixing in the lower and upper plenum was simulated. The first part of the paper contains a brief characterization of the RELAP5-3D system code and a short description of the NPP input deck and reactor core model. The second part shows the time dependencies of important global and local parameters. (Authors)

  8. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution's competitive position and learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking application in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled the development of a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects, relating them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature and the author's experience of active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived from the conducted analysis.

  9. Benchmarking Multilayer-HySEA model for landslide generated tsunami. NTHMP validation process.

    Science.gov (United States)

    Macias, J.; Escalante, C.; Castro, M. J.

    2017-12-01

    Landslide tsunami hazard may be dominant along significant parts of the coastline around the world, in particular in the USA, as compared to hazards from other tsunamigenic sources. This fact motivated NTHMP to benchmark models for landslide-generated tsunamis, following the same methodology already used for standard tsunami models where the source is seismic. To perform the above-mentioned validation process, a set of seven candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks. The Multilayer-HySEA model, including non-hydrostatic effects, has been used to perform all the benchmark problems dealing with laboratory experiments proposed in the workshop organized at Texas A&M University - Galveston on January 9-11, 2017 by NTHMP. The aim of this presentation is to show some of the latest numerical results obtained with the Multilayer-HySEA (non-hydrostatic) model in the framework of this validation effort. Acknowledgements: This research has been partially supported by the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and University of Malaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  10. California residential energy standards: problems and recommendations relating to implementation, enforcement, and design. [Thermal insulation

    Energy Technology Data Exchange (ETDEWEB)

    1977-08-01

    Documents relevant to the development and implementation of the California energy insulation standards for new residential buildings were evaluated and a survey was conducted to determine problems encountered in the implementation, enforcement, and design aspects of the standards. The impact of the standards on enforcement agencies, designers, builders and developers, manufacturers and suppliers, consumers, and the building process in general is summarized. The impact on construction costs and energy savings varies considerably because of the wide variation in prior insulation practices and climatic conditions in California. The report concludes with a series of recommendations covering all levels of government and the building process. (MCW)

  11. The problem of epistemic jurisdiction in global governance: The case of sustainability standards for biofuels.

    Science.gov (United States)

    Winickoff, David E; Mondou, Matthieu

    2017-02-01

    While there is ample scholarly work on regulatory science within the state, or single-sited global institutions, there is less on its operation within complex modes of global governance that are decentered, overlapping, multi-sectorial and multi-leveled. Using a co-productionist framework, this study identifies 'epistemic jurisdiction' - the power to produce or warrant technical knowledge for a given political community, topical arena or geographical territory - as a central problem for regulatory science in complex governance. We explore these dynamics in the arena of global sustainability standards for biofuels. We select three institutional fora as sites of inquiry: the European Union's Renewable Energy Directive, the Roundtable on Sustainable Biomaterials, and the International Organization for Standardization. These cases allow us to analyze how the co-production of sustainability science responds to problems of epistemic jurisdiction in the global regulatory order. First, different problems of epistemic jurisdiction beset different standard-setting bodies, and these problems shape both the content of regulatory science and the procedures designed to make it authoritative. Second, in order to produce global regulatory science, technical bodies must manage an array of conflicting imperatives - including scientific virtue, due process and the need to recruit adoptees to perpetuate the standard. At different levels of governance, standard drafters struggle to balance loyalties to country, to company or constituency and to the larger project of internationalization. Confronted with these sometimes conflicting pressures, actors across the standards system quite self-consciously maneuver to build or retain authority for their forum through a combination of scientific adjustment and political negotiation. Third, the evidentiary demands of regulatory science in global administrative spaces are deeply affected by 1) a market for standards, in which firms and states can

  12. The Relationship between Students' Performance on Conventional Standardized Mathematics Assessments and Complex Mathematical Modeling Problems

    Science.gov (United States)

    Kartal, Ozgul; Dunya, Beyza Aksu; Diefes-Dux, Heidi A.; Zawojewski, Judith S.

    2016-01-01

    Critical to many science, technology, engineering, and mathematics (STEM) career paths is mathematical modeling--specifically, the creation and adaptation of mathematical models to solve problems in complex settings. Conventional standardized measures of mathematics achievement are not structured to directly assess this type of mathematical…

  13. The development of code benchmarks

    International Nuclear Information System (INIS)

    Glass, R.E.

    1986-01-01

    Sandia National Laboratories has undertaken a code benchmarking effort to define a series of cask-like problems having both numerical solutions and experimental data. The development of the benchmarks includes: (1) model problem definition, (2) code intercomparison, and (3) experimental verification. The first two steps are complete and a series of experiments are planned. The experiments will examine the elastic/plastic behavior of cylinders for both the end and side impacts resulting from a nine meter drop. The cylinders will be made from stainless steel and aluminum to give a range of plastic deformations. This paper presents the results of analyses simulating the model's behavior using materials properties for stainless steel and aluminum

  14. The Inverted Pendulum Benchmark in Nonlinear Control Theory: A Survey

    Directory of Open Access Journals (Sweden)

    Olfa Boubaker

    2013-05-01

    For at least fifty years, the inverted pendulum has been the most popular benchmark, among others, in nonlinear control theory. The fundamental focus of this work is to enhance the wealth of this robotic benchmark and provide an overall picture of historical and current trend developments in nonlinear control theory, based on its simple structure and its rich nonlinear model. In this review, we try to explain the high popularity of this robotic benchmark, which is frequently used to realize experimental models, validate the efficiency of emerging control techniques and verify their implementation. We also attempt to provide details on how many standard techniques in control theory fail when tested on such a benchmark. More than 100 references in the open literature, dating back to 1960, are compiled to provide a survey of emerging ideas and challenging problems in nonlinear control theory accomplished and verified using this robotic system. Possible future trends that we can envision based on the review of this area are also presented.
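
    As one concrete instance of a standard technique applied to this benchmark, the sketch below stabilizes the upright equilibrium of a linearized cart-pole (point-mass pendulum on a massless rod) with an LQR controller; parameter values are illustrative, not taken from the survey:

        import numpy as np
        from scipy.linalg import solve_continuous_are

        # Cart-pole linearized about the upright pose; state x = [p, p', th, th'].
        m, M, l, g = 0.2, 1.0, 0.5, 9.81      # pendulum mass, cart mass, length
        A = np.array([[0, 1, 0, 0],
                      [0, 0, -m * g / M, 0],
                      [0, 0, 0, 1],
                      [0, 0, (M + m) * g / (M * l), 0]])
        B = np.array([[0.0], [1 / M], [0.0], [-1 / (M * l)]])

        Q, R = np.diag([1.0, 1.0, 10.0, 1.0]), np.array([[0.1]])
        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)       # u = -K x stabilizes the upright pose
        print(K)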

  15. A biosegmentation benchmark for evaluation of bioimage analysis methods

    Directory of Open Access Journals (Sweden)

    Kvilekval Kristian

    2009-11-01

    Background: We present a biosegmentation benchmark that includes infrastructure, datasets with associated ground truth, and validation methods for biological image analysis. The primary motivation for creating this resource comes from the fact that it is very difficult, if not impossible, for an end-user to choose from the wide range of segmentation methods available in the literature for a particular bioimaging problem. No single algorithm is likely to be equally effective on a diverse set of images, and each method has its own strengths and limitations. We hope that our benchmark resource will be of considerable help both to bioimaging researchers looking for novel image processing methods and to image processing researchers exploring the application of their methods to biology. Results: Our benchmark consists of different classes of images and ground truth data, ranging in scale from the subcellular and cellular to the tissue level, each of which poses its own set of challenges to image analysis. The associated ground truth data can be used to evaluate the effectiveness of different methods, to improve methods and to compare results. Standard evaluation methods and some analysis tools are integrated into a database framework that is available online at http://bioimage.ucsb.edu/biosegmentation/. Conclusion: This online benchmark will facilitate the integration and comparison of image analysis methods for bioimages. While the primary focus is on biological images, we believe that the dataset and infrastructure will be of interest to researchers and developers working with biological image analysis, image segmentation and object tracking in general.

  16. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    Energy Technology Data Exchange (ETDEWEB)

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  17. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as important...

  18. OECD/DOE/CEA VVER-1000 Coolant Transient Benchmark. Summary Record of the First Workshop (V1000-CT1)

    International Nuclear Information System (INIS)

    2003-01-01

    The first workshop for the VVER-1000 Coolant Transient (V1000CT) Benchmark was hosted by the Commissariat a l'Energie Atomique, Centre d'Etudes de Saclay, France. The V1000CT benchmark defines standard problems for the validation of coupled three-dimensional (3-D) neutron-kinetics/system thermal-hydraulics codes for application to Soviet-designed VVER-1000 reactors, using actual plant data without any scaling. The overall objective is to assess computer codes used in the safety analysis of VVER power plants, specifically for their use in reactivity transient simulations in a VVER-1000. The V1000CT benchmark consists of two phases: V1000CT-1 - simulation of the switching on of one main coolant pump (MCP) while the other three MCPs are in operation, and V1000CT-2 - calculation of coolant mixing tests and a Main Steam Line Break (MSLB) scenario. Further background information on this benchmark can be found at the OECD/NEA benchmark web site. The purpose of the first workshop was to review the benchmark activities after the Starter Meeting held last year in Dresden, Germany: to discuss the participants' feedback and modifications introduced in the Benchmark Specifications on Phase 1; to present and to discuss modelling issues and preliminary results from the three exercises of Phase 1; to discuss the modelling issues of Exercise 1 of Phase 2; and to define the work plan and schedule in order to complete the two phases

  19. Solving non-standard packing problems by global optimization and heuristics

    CERN Document Server

    Fasano, Giorgio

    2014-01-01

    This book results from a long-term research effort aimed at tackling complex non-standard packing issues which arise in space engineering. The main research objective is to optimize cargo loading and arrangement, in compliance with a set of stringent rules. Complicated geometrical aspects are also taken into account, in addition to balancing conditions based on attitude control specifications. Chapter 1 introduces the class of non-standard packing problems studied. Chapter 2 gives a detailed explanation of a general model for the orthogonal packing of tetris-like items in a convex domain. A number of additional conditions are looked at in depth, including the prefixed orientation of subsets of items, the presence of unusable holes, separation planes and structural elements, relative distance bounds as well as static and dynamic balancing requirements. The relative feasibility sub-problem which is a special case that does not have an optimization criterion is discussed in Chapter 3. This setting can be exploit...

  20. Perspectives and Problems of Harmonizing Energy Legislation of Ukraine with the European Union Standards

    Directory of Open Access Journals (Sweden)

    Volodymyrivna Komelina Olha

    2017-12-01

    The essence, features and components of the energy market were investigated in the article. Regulatory support for energy efficiency and energy saving in the European Union and Ukraine was analyzed. Ukraine's obligations arising from the harmonization of its energy legislation with EU standards were defined. Problems in housing and communal services (HCS), one of the largest consumers of energy resources, were revealed.

  1. Lithium isotopic abundances in metal-poor stars: a problem for standard big bang nucleosynthesis?

    International Nuclear Information System (INIS)

    Nissen, P.E.; Asplund, M.; Lambert, D.L.; Primas, F.; Smith, V.V.

    2005-01-01

    Spectra obtained with VLT/UVES suggest the existence of the {sup 6}Li isotope in several metal-poor stars at a level that challenges ideas about its synthesis. The {sup 7}Li abundance is, on the other hand, a factor of three lower than predicted by standard Big Bang nucleosynthesis theory. Both problems may be explained if decaying supersymmetric particles affect the synthesis of light elements in the Big Bang. (orig.)

  2. IAEA sodium void reactivity benchmark calculations

    International Nuclear Information System (INIS)

    Hill, R.N.; Finck, P.J.

    1992-01-01

    In this paper, the IAEA-1992 ''Benchmark Calculation of Sodium Void Reactivity Effect in Fast Reactor Core'' problem is evaluated. The proposed design is a large axially heterogeneous oxide-fueled fast reactor, as described in Section 2; the core utilizes a sodium plenum above the core to enhance leakage effects. The calculation methods used in this benchmark evaluation are described in Section 3. In Section 4, the calculated core performance results for the benchmark reactor model are presented; and in Section 5, the influence of steel and interstitial sodium heterogeneity effects is estimated

  3. FEDERAL PERFORMANCE STANDARDS OF SELF-REGULATING ORGANIZATIONS OF ARBITRATION MANAGERS AND ARBITRATION MANAGERS: PROBLEMS AND PROSPECTS

    Directory of Open Access Journals (Sweden)

    V. N. Alferov

    2014-01-01

    This paper analyzes the practical aspects of the formation of federal standards, internal standards and rules of self-regulating organizations of arbitration managers and of arbitration managers themselves. Unsolved problems concerning decision-making mechanisms in bankruptcy proceedings that require reflection in federal standards are identified, and appropriate proposals for inclusion in the federal standards are considered.

  4. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...
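
    As an indication of what the nonparametric (DEA) side involves, the sketch below computes an input-oriented CCR efficiency score for one unit by linear programming (a standard DEA formulation chosen for illustration; the paper's interactive system is not reproduced here):

        import numpy as np
        from scipy.optimize import linprog

        def dea_ccr_efficiency(X, Y, j0):
            # Input-oriented CCR envelopment model for unit j0:
            # min theta  s.t.  X@lam <= theta*x0,  Y@lam >= y0,  lam >= 0.
            # X: (m inputs x n units), Y: (s outputs x n units).
            m, n = X.shape
            s = Y.shape[0]
            c = np.r_[1.0, np.zeros(n)]          # variables z = [theta, lam]
            A_in = np.c_[-X[:, j0], X]           # X@lam - theta*x0 <= 0
            A_out = np.c_[np.zeros(s), -Y]       # -Y@lam <= -y0
            res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[np.zeros(m), -Y[:, j0]],
                          bounds=[(0, None)] * (n + 1))
            return res.fun                       # efficiency score theta*

        X = np.array([[2.0, 4.0, 8.0]])          # one input, three units (toy data)
        Y = np.array([[1.0, 3.0, 5.0]])          # one output
        print(dea_ccr_efficiency(X, Y, j0=0))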

  5. Benchmark risk analysis models

    NARCIS (Netherlands)

    Ale BJM; Golbach GAM; Goos D; Ham K; Janssen LAM; Shield SR; LSO

    2002-01-01

    A so-called benchmark exercise was initiated in which the results of five sets of tools available in the Netherlands would be compared. In the benchmark exercise a quantified risk analysis was performed on a hypothetical, non-existing hazardous establishment located at a randomly chosen location in

  6. The Drill Down Benchmark

    NARCIS (Netherlands)

    P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel

    1998-01-01

    Data Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark - defined here - provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications. It

  7. Internet Based Benchmarking

    OpenAIRE

    Bogetoft, Peter; Nielsen, Kurt

    2002-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as non-parametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore alternative improvement strategies. Implementations of both a parametric and a non-parametric model are presented.

  8. THE PROBLEM OF THE FINALITY OF WORSHIP AND THE STANDARD THOMISTIC ACCOUNT

    Directory of Open Access Journals (Sweden)

    Francisco J. Romero Carrasquillo

    2013-11-01

    This paper is an attempt to introduce the issue of the finality of religious worship into the analytical Thomist tradition. It aims to develop a response, based on an analysis of St. Thomas Aquinas's texts, to the following questions: What is the end of worship? Why do we worship God? What benefit does God derive from our worship? Alternatively, perhaps, is it not ourselves, rather than God, who are the beneficiaries of our own worship? The paper aims to develop what may be called the 'Standard Thomistic Account' as a solution to this problem. In the first part (II), the paper examines the problem of the finality of worship within the context of Classical Theism. Part II presents the current state of the problem in the contemporary secondary literature concerning this issue. In the third part (III), the paper focuses on Cajetan's version of the Standard Thomistic Account, and shows in which respects it needs more nuance to portray Aquinas's complete solution. Finally, Part IV proposes a careful and faithful reading of the texts and lays out the foundations for a new and more nuanced solution to the problem.

  9. Vver-1000 Mox core computational benchmark

    International Nuclear Information System (INIS)

    2006-01-01

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities include benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  10. NEACRP thermal fission product benchmark

    International Nuclear Information System (INIS)

    Halsall, M.J.; Taubman, C.J.

    1989-09-01

    The objective of the thermal fission product benchmark was to compare the range of fission product data in use at the present time. A simple homogeneous problem was set with 200 atoms H/1 atom U235, to be burned for 1000 days and then allowed to decay for 1000 days. The problem was repeated with 200 atoms H/1 atom Pu239, 20 atoms H/1 atom U235 and 20 atoms H/1 atom Pu239. There were ten participants, and the submissions received are detailed in this report. (author)
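
    The burnup-then-decay pattern of the benchmark can be illustrated with a toy single-chain depletion calculation (placeholder physics and rate constants, not the NEACRP specification):

        import numpy as np
        from scipy.integrate import solve_ivp

        # Toy chain: fission of U235 produces product P, which decays to a
        # stable daughter D. All rate constants below are assumed values.
        fission_rate = 1.0e-9   # fissions per initial U235 atom per second
        yield_P = 0.06          # cumulative fission yield of P
        lam = 1.0e-7            # decay constant of P, 1/s

        def rhs(t, y, flux_on):
            u235, P, D = y
            burn = fission_rate * u235 if flux_on else 0.0
            return [-burn, yield_P * burn - lam * P, lam * P]

        day = 86400.0
        # 1000 days at power, then 1000 days of decay, as in the benchmark.
        burn = solve_ivp(rhs, (0, 1000 * day), [1.0, 0.0, 0.0], args=(True,))
        cool = solve_ivp(rhs, (0, 1000 * day), burn.y[:, -1], args=(False,))
        print(cool.y[:, -1])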

  11. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  12. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  13. CBLIB 2014: a benchmark library for conic mixed-integer and continuous optimization

    DEFF Research Database (Denmark)

    Friberg, Henrik Alsing

    2016-01-01

    The Conic Benchmark Library is an ongoing community-driven project aiming to challenge commercial and open source solvers on mainstream cone support. In this paper, 121 mixed-integer and continuous second-order cone problem instances have been selected from 11 categories as representative for the instances available online. Since current file formats were found incapable, we embrace the new Conic Benchmark Format as standard for conic optimization. Tools are provided to aid integration of this format with other software packages.
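
    For a flavour of the problem class CBLIB collects, here is a minimal second-order cone program in Python using cvxpy (an illustrative instance, not one of the 121 library problems, and cvxpy is just one solver interface among many):

        import cvxpy as cp
        import numpy as np

        # minimize c^T x  subject to  ||A x + b||_2 <= d^T x + e
        x = cp.Variable(3)
        c = np.array([1.0, 2.0, 0.5])
        A, b = np.eye(3), np.zeros(3)
        d, e = np.array([0.0, 0.0, 1.0]), 2.0
        prob = cp.Problem(cp.Minimize(c @ x),
                          [cp.norm(A @ x + b, 2) <= d @ x + e])
        prob.solve()
        print(prob.value, x.value)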

  14. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed in which concentrations of contaminants in the environment are compared to no-observed-adverse-effects-level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, NOAEL-based toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest-observed-adverse-effects-level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
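
    The first-tier screening logic described above reduces to a simple comparison, sketched here with placeholder thresholds (the report's actual benchmark values are not reproduced):

        # Minimal sketch of NOAEL-based first-tier screening; thresholds are
        # invented for illustration, not the report's benchmarks (mg/L).
        NOAEL_BENCHMARKS = {"cadmium": 0.005, "mercury": 0.001}

        def screen_contaminants(measured):
            """Retain as contaminants of potential concern (COPCs) any analyte
            whose measured concentration exceeds its NOAEL-based benchmark."""
            return [name for name, conc in measured.items()
                    if conc > NOAEL_BENCHMARKS.get(name, float("inf"))]

        print(screen_contaminants({"cadmium": 0.02, "mercury": 0.0005}))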

  15. Present status of International Criticality Safety Benchmark Evaluation Project (ICSBEP)

    International Nuclear Information System (INIS)

    Miyoshi, Yoshinori

    2000-01-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was designed to identify and evaluate a comprehensive set of critical-experiment benchmark data. The data are compiled into a standardized format by reviewing original and subsequently revised documentation and by calculating each experiment with standard criticality safety codes. Five handbooks of evaluated criticality safety benchmark experiments have been published since 1995. (author)

  16. Problems of the development of international standards of “green building” in Russia

    Science.gov (United States)

    Meshcheryakova, Tatiana

    2017-10-01

    Problems of environmental friendliness and energy efficiency have in recent decades become not only among the most important issues in the economic development of the main industrial economies, but also the basis for maintaining the security and relative stability of the global ecosystem. The article presents the results of a study of the status and trends of the development of environmental standards for the construction and maintenance of real estate in the world, and particularly in Russia. Special market instruments for assessing the compliance of real estate projects under construction with modern principles of environmental friendliness and energy efficiency include voluntary building certification systems, which are actively used in international practice. In Russia the following international certification systems are in active use: BREEAM, LEED, DGNB, HQE. The national standard STO NOSTROY 2.35.4-2011 'Residential and public buildings', which summarizes the best international experience of the rating evaluation procedure, is also being implemented in the Russian certification market. Comparative characteristics of the 'green' standards and the principles of rating assessments of the ecological compatibility of buildings give an idea of how these standards are applied in Russia.

  17. Hydrogen-migration modeling for the EPRI/HEDL standard problems

    International Nuclear Information System (INIS)

    Travis, J.R.

    1982-01-01

    A numerical technique has been developed for calculating the full three-dimensional time-dependent Navier-Stokes equations with multiple species transport. The method is a modified form of the Implicit Continuous-fluid Eulerian (ICE) technique to solve the governing equations for low Mach number flows where pressure waves and local variations in compression and expansion are not significant. Large density variations, due to thermal and species concentration gradients, are accounted for without the restrictions of the classical Boussinesq approximation. Calculations of the EPRI/HEDL standard problems verify the feasibility of using this finite-difference technique for analyzing hydrogen dispersion within LWR containments
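
    A highly simplified 1-D illustration of the species-transport piece (explicit upwind advection plus central diffusion on a periodic grid) is sketched below; the ICE pressure iteration and the full 3-D Navier-Stokes coupling of the actual method are omitted:

        import numpy as np

        def species_step(c, u, D, dx, dt):
            # Advance species mass fractions c (n_species x n_cells) one step,
            # assuming constant velocity u > 0 and diffusivity D, periodic BCs.
            adv = -u * dt / dx * (c - np.roll(c, 1, axis=1))     # upwind advection
            dif = D * dt / dx**2 * (np.roll(c, -1, axis=1)
                                    - 2 * c + np.roll(c, 1, axis=1))
            return c + adv + dif

        c = np.zeros((1, 100))          # one species, initial pulse mid-domain
        c[0, 50] = 1.0
        c = species_step(c, u=1.0, D=1e-3, dx=0.01, dt=0.005)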

  18. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associate XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  19. On Big Data Benchmarking

    OpenAIRE

    Han, Rui; Lu, Xiaoyi

    2014-01-01

    Big data systems address the challenges of capturing, storing, managing, analyzing, and visualizing big data. Within this context, developing benchmarks to evaluate and compare big data systems has become an active topic for both research and industry communities. To date, most of the state-of-the-art big data benchmarks are designed for specific types of systems. Based on our experience, however, we argue that considering the complexity, diversity, and rapid evolution of big data systems, fo...

  20. Fundamental challenging problems for developing new nuclear safety standard computer codes

    International Nuclear Information System (INIS)

    Wong, P.K.; Wong, A.E.; Wong, A.

    2005-01-01

    Based on the claims of US patents 5,084,232, 5,848,377 and 6,430,516 (which can be retrieved by typing the patent numbers into the box at http://164.195.100.11/netahtml/srchnum.htm) and on the associated technical papers presented and published at international conferences in the last three years, all of which were sent to the US-NRC by e-mail on March 26, 2003 at 2:46 PM, three fundamental challenging problems for developing new nuclear safety standard computer codes were presented at the US-NRC RIC2003 Session W4 (2:15-3:15 PM) in the Presidential Ballroom of the Washington D.C. Capital Hilton Hotel on April 16, 2003, before more than 800 nuclear professionals from many countries worldwide. The objective and scope of this paper is to invite all nuclear professionals to examine and evaluate the computer codes currently used in their own countries by comparing numerical data for these three openly challenging fundamental problems, in order to establish a global safety standard for all nuclear power plants in the world. (authors)

  1. Solving the flavour problem in supersymmetric Standard Models with three Higgs families

    International Nuclear Information System (INIS)

    Howl, R.; King, S.F.

    2010-01-01

    We show how a non-Abelian family symmetry Δ(27) can be used to solve the flavour problem of supersymmetric Standard Models containing three Higgs families, such as the Exceptional Supersymmetric Standard Model (E6SSM). The three 27-dimensional families of the E6SSM, including the three families of Higgs fields, transform in a triplet representation of the Δ(27) family symmetry, allowing the family symmetry to commute with a possible high-energy E6 symmetry. The Δ(27) family symmetry here provides a high-energy understanding of the Z_2^H symmetry of the E6SSM, which solves the flavour-changing neutral-current problem of the three families of Higgs fields. The main phenomenological predictions of the model are tri-bi-maximal mixing for leptons, two almost degenerate LSPs and two almost degenerate families of colour-triplet D-fermions, providing a clear prediction for the LHC. In addition, the model predicts PGBs with masses below the TeV scale, and possibly much lighter, which appears to be a quite general and robust prediction of all models based on the D-term vacuum alignment mechanism.

  2. WIPP Benchmark calculations with the large strain SPECTROM codes

    Energy Technology Data Exchange (ETDEWEB)

    Callahan, G.D.; DeVries, K.L. [RE/SPEC, Inc., Rapid City, SD (United States)

    1995-08-01

    This report provides calculational results from the updated Lagrangian structural finite-element programs SPECTROM-32 and SPECTROM-333 for the purpose of qualifying these codes to perform analyses of structural situations in the Waste Isolation Pilot Plant (WIPP). Results are presented for the Second WIPP Benchmark (Benchmark II) problems and for a simplified heated room problem used in a parallel design calculation study. The Benchmark II problems consist of an isothermal room problem and a heated room problem. The stratigraphy involves 27 distinct geologic layers, including ten clay seams, of which four are modeled as frictionless sliding interfaces. The analyses of the Benchmark II problems consider a 10-year simulation period. The evaluation of nine structural codes used in the Benchmark II problems shows that the inclusion of finite-strain effects is not as significant as observed for the simplified heated room problem, and a variety of finite-strain and small-strain formulations produced similar results. The simplified heated room problem provides stratigraphic complexity equivalent to the Benchmark II problems but neglects sliding along the clay seams. It does, however, provide a calculational check case in which the small-strain formulation produced room closures about 20 percent greater than those obtained using finite-strain formulations. A discussion is given of each of the solved problems, and the computational results are compared with available published results. In general, the results of the two SPECTROM large-strain codes compare favorably with results from other codes used to solve the problems.

  3. CSNI international standard problem procedures - CSNI Report No. 17 - Revision 4

    International Nuclear Information System (INIS)

    Micaelli, J.C.

    2004-01-01

    Assessing the safety of a nuclear installation requires the use of a number of highly specialised tools: computer codes, experimental facilities and their instrumentation, special measurement techniques, methods for testing materials and components, and so on. These tools may vary to some extent in different countries, and many of them are extremely complex and costly to produce and use. A highly effective way of increasing confidence in the validity and accuracy of such tools is provided by International Standard Problem (ISP) exercises, in which they are gauged against one another and/or against an agreed standard. For example, the predictions of different computer codes for a given physical problem may be compared with each other and with the results of a carefully controlled experimental study, which could also be a real plant transient. This kind of comparative exercise is clearly suitable for an international venture. CSNI is of the opinion that ISP exercises are useful and should be continued. ISPs are performed as 'open' or 'blind' problems. In an open problem, the results of the experiment are available to the participants before the exercise is evaluated; in a blind problem, the results of the experiment are not made known to the participants until after delivery of the calculated results. Depending on the kind of experiment and its objectives, certain boundary and initial conditions of the experiment are communicated to the participants before they start the exercise. This is necessary where it is difficult to guarantee the reproducibility of experiments. For all ISPs the participants are provided with a complete description of the experimental facility. The Lead Country (proposing the ISP) must decide whether the data can be withheld temporarily (blind ISP) or whether the data will be published before the analysis of the participating countries is completed (open ISP). It is recommended that ISPs be conducted blind, where possible. ISPs require a considerable expenditure of resources.

  4. PNNL Information Technology Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  5. Benchmarking infrastructure for mutation text mining.

    Science.gov (United States)

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
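
    As a minimal illustration of computing performance metrics with SPARQL over RDF annotations, the Python sketch below uses rdflib with an invented toy vocabulary; the infrastructure's actual ontology and queries are not reproduced here.

        from rdflib import Graph, Namespace

        EX = Namespace("http://example.org/ann#")   # hypothetical annotation vocabulary

        g = Graph()
        # toy gold-standard and system mutation mentions (normally loaded from RDF corpora)
        g.add((EX.m1, EX.foundBy, EX.gold))
        g.add((EX.m1, EX.foundBy, EX.system))
        g.add((EX.m2, EX.foundBy, EX.gold))
        g.add((EX.m3, EX.foundBy, EX.system))

        def count(query):
            return len(list(g.query(query, initNs={"ex": EX})))

        tp = count("SELECT ?m WHERE { ?m ex:foundBy ex:gold . ?m ex:foundBy ex:system }")
        print("precision", tp / count("SELECT ?m WHERE { ?m ex:foundBy ex:system }"),
              "recall", tp / count("SELECT ?m WHERE { ?m ex:foundBy ex:gold }"))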

  6. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600

  7. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills in simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure the performance of models against a set of defined standards. This paper proposes a benchmarking framework for the evaluation of land model performance and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics for measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
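
    As a toy illustration of the second challenge, the Python sketch below turns a data-model mismatch into a single skill score and a pass/fail flag against an a priori threshold. The metric (normalized RMSE mapped through an exponential) and all numbers are invented for illustration and are not the framework's metrics.

        import numpy as np

        def benchmark_score(model, obs, threshold=1.0):
            rmse = np.sqrt(np.mean((model - obs) ** 2))
            nrmse = rmse / np.std(obs)       # mismatch relative to observed variability
            return np.exp(-nrmse), nrmse <= threshold

        obs = np.sin(np.linspace(0.0, 6.28, 100))      # stand-in observed carbon flux
        model = obs + 0.1 * np.random.default_rng(0).normal(size=100)
        print(benchmark_score(model, obs))             # (score in (0, 1], pass/fail)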

  8. [Problems and ways of solutions to harmonize standards for air pollution].

    Science.gov (United States)

    Avaliani, S L; Novikov, S M; Shashina, T A; Skvortsova, N S; Kislitsin, V A; Mishina, A L

    2012-01-01

    In the article, the basic problems of harmonizing the domestic regulatory framework for air pollution with WHO recommendations and with the normative values adopted in the EU, U.S. and other countries are considered. The important role of the health risk analysis methodology in harmonizing the regulation and control of air quality is pointed out. The necessity of radical changes in the structure and content of the basic normative document GN 2.1.6.1338-03, "Maximum permissible concentrations (MPC) of pollutants in the air of populated areas", is shown. An algorithm is proposed for the procedure that justifies a new list of normative values for air pollutants harmonized with international recommendations and the standards of developed countries.

  9. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    A library of benchmark problems of varying sizes is developed for this benchmarking. The problems are based on a material interpolation scheme combined with a density filter. Different optimization solvers, including Optimality Criteria (OC), the Method of Moving Asymptotes (MMA) and its globally convergent version GCMMA, the interior point solvers in IPOPT and FMINCON, and the sequential quadratic programming method in SNOPT, are benchmarked on the library using performance profiles. Whenever possible the methods are applied to both the nested and the Simultaneous Analysis and Design (SAND) formulations of the problem. The performance profiles conclude that general solvers are as efficient and reliable as classical structural topology optimization solvers. Moreover, the use of exact Hessians in SAND formulations generally produces designs with better objective function values. However, with the benchmarked implementations, solving...
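
    As an aside on the evaluation methodology, the sketch below computes performance profiles in the style of Dolan and More from a small invented cost table: rho[s](tau) is the fraction of problems that solver s solves within a factor tau of the best solver, with failures encoded as infinite cost. The data are illustrative, not the paper's results.

        import numpy as np

        def performance_profile(T, taus):
            # T[i, s]: cost of solver s on problem i (np.inf marks a failure)
            best = T.min(axis=1, keepdims=True)
            ratios = T / best
            return np.array([[np.mean(ratios[:, s] <= tau) for s in range(T.shape[1])]
                             for tau in taus])

        T = np.array([[1.0, 1.2, 3.0],      # toy table: 5 problems x 3 solvers
                      [2.0, 1.0, 2.5],
                      [1.5, np.inf, 1.5],
                      [4.0, 3.0, 1.0],
                      [1.0, 1.1, 2.0]])
        print(performance_profile(T, taus=[1.0, 2.0, 4.0]))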

  10. Drowning--a scientometric analysis and data acquisition of a constant global problem employing density equalizing mapping and scientometric benchmarking procedures.

    Science.gov (United States)

    Groneberg, David A; Schilling, Ute; Scutaru, Cristian; Uibel, Stefanie; Zitnik, Simona; Mueller, Daniel; Klingelhoefer, Doris; Kloft, Beatrix

    2011-10-14

    Drowning is a constant global problem which claims approximately half a million victims worldwide each year, and the number of near-drowning victims is considerably higher. Public health strategies to reduce the burden of death are still limited. Although research activity on the subject of drowning grows constantly, there is as yet no scientometric evaluation of the existing literature. The current study uses classical bibliometric tools and visualizing techniques such as density equalizing mapping to analyse and evaluate the scientific research in the field of drowning. The results are also interpreted in the context of the WHO's data collection. All studies related to drowning and listed in the ISI Web of Science database since 1900 were identified using the search term "drowning". Applying bibliometric methods, a constant increase in quantitative markers, such as the number of publications per state, publication language or collaborations, as well as in qualitative markers such as citations, was observed for research in the field of drowning. The combination with density equalizing mapping exposed different global patterns for research productivity and for the total number of drowning deaths and drowning rates, respectively. Chart techniques were used to illustrate bi- and multilateral research cooperation. The present study provides the first scientometric approach that visualizes research activity on the subject of drowning. It can be assumed that the scientific approach to this topic will achieve even greater dimensions because of its continuing actuality.

  11. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions, founders' human capital, and the ownership structure of startups. Deviations from the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...

  12. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...

  13. A Comparison of a Standard Genetic Algorithm with a Hybrid Genetic Algorithm Applied to Cell Formation Problem

    Directory of Open Access Journals (Sweden)

    Waqas Javaid

    2014-09-01

    Though there are a number of benefits associated with cellular manufacturing systems, their implementation (identification of part families and corresponding machine groups) for real-life problems is still a challenging task. To handle the complexity of optimizing multiple objectives and larger problem sizes, most researchers over the past two decades or so have focused on developing genetic algorithm (GA) based techniques. Recently this trend has shifted from standard GAs to hybrid GA (HGA) based approaches in the quest for greater effectiveness as far as convergence to the optimum solution is concerned. To demonstrate that HGAs possess better convergence abilities than standard GAs, a methodology initially based on a standard GA and later hybridized with a local search heuristic (LSH) was developed during this research. Computational experience shows that the HGA maintains its accuracy level as the problem size increases, whereas the standard GA loses its effectiveness as the problem size grows.
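
    A minimal sketch of the hybrid idea, assuming a toy binary objective in place of a real cell-formation measure such as grouping efficacy: a standard GA generates children by crossover and mutation, and the hybrid variant additionally refines each child with a bit-flip local search. All parameters are illustrative assumptions.

        import random

        def fitness(bits):            # toy stand-in for a cell-formation objective
            return sum(bits)

        def local_search(bits):       # single-pass bit-flip hill climbing (the LSH step)
            for i in range(len(bits)):
                trial = bits[:]
                trial[i] ^= 1
                if fitness(trial) > fitness(bits):
                    bits = trial
            return bits

        def ga(n_bits=40, pop_size=30, gens=40, pm=0.02, hybrid=False):
            pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=fitness, reverse=True)
                parents = pop[: pop_size // 2]
                children = []
                while len(parents) + len(children) < pop_size:
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, n_bits)
                    child = a[:cut] + b[cut:]                             # one-point crossover
                    child = [g ^ (random.random() < pm) for g in child]   # bit-flip mutation
                    children.append(local_search(child) if hybrid else child)
                pop = parents + children
            return max(map(fitness, pop))

        random.seed(0)
        print("GA:", ga(), " HGA:", ga(hybrid=True))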

  14. Problems of standardizing and technical regulation in the electric power industry

    Science.gov (United States)

    Grabchak, E. P.

    2016-12-01

    A mandatory condition for ensuring the normal operation of a power system, and for efficiency in the sector, is the standardization and legal regulation of the technological activities of electric power entities and consumers. Compared to the Soviet era, present-day technical guidance documents are in most cases not mandatory, being of an advisory nature, and new ones are lacking. During the last five years the industry has shown a deterioration in reliability and engineering controllability, as a result of the dominant influence of short-term market stimuli and of differences in basic technological policies. In the absence of clear requirements regarding the engineering aspects of such activities, production operation does not contribute to preserving the technical integrity of the Russian power system, which leads to the loss of performance capability and controllability and causes disturbances in the power supply to consumers. The result of this problem is a high rate of accidents. The dynamics of accidents by type of equipment are given, indicating a persisting growth in the number of accidents, which are of a systematic nature. Several problematic aspects of the engineering activities of electric power entities that require standardization and legal regulation are pointed out: in the domestic power system, a large amount of power electrotechnical and generating equipment, along with regulation systems, operates in ways that do not comply with the principles and technical rules on which the Energy System of Russia is built and functions.

  15. Pre-test CFD Calculations for a Bypass Flow Standard Problem

    Energy Technology Data Exchange (ETDEWEB)

    Rich Johnson

    2011-11-01

    The bypass flow in a prismatic high temperature gas-cooled reactor (HTGR) is the flow that occurs between adjacent graphite blocks. Gaps exist between blocks due to variances in their manufacture and installation and because of the expansion and shrinkage of the blocks from heating and irradiation. Although the temperature of fuel compacts and graphite is sensitive to the presence of bypass flow, there is great uncertainty in the magnitude and effects of the bypass flow. The Next Generation Nuclear Plant (NGNP) program at the Idaho National Laboratory has undertaken to produce experimental data on isothermal bypass flow between three adjacent graphite blocks. These data are intended to provide validation for computational fluid dynamics (CFD) analyses of the bypass flow. Such validation data sets are called Standard Problems in the nuclear safety analysis field. Details of the experimental apparatus as well as several pre-test calculations of the bypass flow are provided. Pre-test calculations are useful for examining the nature of the flow and for identifying any problems associated with the flow and its measurement. The apparatus is designed to provide three different gap widths in the vertical direction (the direction of the normal coolant flow) and two gap widths in the horizontal direction. It is expected that the vertical bypass flow will range from laminar to transitional to turbulent for the different gap widths that will be available.
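
    For a rough sense of the flow regimes involved, the sketch below estimates gap Reynolds numbers from assumed gap widths and bulk velocities; every number here is an illustrative assumption, not a value from the NGNP experiment.

        rho, mu = 1.2, 1.8e-5          # air near room temperature [kg/m^3], [Pa s]
        depth = 0.5                    # assumed gap depth transverse to the flow [m]
        for gap in (0.002, 0.006, 0.010):                     # assumed gap widths [m]
            d_h = 4.0 * gap * depth / (2.0 * (gap + depth))   # hydraulic diameter
            for v in (0.5, 2.0, 8.0):                         # assumed bulk velocities [m/s]
                re = rho * v * d_h / mu
                regime = ("laminar" if re < 2300 else
                          "transitional" if re < 4000 else "turbulent")
                print(f"gap={gap*1e3:.0f} mm  v={v} m/s  Re={re:.0f}  {regime}")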

  16. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating, and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation, with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no-slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element, and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2 a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the whole sphere problem, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  17. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area...

  18. HPCG Benchmark Technical Specification

    Energy Technology Data Exchange (ETDEWEB)

    Heroux, Michael Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States)

    2013-10-01

    The High Performance Conjugate Gradient (HPCG) benchmark [cite SNL, UTK reports] is a tool for ranking computer systems based on a simple additive Schwarz, symmetric Gauss-Seidel preconditioned conjugate gradient solver. HPCG is similar to the High Performance Linpack (HPL), or Top 500, benchmark [1] in its purpose, but HPCG is intended to better represent how today’s applications perform. In this paper we describe the technical details of HPCG: how it is designed and implemented, what code transformations are permitted and how to interpret and report results.
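
    A minimal sketch of the solver class HPCG exercises: conjugate gradients preconditioned with one symmetric Gauss-Seidel sweep, here applied to a small 1-D Poisson system. The matrix, sizes and tolerances are arbitrary stand-ins; this is not the benchmark's reference implementation, which fixes a specific 3-D problem, data layout and reporting rules.

        import numpy as np

        def sgs_apply(A, r):
            # z = M^{-1} r with M = (D+L) D^{-1} (D+U), the symmetric Gauss-Seidel preconditioner
            n = len(r)
            D = np.diag(A)
            y = np.zeros(n)
            for i in range(n):                       # forward solve (D+L) y = r
                y[i] = (r[i] - A[i, :i] @ y[:i]) / D[i]
            w = D * y
            z = np.zeros(n)
            for i in range(n - 1, -1, -1):           # backward solve (D+U) z = D y
                z[i] = (w[i] - A[i, i + 1:] @ z[i + 1:]) / D[i]
            return z

        def pcg(A, b, tol=1e-10, maxit=500):
            x = np.zeros_like(b)
            r = b - A @ x
            z = sgs_apply(A, r)
            p, rz = z.copy(), r @ z
            for _ in range(maxit):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                z = sgs_apply(A, r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        n = 50                                       # toy SPD system: 1-D Poisson matrix
        A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        print(np.linalg.norm(A @ pcg(A, b) - b))     # residual near zero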

  19. Glassy Chimeras Could Be Blind to Quantum Speedup: Designing Better Benchmarks for Quantum Annealing Machines

    Directory of Open Access Journals (Sweden)

    Helmut G. Katzgraber

    2014-04-01

    Recently, a programmable quantum annealing machine has been built that minimizes the cost function of hard optimization problems by, in principle, adiabatically quenching quantum fluctuations. Tests performed by different research teams have shown that, indeed, the machine seems to exploit quantum effects. However, experiments on a class of random-bond instances have not yet demonstrated an advantage over classical optimization algorithms on traditional computer hardware. Here, we present evidence as to why this might be the case. These engineered quantum annealing machines effectively operate coupled to a decohering thermal bath. Therefore, we study the finite-temperature critical behavior of the standard benchmark problem used to assess the computational capabilities of these complex machines. We simulate both random-bond Ising models and spin glasses with bimodal and Gaussian disorder on the D-Wave Chimera topology. Our results show that while the worst-case complexity of finding a ground state of an Ising spin glass on the Chimera graph is not polynomial, the finite-temperature phase space is likely rather simple because spin glasses on Chimera have only a zero-temperature transition. This means that spin glasses on the Chimera graph might not be the best benchmark problems for testing quantum speedup. We propose alternative benchmarks by embedding potentially harder problems on the Chimera topology. Finally, we also study the (reentrant) disorder-temperature phase diagram of the random-bond Ising model on the Chimera graph and show that a finite-temperature ferromagnetic phase is stable up to 19.85(15)% antiferromagnetic bonds. Beyond this threshold, the system only displays a zero-temperature spin-glass phase. Our results therefore show that a careful design of the hardware architecture and of the benchmark problems is key when building quantum annealing machines.
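
    For concreteness, the sketch below generates a bimodal random-bond Ising instance and evaluates its energy, using a square grid as a simple stand-in for the Chimera topology; the grid, seed and sizes are illustrative assumptions only.

        import numpy as np

        rng = np.random.default_rng(1)

        def random_bond_instance(n=8, p_afm=0.5):
            # +/-J couplings on an n x n grid; p_afm = fraction of J = -1 (antiferromagnetic) bonds
            horiz = rng.choice([1, -1], size=(n, n - 1), p=[1.0 - p_afm, p_afm])
            vert = rng.choice([1, -1], size=(n - 1, n), p=[1.0 - p_afm, p_afm])
            return horiz, vert

        def energy(spins, horiz, vert):
            # H = -sum over edges of J_ij s_i s_j
            e = -np.sum(horiz * spins[:, :-1] * spins[:, 1:])
            return e - np.sum(vert * spins[:-1, :] * spins[1:, :])

        horiz, vert = random_bond_instance()
        spins = rng.choice([1, -1], size=(8, 8))      # random configuration to score
        print(energy(spins, horiz, vert))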

  20. Verification of the code DYN3D/R with the help of international benchmarks

    International Nuclear Information System (INIS)

    Grundmann, U.; Rohde, U.

    1997-10-01

    Different benchmarks for reactors with quadratic fuel assemblies were calculated with the code DYN3D/R. In this report, comparisons with the results of the reference solutions are carried out. The results of DYN3D/R and of the reference calculation for the eigenvalue k_eff and the power distribution are shown for the steady-state, 3-dimensional IAEA benchmark. The results of the NEACRP benchmarks on control rod ejections in a standard PWR were compared with the reference solutions published by the NEA Data Bank. For assessing the accuracy of the DYN3D/R results in comparison to other codes, the deviations from the reference solutions are considered. Detailed comparisons with the published reference solutions of the NEA-NSC benchmarks on uncontrolled withdrawal of control rods are made. The influence of the axial nodalization is also investigated. All in all, a good agreement of the DYN3D/R results with the reference solutions can be seen for the considered benchmark problems. (orig.)

  1. Pupils' Visual Representations in Standard and Problematic Problem Solving in Mathematics: Their Role in the Breach of the Didactical Contract

    Science.gov (United States)

    Deliyianni, Eleni; Monoyiou, Annita; Elia, Iliada; Georgiou, Chryso; Zannettou, Eleni

    2009-01-01

    This study investigated the modes of representations generated by kindergarteners and first graders while solving standard and problematic problems in mathematics. Furthermore, it examined the influence of pupils' visual representations on the breach of the didactical contract rules in problem solving. The sample of the study consisted of 38…

  2. Why and How to Benchmark XML Databases

    NARCIS (Netherlands)

    A.R. Schmidt; F. Waas; M.L. Kersten (Martin); D. Florescu; M.J. Carey; I. Manolescu; R. Busse

    2001-01-01

    Benchmarks belong to the very standard repertory of tools deployed in database development. Assessing the capabilities of a system, analyzing actual and potential bottlenecks, and, naturally, comparing the pros and cons of different systems architectures have become indispensable tasks.

  3. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    The benchmark takes into account that the available positions of the moving objects are inaccurate, an aspect largely ignored in previous indexing research. The concepts of data and query enlargement are introduced for addressing inaccuracy. As proof of concept of the benchmark, the paper covers the application...

  4. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means revealed performance (how well the firm performs in the actual market environment) given the basic characteristics of the firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality and work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for management to continuously improve the firm's efficiency and effectiveness, and the need for managers to know the success factors and competitiveness determinants, determine in consequence which performance measures are most critical to the firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables used to infer performance are often interdependent for operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.

  5. Benchmarking of workplace performance

    NARCIS (Netherlands)

    van der Voordt, Theo; Jensen, Per Anker

    2017-01-01

    This paper aims to present a process model of value adding corporate real estate and facilities management and to discuss which indicators can be used to measure and benchmark workplace performance.

    In order to add value to the organisation, the work environment has to provide value for

  6. Case mix classification and a benchmark set for surgery scheduling

    NARCIS (Netherlands)

    Leeftink, Gréanne; Hans, Erwin W.

    Numerous benchmark sets exist for combinatorial optimization problems. However, in healthcare scheduling, only a few benchmark sets are known, mainly focused on nurse rostering. One of the most studied topics in the healthcare scheduling literature is surgery scheduling, for which there is no widely

  7. The suite of analytical benchmarks for neutral particle transport in infinite isotropically scattering media

    International Nuclear Information System (INIS)

    Kornreich, D.E.; Ganapol, B.D.

    1997-01-01

    The linear Boltzmann equation for the transport of neutral particles is investigated with the objective of generating benchmark-quality evaluations of solutions for homogeneous infinite media. In all cases, the problems are stationary, of one energy group, and the scattering is isotropic. The solutions are generally obtained through the use of Fourier transform methods with the numerical inversions constructed from standard numerical techniques such as Gauss-Legendre quadrature, summation of infinite series, and convergence acceleration. Consideration of the suite of benchmarks in infinite homogeneous media begins with the standard one-dimensional problems: an isotropic point source, an isotropic planar source, and an isotropic infinite line source. The physical and mathematical relationships between these source configurations are investigated. The progression of complexity then leads to multidimensional problems with source configurations that also emit particles isotropically: the finite line source, the disk source, and the rectangular source. The scalar flux from the finite isotropic line and disk sources will have a two-dimensional spatial variation, whereas a finite rectangular source will have a three-dimensional variation in the scalar flux. Next, sources emitting particles anisotropically are considered. The most basic such source is the point beam giving rise to the Green's function, which is physically the most fundamental transport problem, yet may be constructed from the isotropic point source solution. Finally, the anisotropic plane and anisotropically emitting infinite line sources are considered. Thus, a firm theoretical and numerical base is established for the most fundamental neutral particle benchmarks in infinite homogeneous media
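
    A minimal numerical sketch of the plane-source case, assuming mean-free-path units and scattering ratio c: the collided flux is recovered by Gauss-Legendre inversion of its Fourier transform c*f0(k)^2/(1 - c*f0(k)), where f0(k) = arctan(k)/k is the transform of the uncollided flux, and the uncollided part 0.5*E1(|x|) is added analytically. The quadrature here is deliberately coarse and is not benchmark-quality.

        import numpy as np
        from scipy.special import exp1

        def plane_source_flux(x, c=0.9, K=200.0, n=2000):
            # scalar flux |x| mean free paths from an isotropic plane source, infinite medium
            nodes, weights = np.polynomial.legendre.leggauss(n)
            k = 0.5 * K * (nodes + 1.0)              # map [-1, 1] -> [0, K]
            w = 0.5 * K * weights
            f0 = np.arctan(k) / k                    # transform of the uncollided flux
            fc = c * f0**2 / (1.0 - c * f0)          # transform of the collided flux
            collided = np.sum(w * np.cos(k * x) * fc) / np.pi
            return 0.5 * exp1(abs(x)) + collided

        for x in (0.5, 1.0, 2.0):
            print(x, plane_source_flux(x))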

  8. International standard problem ISP-47 on containment thermal hydraulics - Final report

    International Nuclear Information System (INIS)

    Allelein, H. J.; Schwarz, S.; Fischer, K.; Vendel, J.; Malet, J.; Bentaib, A.; Studer, E.; Paillere, H.; Houkema, M.

    2007-01-01

    The main objective of ISP-47 is to assess the capabilities of Lumped Parameter (LP) and Computational Fluid Dynamics (CFD) codes in the area of containment thermal-hydraulics. Following the recommendations made in the state-of-the-art report on 'Containment Thermal-hydraulics and Hydrogen Distribution', this ISP was based on the application of different complementary experimental facilities and on progressively increasing modelling difficulty. The three experimental facilities TOSQAN, MISTRA and ThAI have shown the quality of the provided experimental data, suitable for CFD and LP code benchmarking in steady-state and transient conditions (control of the initial and boundary conditions, and accuracy of the measurement techniques). This mainly includes pressure transients and gas temperature fields, as in former exercises. Detailed gas velocity and gas concentration (air, steam and helium) fields were obtained for the first time in such an exercise. ISP-47 was executed in two main steps. Step 1 was dedicated to the validation of the codes in the separate-effects facility TOSQAN (7 m³). Wall condensation, steam injection into air or air/helium atmospheres, and buoyancy were addressed under well-controlled initial conditions in a simple geometry. Furthermore, the interactions of phenomena such as condensation/stratification and turbulence/buoyancy were addressed at the larger scale of the MISTRA (100 m³) facility. Both TOSQAN and MISTRA were specifically designed to produce data for CFD codes with state-of-the-art instrumentation. The TOSQAN benchmark was open, whereas the MISTRA benchmark was blind. Step 2 addressed code assessment using an experiment in the multi-compartment ThAI (60 m³) facility with different steam and helium injection phases, transient stratification and mixing conditions in the atmosphere, development of natural convection, wall condensate distribution, fog formation, and the transient thermal response of heat-conducting walls. Detailed

  9. Verification of thermal-hydraulic computer codes against standard problems for WWER reflooding

    International Nuclear Information System (INIS)

    Alexander D Efanov; Vladimir N Vinogradov; Victor V Sergeev; Oleg A Sudnitsyn

    2005-01-01

    The computational assessment of reactor core component behavior under accident conditions is impossible without knowledge of the thermal-hydraulic processes occurring in this case. The adequacy of the results obtained using computer codes to the real processes is verified by carrying out a number of standard problems. In 2000-2003, three Russian standard problems on WWER core reflooding were carried out, using experiments on the cooldown of a full-height, electrically heated, 37-rod WWER bundle model in regimes of bottom (SP-1), top (SP-2) and combined (SP-3) reflooding. Representatives from eight MINATOM organizations took part in this work, in the course of which 'blind' and post-test calculations were performed using various versions of the RELAP5, ATHLET, CATHARE, COBRA-TF, TRAP and KORSAR computer codes. The paper presents a brief description of the test facility, test section, test scenarios and conditions, as well as the basic results of the computational analysis of the experiments. The analysis of the test data revealed a significantly non-one-dimensional nature of the cooldown and rewetting of heater rods heated to high temperature in a model bundle. This was most pronounced for top and combined reflooding. The verification of the reflooding computer codes showed that most of them fairly predict the peak rod temperature and the time of bundle cooldown; the exceptions are the results of calculations with the ATHLET and CATHARE codes. The nature and rate of rewetting front advance in the lower half of the bundle are fairly predicted by practically all the computer codes. The disagreement between the calculations and the experimental results for the upper half of the bundle is caused by the difficulty of simulating multidimensional effects with 1-D computer codes. In this regard, the quasi-two-dimensional computer code COBRA-TF offers certain advantages. Overall, the closest

  10. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared performance of software-only to GPU-accelerated. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows
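
    As a small illustration of the graph benchmark's kernel, the sketch below performs breadth-first level-set expansion from a seed set; in the out-of-core setting the adjacency lists would be streamed from disk rather than held in a dictionary. The graph is a toy example.

        def level_set_expansion(adj, seeds, max_levels=3):
            # level k holds the vertices first reached at distance k from the seeds
            visited = set(seeds)
            frontier = list(seeds)
            levels = [list(seeds)]
            for _ in range(max_levels):
                nxt = []
                for u in frontier:
                    for v in adj.get(u, ()):
                        if v not in visited:
                            visited.add(v)
                            nxt.append(v)
                if not nxt:
                    break
                levels.append(nxt)
                frontier = nxt
            return levels

        adj = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5]}
        print(level_set_expansion(adj, [0]))   # [[0], [1, 2], [3, 4], [5]]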

  11. Results of international standard problem No. 36 severe fuel damage experiment of a VVER fuel bundle

    Energy Technology Data Exchange (ETDEWEB)

    Firnhaber, M. [Gesellschaft fuer Anlagen-und Reaktorsicherheit, Koeln (Germany); Yegorova, L. [Nuclear Safety Institute of Russian Research Center, Moscow (Russian Federation); Brockmeier, U. [Ruhr-Univ. of Bochum (Germany)] [and others

    1995-09-01

    International Standard Problems (ISP), organized by the OECD, are defined as comparative exercises in which the predictions of different computer codes for a given physical problem are compared with each other and with a carefully controlled experimental study. The main goal of ISPs is to increase confidence in the validity and accuracy of the analytical tools used in assessing the safety of nuclear installations. In addition, they enable code users to gain experience and improve their competence. This paper presents the results and assessment of ISP No. 36, which deals with the early core degradation phase during an unmitigated severe LWR accident in a Russian-type VVER. Representatives of 17 organizations participated in the ISP, using the codes ATHLET-CD, ICARE2, KESS-III, MELCOR, SCDAP/RELAP5 and RAPTA. Some participants performed several calculations with different codes. The severe fuel damage experiment CORA-W2 was selected as the experimental basis. The main phenomena investigated are the thermal behavior of fuel rods, the onset of temperature escalation, material behavior and hydrogen generation. In general, the calculations give the right tendency of the experimental results for the thermal behavior, the hydrogen generation and, partly, the material behavior. However, some calculations deviate in important quantities, e.g. some material behavior data, showing remarkable discrepancies between each other and from the experiments. The temperature history of the bundle up to the beginning of significant oxidation was calculated quite well; deviations seem to be related to the overall heat balance. Since the material behavior of the bundle is to a great extent influenced by the cladding failure criteria, a more realistic cladding failure model should be developed, at least for the detailed, mechanistic codes. Regarding the material behavior and flow blockage, some models for the material interaction as well as for relocation and refreezing require further improvement.

  12. Standardized benchmarking in the quest for orthologs

    DEFF Research Database (Denmark)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador

    2016-01-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall...

  13. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking...

  14. Comparison Report of Open Calculations for ATLAS Domestic Standard Problem (DSP-01)

    International Nuclear Information System (INIS)

    Choi, Ki Yong; Kim, Y. S.; Kang, K. H.

    2010-06-01

    KAERI (Korea Atomic Energy Research Institute) has been operating an integral effect test facility, the ATLAS (Advanced Thermal-Hydraulic Test Loop for Accident Simulation), for accident simulations of advanced pressurized water reactors (PWRs). As an integral effect test database for major design basis accidents has been accumulated, a Domestic Standard Problem (DSP) exercise using the ATLAS was proposed in order to transfer the database to domestic nuclear industries and to contribute to improving the safety analysis methodology for PWRs. This ATLAS DSP exercise was led by KAERI in collaboration with KINS, and it was the first-ever such exercise in Korea. The exercise aims at effective utilization of the integral effect database obtained from the ATLAS, establishment of a cooperation framework among the domestic nuclear industry, better understanding of thermal-hydraulic phenomena, and investigation of the possible limitations of the existing best-estimate safety analysis codes. As the first DSP exercise, a 100% break scenario of the DVI nozzle was selected by considering its technical importance and by incorporating comments from participants. Twelve domestic organizations, including universities, government organizations and nuclear industries, joined this DSP exercise; finally, ten of them submitted their calculation results. This first DSP exercise was performed in an open calculation environment; the integral effect test data were open to participants prior to the code calculations. This report includes all information on the first DSP exercise (DSP-01) as well as comparison results between the calculations and the experimental data.

  15. Second ATLAS Domestic Standard Problem (DSP-02) For A Code Assessment

    International Nuclear Information System (INIS)

    Kim, Yeonsik; Choi, Kiyong; Cho, Seok; Park, Hyunsik; Kang, Kyungho; Song, Chulhwa; Baek, Wonpil

    2013-01-01

    KAERI (Korea Atomic Energy Research Institute) has been operating an integral effect test facility, the Advanced Thermal-Hydraulic Test Loop for Accident Simulation (ATLAS), for transient and accident simulations of advanced pressurized water reactors (PWRs). Using ATLAS, a high-quality integral effect test database has been established for major design basis accidents of the APR1400 plant. A Domestic Standard Problem (DSP) exercise using the ATLAS database was promoted to transfer the database to domestic nuclear industries and to contribute to improving the safety analysis methodology for PWRs. This 2nd ATLAS DSP (DSP-02) exercise aims at an effective utilization of the integral effect database obtained from ATLAS, the establishment of a cooperation framework among the domestic nuclear industry, a better understanding of the thermal-hydraulic phenomena, and an investigation into the possible limitations of the existing best-estimate safety analysis codes. A small break loss of coolant accident with a 6-inch break at the cold leg was determined as the target scenario by considering its technical importance and by incorporating interests from participants. This DSP exercise was performed in an open calculation environment where the integral effect test data were open to participants prior to the code calculations. This paper includes major information on the DSP-02 exercise as well as comparison results between the calculations and the experimental data.

  16. On physics of the hydrogen plasticization and embrittlement of metallic materials, relevance to the safety and standards' problems

    International Nuclear Information System (INIS)

    Yury S Nechaev; Georgy A Filippov; T Nejat Veziroglu

    2006-01-01

    In the present contribution, related fundamental problems of revealing the micromechanisms of hydrogen plasticization, superplasticity, embrittlement, cracking, blistering and delayed fracture of some technologically important industrial metallic materials are formulated. Ways of solving these problems and of optimizing the corresponding technological processes and materials are considered, particularly in the hydrogen and gas-petroleum industries and in some aircraft, aerospace and automobile systems. The results are relevant to the safety and standardization problems of metallic materials and to the problem of their compatibility with hydrogen. (authors)

  17. Mask Waves Benchmark

    Science.gov (United States)

    2007-10-01

    [Opening text garbled in source.] See Figure 22 for a comparison of measured waves, linear waves, and non-linear Stokes waves. Looking at the selected 16 runs from the trough-to-peak... Figure 23 for the benchmark data set, the relation of obtained frequency versus desired frequency is almost completely linear. The slight variation at

  18. [Research progress on standards of commodity classes of Chinese materia medica and discussion on several key problems].

    Science.gov (United States)

    Yang, Guang; Zeng, Yan; Guo, Lan-Ping; Huang, Lu-Qi; Jin, Yan; Zheng, Yu-Guang; Wang, Yong-Yan

    2014-05-01

    Standards for commodity classes of Chinese materia medica are an important way to solve the "lemons problem" of the traditional Chinese medicine market. Standards for commodity classes are also helpful for rebuilding a market mechanism of "high price for good quality". The previous edition of the commodity class standards of Chinese materia medica was issued 30 years ago and is no longer adapted to market demand. This article reviews progress on standards for commodity classes of Chinese materia medica. It argues that biological activity is a better basis than chemical constituents for commodity class standards, and that the key point in setting such standards is finding the factors that distinguish "good quality" from "bad quality". The article also discusses the scope of commodity classes of Chinese materia medica and how to coordinate pharmacopoeia standards with commodity class standards. According to different demands, diverse standards can be used for commodity classes of Chinese materia medica, but efficacy is considered the most important index of a commodity standard. Decoction pieces can be included in the commodity class standards of Chinese materia medica. The authors also formulate the commodity class standards of Notoginseng Radix as an example, and hope this study will have a positive and promoting effect on research related to the traditional Chinese medicine market.

  19. Benchmark tests of JENDL-1

    International Nuclear Information System (INIS)

    Kikuchi, Yasuyuki; Hasegawa, Akira; Takano, Hideki; Kamei, Takanobu; Hojuyama, Takeshi; Sasaki, Makoto; Seki, Yuji; Zukeran, Atsushi; Otake, Iwao.

    1982-02-01

    Various benchmark tests were made on JENDL-1. At the first stage, various core center characteristics were tested for many critical assemblies with a one-dimensional model. At the second stage, the applicability of JENDL-1 was further tested on more sophisticated problems for the MOZART and ZPPR-3 assemblies with a two-dimensional model. It was proved that JENDL-1 predicted various quantities of fast reactors satisfactorily as a whole. However, the following problems were pointed out: 1) There exists a discrepancy of 0.9% in the k_eff values between the Pu and U cores. 2) The fission rate ratio of 239Pu to 235U is underestimated by 3%. 3) The Doppler reactivity coefficients are overestimated by about 10%. 4) The control rod worths are underestimated by 4%. 5) The fission rates of 235U and 239Pu are underestimated considerably in the outer core and radial blanket regions. 6) The negative sodium void reactivities are overestimated when the sodium is removed from the outer core. As a whole, most of the problems of JENDL-1 seem to be related to the neutron leakage and the neutron spectrum. It was found through further study that most of these problems came from too-small diffusion coefficients and too-large elastic removal cross sections above 100 keV, which might probably be caused by overestimation of the total and elastic scattering cross sections for structural materials in the unresolved resonance region up to several MeV. (author)

  20. Benchmarking Cloud Resources for HEP

    Science.gov (United States)

    Alef, M.; Cordeiro, C.; De Salvo, A.; Di Girolamo, A.; Field, L.; Giordano, D.; Guerri, M.; Schiavi, F. C.; Wiebalck, A.

    2017-10-01

    In a commercial cloud environment, exhaustive resource profiling is beneficial for coping with the intrinsic variability of the virtualised environment, allowing performance degradation to be identified promptly. In the context of its commercial cloud initiatives, CERN has acquired extensive experience in benchmarking commercial cloud resources. Ultimately, this activity provides information on the actual delivered performance of invoiced resources. In this report we discuss the experience acquired and the results collected using several fast benchmark applications adopted by the HEP community. These range from open-source benchmarks to specific user applications and synthetic benchmarks. The workflow put in place to collect and analyse the performance metrics is also described.

  1. Comparison report of open calculations for ATLAS Domestic Standard Problem (DSP 02)

    International Nuclear Information System (INIS)

    Choi, Ki Yong; Kim, Y. S.; Kang, K. H.; Cho, S.; Park, H. S.; Choi, N. H.; Kim, B. D.; Min, K. H.; Park, J. K.; Chun, H. G.; Yu, Xin Guo; Kim, H. T.; Song, C. H.; Sim, S. K.; Jeon, S. S.; Kim, S. Y.; Kang, D. G.; Choi, T. S.; Kim, Y. M.; Lim, S. G.; Kim, H. S.; Kang, D. H.; Lee, G. H.; Jang, M. J.

    2012-09-01

    KAERI (Korea Atomic Energy Research Institute) has been operating an integral effect test facility, the Advanced Thermal Hydraulic Test Loop for Accident Simulation (ATLAS), for transient and accident simulations of advanced pressurized water reactors (PWRs). By using the ATLAS, a high-quality integral effect test database has been established for major design basis accidents of the APR1400. A Domestic Standard Problem (DSP) exercise using the ATLAS database was promoted in order to transfer the database to domestic nuclear industries and to contribute to improving safety analysis methodology for PWRs. This second ATLAS DSP exercise, following the successful completion of the first in 2009, was led by KAERI in collaboration with KINS. The exercise aims at effective utilization of the integral effect database obtained from the ATLAS, establishment of a cooperation framework among the domestic nuclear industry, better understanding of thermal-hydraulic phenomena, and investigation of the possible limitations of the existing best-estimate safety analysis codes. A small-break loss-of-coolant accident with a 6-inch break at the cold leg was determined as the target scenario by considering its technical importance and by incorporating interests from participants. Twelve domestic organizations joined this DSP 02 exercise, and eleven of them (universities, government organizations, and nuclear industries) submitted calculation results. This DSP exercise was performed in an open calculation environment where the integral effect test data were open to participants prior to the code calculations. This report includes all information on the 2nd ATLAS DSP (DSP 02) exercise as well as comparison results between the calculations and the experimental data.

  2. Experimental and computational benchmark tests

    International Nuclear Information System (INIS)

    Gilliam, D.M.; Briesmeister, J.F.

    1994-01-01

    A program involving principally NIST, LANL, and ORNL has been in progress for about four years now to establish a series of benchmark measurements and calculations related to the moderation and leakage of 252 Cf neutrons from a source surrounded by spherical aqueous moderators of various thicknesses and compositions. The motivation for these studies comes from problems in criticality calculations concerning arrays of multiplying components, where the leakage from one component acts as a source for the other components. This talk compares experimental and calculated values for the fission rates of four nuclides - 235 U, 239 Pu, 238 U, and 237 Np - in the leakage spectrum from moderator spheres of diameters 76.2 mm, 101.6 mm, and 127.0 mm, with either pure water or enriched B-10 solutions as the moderator. Very detailed Monte Carlo calculations were done with the MCNP code, using a 'light water' S(α,β) scattering kernel.

  3. Analytical three-dimensional neutron transport benchmarks for verification of nuclear engineering codes. Final report

    International Nuclear Information System (INIS)

    Ganapol, B.D.; Kornreich, D.E.

    1997-01-01

    Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation, as modified in the second-year renewal application, includes the following three primary tasks. Task 1, on two-dimensional neutron transport, is divided into (a) the single medium searchlight problem (SLP) and (b) the two-adjacent-half-space SLP. Task 2, on three-dimensional neutron transport, covers (a) a point source in arbitrary geometry, (b) the single medium SLP, and (c) the two-adjacent-half-space SLP. Task 3, on code verification, includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.
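
    For orientation, the one-group benchmarks with isotropic scattering solve the steady transport equation, written here in a common LaTeX notation in which c is the mean number of secondaries per collision and Q the source:

        \mathbf{\Omega}\cdot\nabla\psi(\mathbf{r},\mathbf{\Omega})
          + \sigma_t\,\psi(\mathbf{r},\mathbf{\Omega})
          = \frac{c\,\sigma_t}{4\pi}\int_{4\pi}\psi(\mathbf{r},\mathbf{\Omega}')\,d\Omega'
          + Q(\mathbf{r},\mathbf{\Omega})

    In the searchlight problem, Q is specialized to a pencil beam of particles striking the surface of the medium at a single point and in a single direction.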

  4. Analytical three-dimensional neutron transport benchmarks for verification of nuclear engineering codes. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Ganapol, B.D.; Kornreich, D.E. [Univ. of Arizona, Tucson, AZ (United States). Dept. of Nuclear Engineering

    1997-07-01

    Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation, as modified in the second-year renewal application, includes the following three primary tasks. Task 1, on two-dimensional neutron transport, is divided into (a) the single medium searchlight problem (SLP) and (b) the two-adjacent-half-space SLP. Task 2, on three-dimensional neutron transport, covers (a) a point source in arbitrary geometry, (b) the single medium SLP, and (c) the two-adjacent-half-space SLP. Task 3, on code verification, includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.

  5. THE APPLICATION OF INTERNATIONAL FINANCIAL REPORTING STANDARDS IN ROMANIA: ADVANTAGES AND MAIN PROBLEMS

    OpenAIRE

    Diana-Andreea, TRAISTARU

    2014-01-01

    This work is meant to analyze the implementation of International Financial Reporting Standards in Romania. The work tries to focus on the benefits and challenges of International Financial Reporting Standards, mainly on factors pertaining to its adoption. Connections, statistics and other types of analyses were used in order to show the importance that International Financial Reporting Standards adoption could represent for a large number of stakeholders. The most important features of Intern...

  6. Comparison and Interpretation Report of the OECD International Standard Problem No. 45 - Exercise (QUENCH-06)

    International Nuclear Information System (INIS)

    Hering, W.; Homann, Ch.; Lamy, J.S.; Miassoedov, A.; Schanz, G.; Sepold, L.; Steinbrueck, M.

    2002-10-01

    The International Standard Problem (ISP) No. 45 is part of the overall ISP program of the OECD/NEA and is dedicated to the behavior of heat-up and delayed reflood of fuel elements in nuclear reactors during a hypothetical accident. ISP-45 is related to the out-of-pile bundle quench experiment QUENCH-06, performed at Forschungszentrum Karlsruhe (FZK), Germany, on December 13, 2000. Special attention was paid to hydrogen production. To assess the ability of severe accident codes to simulate processes during core heat-up and reflood at temperatures above 2000 K, the behavior of the bundle during the whole experiment was to be calculated on the basis of the necessary experimental initial and boundary conditions, but without knowledge of further experimental details. In this so-called blind phase, 21 participants from 15 nations contributed, using 8 different code systems (ATHLET-CD, ICARE/CATHARE, IMPACT/SAMPSON, GENFLO, MAAP, MELCOR, SCDAPSIM, SCDAP-3D). Additionally, post-test calculations using the in-house version SCDAP/RELAP5 mod3.2.irs are used for comparison. After the end of the blind phase all measured data were made available and the participants were invited to deliver a second calculation in which this knowledge could be used (the so-called open phase). In this report, results of the blind calculations are presented, analyzed, and compared to experimental data. During heat-up most results do not deviate significantly from one another, except as a consequence of some obvious user errors, so that a definition of a mainstream is justified. For the quench phase the lack of adequate hydraulic modeling becomes obvious: some participants could not match the observed cool-down rates, others had to use very fine meshes to compensate for code deficiencies. To overcome this insufficiency some newly developed reflood models were used in MAAP and MELCOR. In QUENCH-06, oxide layers were thick enough to protect the cladding from melting and failure below 2200 K, so that no massive hydrogen

  7. International Standard Problems and Small Break Loss-Of-Coolant Accident (SBLOCA)

    International Nuclear Information System (INIS)

    Aksan, N.

    2008-01-01

    Best-estimate thermal-hydraulic system codes are widely used to perform safety and licensing analyses of nuclear power plants and are also used in the design of advanced reactors. Evaluation of the capabilities and the performance of these codes can be accomplished by comparing the code predictions with measured experimental data obtained on different test facilities. In this respect, parallel to other national and international programs, the OECD/NEA (OECD Nuclear Energy Agency) Committee on the Safety of Nuclear Installations (CSNI) has promoted, over the last twenty-nine years, some forty-eight International Standard Problems (ISPs). These ISPs were performed in different fields such as in-vessel thermal-hydraulic behaviour, fuel behaviour under accident conditions, fission product release and transport, core/concrete interactions, hydrogen distribution and mixing, and containment thermal-hydraulic behaviour. 80% of these ISPs were related to the working domain of Principal Working Group no. 2 on Coolant System Behaviour (PWG2). The ISPs have been one of the major PWG2 activities for many years. The individual ISP comparison reports include the analysis and conclusions of the specific ISP exercises. A global review and synthesis of the contribution that ISPs have made to addressing nuclear reactor safety issues was initiated by CSNI-PWG2, and an overview of the subject of small break LOCA ISPs is given in this paper based on a report prepared by a CSNI-PWG2 writing group. In addition, the relevance of small break LOCA in a PWR to nuclear reactor safety and the reorientation of the reactor safety program after the TMI-2 accident, specifically towards small break LOCA, are briefly summarized. Five small break LOCA related ISPs are considered, since these were used for the assessment of the advanced best-estimate codes. The considered ISPs deal with the phenomena typical of small break LOCAs in Western design PWRs. The experiments in four integral test facilities, LOBI, SPES, BETHSY

  8. Comparison and interpretation report of the OECD International Standard Problem No. 45 Exercise (QUENCH-06)

    International Nuclear Information System (INIS)

    Hering, W.; Homann, C.; Lamy, J.S.; Miassoedov, A.; Schanz, G.; Sepold, L.; Steinbrueck, M.

    2002-07-01

    The International Standard Problem (ISP) No. 45 is part of the overall ISP program of the OECD/NEA and is dedicated to the behavior of heat-up and delayed reflood of fuel elements in nuclear reactors. ISP-45 is related to the out-of-pile bundle quench experiment QUENCH-06, performed at Forschungszentrum Karlsruhe (FZK), Germany, on December 13, 2000. Special attention was paid to hydrogen production. To assess the ability of severe accident codes to simulate processes during core heat-up and reflood at temperatures above 2000 K, the behavior of the bundle during the whole experiment was to be calculated on the basis of experimental initial and boundary conditions, but without knowledge of further experimental details (blind phase). In the blind phase 21 participants from 15 nations contributed, using 8 different code systems (ATHLET-CD, ICARE/CATHARE, IMPACT/SAMPSON, GENFLO, MAAP, MELCOR, SCDAPSIM, SCDAP-3D). After the end of the blind phase all measured data were made available and the participants were invited to deliver a second calculation in which this knowledge could be used (open phase). In this report, results of the blind calculations are presented, analyzed, and compared to experimental data. Additionally, post-test calculations using the in-house version SCDAP/RELAP5 mod3.2.irs are used for comparison. During heat-up most results do not deviate significantly from one another, except as a consequence of some obvious user errors, so that a definition of a mainstream is justified. During quenching the lack of adequate hydraulic modeling becomes obvious: some participants could not match the observed cool-down rates, others had to use a very fine mesh to compensate for code deficiencies. To overcome this insufficiency some newly developed reflood models were used in MAAP and MELCOR. In QUENCH-06, the sufficiently thick oxide layers protected the cladding from melting and failure below 2200 K, so that no massive hydrogen release during reflood was found. This behavior

  9. International Standard Problems and Small Break Loss-of-Coolant Accident (SBLOCA

    Directory of Open Access Journals (Sweden)

    N. Aksan

    2008-01-01

    Full Text Available Best-estimate thermal-hydraulic system codes are widely used to perform safety and licensing analyses of nuclear power plants and are also used in the design of advanced reactors. Evaluation of the capabilities and the performance of these codes can be accomplished by comparing the code predictions with measured experimental data obtained on different test facilities. The OECD/NEA Committee on the Safety of Nuclear Installations (CSNI) has promoted, over the last twenty-nine years, some forty-eight international standard problems (ISPs). These ISPs were performed in different fields such as in-vessel thermal-hydraulic behaviour, fuel behaviour under accident conditions, fission product release and transport, core/concrete interactions, hydrogen distribution and mixing, and containment thermal-hydraulic behaviour. 80% of these ISPs were related to the working domain of principal working group no. 2 on coolant system behaviour (PWG2) and were one of the major PWG2 activities for many years. A global review and synthesis of the contribution that ISPs have made to addressing nuclear reactor safety issues was initiated by CSNI-PWG2, and an overview of the subject of small break LOCA ISPs is given in this paper based on a report prepared by a writing group. In addition, the relevance of small break LOCA in a PWR to nuclear reactor safety and the reorientation of the reactor safety program after the TMI-2 accident are briefly summarized. The experiments in four integral test facilities, LOBI, SPES, BETHSY, ROSA IV/LSTF, and the recorded data during a steam generator tube rupture transient in the DOEL-2 PWR (Belgium) were the basis of the five small break LOCA related ISP exercises, which deal with the phenomena typical of small break LOCAs in Western design PWRs. Some lessons learned from these small break LOCA ISPs are identified in relation to code deficiencies and capabilities, progress in the code capabilities, possibility of scaling, and various additional aspects.

  10. Benchmarking Cloud Storage Systems

    OpenAIRE

    Wang, Xing

    2014-01-01

    With the rise of cloud computing, many cloud storage systems like Dropbox, Google Drive and Mega have been built to provide decentralized and reliable file storage. It is thus of prime importance to know their features, performance, and the best way to make use of them. In this context, we introduce BenchCloud, a tool designed as part of this thesis to conveniently and efficiently benchmark any cloud storage system. First, we provide a study of six commonly-used cloud storage systems to ident...

  11. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    This report is based on the survey "Industrial Companies in Denmark - Today and Tomorrow", section IV: Supply Chain Management - Practices and Performance, question number 4.9 on performance assessment. To our knowledge, this survey is unique, as we have not been able to find results from any compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless…

  12. Looking beyond RtI Standard Treatment Approach: It's Not Too Late to Embrace the Problem-Solving Approach

    Science.gov (United States)

    King, Diane; Coughlin, Patricia Kathleen

    2016-01-01

    There are two approaches for providing Tier 2 interventions within Response to Intervention (RtI): standard treatment protocol (STP) and the problem-solving approach (PSA). This article describes the multi-tiered RtI prevention model being implemented across the United States through an analysis of these two approaches in reading instruction. It…

  13. Input data preparation and simulation of the second standard problem of IAEA using the Trac/PF1 code

    International Nuclear Information System (INIS)

    Madeira, A.A.; Pontedeiro, A.C.; Silva Galetti, M.R. da; Borges, R.C.

    1989-10-01

    The second Standard Problem sponsored by the IAEA consists of the simulation of a small LOCA located in the downcomer of the PMK-NVH integral test facility, which models a WWER/440-type reactor. This report presents the input data preparation and a comparison between TRAC-PF1 results and experimental measurements. (author) [pt

  14. Measurement standards and the general problem of reference points in chemical analysis

    International Nuclear Information System (INIS)

    Richter, W.; Dube, G.

    2002-01-01

    Besides the measurement standards available in general metrology in the form of realisations of the units of measurement, measurement standards of chemical composition are needed for the vast field of chemical measurement (measurements of chemical composition), because the main aim of such measurements is to quantify non-isolated substances, often in complicated matrices, to which the 'classical' measurement standards and their lower-level derivatives are not directly applicable. At present, material artefacts as well as standard measurement devices serve as chemical measurement standards. These are measurement standards in the full metrological sense only, however, if they are firmly linked to the SI unit in which the composition represented by the standard is expressed. This requirement has the consequence that only a very restricted number of really reliable chemical measurement standards exists at present. Since it is very difficult and time-consuming to increase this number substantially and, on the other hand, reliable reference points are increasingly needed for all kinds of chemical measurements, primary methods of measurement and high-level reference measurements will play an increasingly important role in the establishment of worldwide comparability and hence mutual acceptance of chemical measurement results. (author)

  15. The Electronic Patient Record and Second Generation Clinical Databases: Problems of Standards and Nomenclature.

    Science.gov (United States)

    Monteith, Brian D.

    1991-01-01

    Three principles of classification are stressed in the development of electronic dental patient records and clinical databases: (1) the classification must have a suitable organizing principle; (2) use must be made of standard terminology; and (3) there must be standard operational criteria. (DB)

  16. Planning Model of Physics Learning In Senior High School To Develop Problem Solving Creativity Based On National Standard Of Education

    Science.gov (United States)

    Putra, A.; Masril, M.; Yurnetti, Y.

    2018-04-01

    One of the causes of students' low competence in high school physics learning is an instructional process that has not been able to develop students' creativity in problem solving. This is reflected in teachers' lesson plans, which are not in accordance with the National Education Standards. This study aims to produce a reconstructed model of physics lesson planning that fulfils the competency standards, content standards, and assessment standards of the applicable curriculum. The development process follows: needs analysis, product design, product development, implementation, and product evaluation. The research process involved two peer reviewers, four expert judges, and two study groups of high school students in Padang. Qualitative and quantitative data were collected through documentation, observation, questionnaires, and tests. Up to the product development stage, the research has produced a physics lesson-planning model that meets content validity and construct validity in terms of the fulfillment of Basic Competence, Content Standards, Process Standards and Assessment Standards.

  17. [Problems of hygienic standardization of electromagnetic fields produced by teletransmitting objects].

    Science.gov (United States)

    Karachev, I I

    1989-10-01

    Maximum allowable electromagnetic field levels produced by teletransmitting stations and differentiated by frequency have been described. The prospects of further studies on the improvement of hygienic standardization of electromagnetic fields have been set forth.

  18. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Science.gov (United States)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally we show a test of hydrostatic equilibrium in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to reproduce both behaviour from established and widely-used codes and results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
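
    The Jacobian-free Newton-Krylov formulation mentioned above rests on a standard identity: a Krylov solver needs only Jacobian-vector products, which can be approximated by a finite difference of the nonlinear residual F, so the Jacobian J is never assembled:

        J(\mathbf{u})\,\mathbf{v} \approx
          \frac{\mathbf{F}(\mathbf{u} + \epsilon\,\mathbf{v}) - \mathbf{F}(\mathbf{u})}{\epsilon}

    Here ε is a small perturbation parameter; physics-based preconditioning then amounts to supplying an approximate inverse of J built from the dominant physics of the targeted problem.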

  19. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    was its software assurance practices, which seemed to rate well in comparison to the other organizational groups and also seemed to include a larger scope of activities. An unexpected benefit of the software benchmarking study was the identification of many opportunities for collaboration in areas including metrics, training, sharing of CMMI experiences and resources such as instructors and CMMI Lead Appraisers, and even sharing of assets such as documented processes. A further unexpected benefit of the study was the feedback on NASA practices that was received from some of the organizations interviewed. From that feedback, other potential areas where NASA could improve were highlighted, such as accuracy of software cost estimation and budgetary practices. The detailed report contains discussion of the practices noted in each of the topic areas, as well as a summary of observations and recommendations from each of the topic areas. The resulting 24 recommendations from the topic areas were then consolidated to eliminate duplication and culled into a set of 14 suggested actionable recommendations. This final set of actionable recommendations, listed below, are items that can be implemented to improve NASA's software engineering practices and to help address many of the items that were listed in the NASA top software engineering issues. 1. Develop and implement standard contract language for software procurements. 2. Advance accurate and trusted software cost estimates for both procured and in-house software and improve the capture of actual cost data to facilitate further improvements. 3. Establish a consistent set of objectives and expectations, specifically types of metrics at the Agency level, so key trends and models can be identified and used to continuously improve software processes and each software development effort. 4. Maintain the CMMI Maturity Level requirement for critical NASA projects and use CMMI to measure organizations developing software for NASA. 5

  20. Shielding benchmark test

    International Nuclear Information System (INIS)

    Kawai, Masayoshi

    1984-01-01

    Iron data in JENDL-2 have been tested by analyzing shielding benchmark experiments for neutron transmission through iron blocks performed at KFK using a 252 Cf neutron source and at ORNL using a collimated neutron beam from a reactor. The analyses were made with the shielding analysis code system RADHEAT-V4 developed at JAERI. The calculated results are compared with the measured data. For the KFK experiments, the C/E values are about 1.1. For the ORNL experiments, the calculated values agree with the measured data within an accuracy of 33% for the off-center geometry. The D-T neutron transmission measurements through a carbon sphere made at LLNL are also analyzed preliminarily by using the revised JENDL data for fusion neutronics calculations. (author)

  1. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  2. Standardization of UV-visible data in a food adulteration classification problem.

    Science.gov (United States)

    Di Anibal, Carolina V; Ruisánchez, Itziar; Fernández, Mailén; Forteza, Rafel; Cerdà, Victor; Pilar Callao, M

    2012-10-15

    This study evaluates the performance of multivariate calibration transfer methods in a classification context. The spectral variation caused by some experimental conditions can worsen the performance of the initial multivariate classification model but this situation can be solved by implementing standardization methods such as Piecewise Direct Standardization (PDS). This study looks at the adulteration of culinary spices with banned dyes such as Sudan I, II, III and IV. The samples are characterised by their UV-visible spectra and Partial Least Squares-Discriminant Analysis (PLS-DA) is used to discriminate between unadulterated samples and samples adulterated with any of the four Sudan dyes. Two different datasets that need to be standardised are presented. The standardization process yields positive classification results comparable to those obtained from the initial PLS-DA model, in which high classification performance was achieved. Copyright © 2012 Elsevier Ltd. All rights reserved.
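
    Piecewise Direct Standardization maps each wavelength of the reference ("master") spectra to a small window of wavelengths of the spectra to be standardized, assembling a banded transfer matrix from local regressions. The following minimal numpy sketch uses ordinary least squares per window and hypothetical variable names; published PDS implementations typically use PLS or ridge regression per window instead:

        import numpy as np

        def pds_transfer_matrix(X_master, X_slave, half_window=3):
            """Build F such that X_slave @ F approximates X_master.

            X_master, X_slave: (n_samples, n_wavelengths) spectra of the same
            transfer samples measured under the two experimental conditions.
            """
            n, p = X_master.shape
            F = np.zeros((p, p))
            for j in range(p):
                lo, hi = max(0, j - half_window), min(p, j + half_window + 1)
                window = X_slave[:, lo:hi]                    # local window
                b, *_ = np.linalg.lstsq(window, X_master[:, j], rcond=None)
                F[lo:hi, j] = b                               # banded structure
            return F

        # Usage: standardize a new spectrum before applying the PLS-DA model
        # F = pds_transfer_matrix(X_master, X_slave)
        # x_standardized = x_new @ F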

  3. Towards a Standard-based Domain-specific Platform to Solve Machine Learning-based Problems

    Directory of Open Access Journals (Sweden)

    Vicente García-Díaz

    2015-12-01

    Full Text Available Machine learning is one of the most important subfields of computer science and can be used to solve a variety of interesting artificial intelligence problems. There are different languages, frameworks and tools for defining the data needed to solve machine learning-based problems. However, there is a great number of very diverse alternatives, which hinders the intercommunication, portability and re-usability of the definitions, designs or algorithms that any developer may create. In this paper, we take the first step towards a language and a development environment independent of the underlying technologies, allowing developers to design solutions to machine learning-based problems in a simple and fast way, automatically generating code for other technologies. This can be considered a transparent bridge among current technologies. We rely on a Model-Driven Engineering approach, focusing on the creation of models to abstract the definition of artifacts from the underlying technologies.

  4. Multigrid method based on a space-time approach with standard coarsening for parabolic problems

    NARCIS (Netherlands)

    S.R. Franco (Sebastião Romero); F.J. Gaspar Lorenz (Franscisco); M.A. Villela Pinto (Marcio Augusto); C. Rodrigo (Carmen)

    2018-01-01

    In this work, a space-time multigrid method which uses standard coarsening in both temporal and spatial domains and combines the use of different smoothers is proposed for the solution of the heat equation in one and two space dimensions. In particular, an adaptive smoothing strategy,
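
    As a reference point (a sketch of the usual set-up, not necessarily the authors' exact formulation), the model problem is the heat equation u_t = Δu; backward Euler in time with a discrete Laplacian Δ_h in space gives, per time step,

        \frac{u_h^{n+1} - u_h^{n}}{\Delta t} = \Delta_h u_h^{n+1}
        \quad\Longleftrightarrow\quad
        (I - \Delta t\,\Delta_h)\,u_h^{n+1} = u_h^{n}

    A space-time multigrid method treats all time levels as one coupled system and, with standard coarsening, halves the resolution in x and t simultaneously.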

  5. Methods and problems of testing TiO2 photocatalytic products and standardization

    Czech Academy of Sciences Publication Activity Database

    Peterka, F.; Kavan, Ladislav; Balek, Vladimír; Šubrt, Jan; Štengl, Václav; Lukeš, Petr; Krýsa, J.; Jirkovský, Jaromír

    2002-01-01

    Roč. 9, č. 12 (2002), s. 194-195 ISSN 1345-5818. [Symposium on Recent Advances in Photocatalysis /9./. Tokyo, 02.12.2002] R&D Projects: GA MŠk ME 540 Institutional research plan: CEZ:AV0Z2043910 Keywords: titanium oxide, photocatalysis, standardization Subject RIV: CB - Analytical Chemistry, Separation

  6. India Needs International Standards in Accreditation Problems in Adoption and Implementation

    Science.gov (United States)

    Naik, B. M.

    2012-01-01

    The paper outlines in brief the need for and importance of introducing global quality standards in accreditation, as prescribed by the international agreement "Washington Accord". This agreement is initially provisional and, after scrutiny, if found fit, it is upgraded to Signatory status. It is this status which empowers students of engineering,…

  7. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  8. EPA's Benchmark Dose Modeling Software

    Science.gov (United States)

    The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors apply benchmark dose (BMD) methods to EPA's human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...
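
    For context, BMD methods define the benchmark dose implicitly through a chosen benchmark response (BMR); for a dichotomous endpoint the common extra-risk formulation is

        \frac{P(\mathrm{BMD}) - P(0)}{1 - P(0)} = \mathrm{BMR}

    where P(d) is the modeled probability of response at dose d; the lower confidence limit on the BMD (the BMDL) is then typically used as the point of departure.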

  9. A Field-Based Aquatic Life Benchmark for Conductivity in ...

    Science.gov (United States)

    This report adapts the standard U.S. EPA methodology for deriving ambient water quality criteria. Rather than use toxicity test results, the adaptation uses field data to determine the loss of 5% of genera from streams. The method is applied to derive effect benchmarks for dissolved salts as measured by conductivity in Central Appalachian streams using data from West Virginia and Kentucky. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.
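
    The field-based method rests on extirpation concentrations: for each genus an XC95 (the conductivity above which the genus is effectively absent from streams) is estimated from field data, and the benchmark is taken as the 5th percentile (HC05) of the XC95 distribution, i.e. the level at which 5% of genera are lost. A minimal sketch with hypothetical values:

        import numpy as np

        # Hypothetical XC95 values (uS/cm) for a set of stream genera
        xc95 = np.array([180, 210, 250, 300, 420, 560, 700, 950, 1200, 1500])

        # Benchmark = HC05, the 5th percentile of the XC95 distribution
        hc05 = np.percentile(xc95, 5)
        print(f"Conductivity benchmark (HC05): {hc05:.0f} uS/cm")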

  10. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    … order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly ‘sustainable transport’ … is generally not advised. Several other ways in which benchmarking and policy can support one another are identified in the analysis. This leads to a range of recommended initiatives to exploit the benefits of benchmarking in transport while avoiding some of the lurking pitfalls and dead ends.

  11. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties — i.e. visual composition — while smoothly moving through the environment and avoiding obstacles. A large number of different....... For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of scenarios...

  12. Problems and solutions in application of IEEE standards at Savannah River Site, Department of Energy (DOE) nuclear facilities

    International Nuclear Information System (INIS)

    Lee, Y.S.; Bowers, T.L.; Chopra, B.J.; Thompson, T.T.; Zimmerman, E.W.

    1993-01-01

    The Department of Energy (DOE) Nuclear Material Production Facilities at the Savannah River Site (SRS) were designed, constructed, and placed into operation in the early 1950s, based on the industry codes/standards, design criteria, and analytical procedures existing at that time. Since then, DOE has developed Orders and Policies for the planning, design and construction of DOE Nuclear Reactor Facilities which invoke or reference commercial nuclear reactor codes and standards. The application of IEEE reactor design requirements, such as Equipment Qualification, Seismic Qualification, Single Failure Criteria, and Separation Requirements, to non-reactor facilities has been a problem, since the IEEE reactor criteria do not directly conform to the needs of non-reactor facilities. SRS Systems Engineering is developing a methodology for the application of IEEE Standards to non-reactor facilities at SRS.

  13. Problems of professional ethics standards use in auditors’ practice in Ukraine

    Directory of Open Access Journals (Sweden)

    Bondar V.P.

    2017-03-01

    Full Text Available Significant violations of professional ethics principles by auditors undermine not only the reliability of disclosed audit opinions, but also create mistrust of the audit profession among global stakeholders. This creates barriers to ensuring the transparency of mechanisms for the disclosure and verification of Ukrainian business data and does not help form an appropriate investment climate. The article finds that auditors' ethical principles should be regulated and organized at all levels at which audit quality control is ensured. Based on the results of this study, the author highlights four such levels (international, national, local, personal) and describes the contribution that the documents of each level make to the organization of quality control with respect to ethical principles. The research substantiates a system of organizational support for creating an environment of compliance with ethical principles during audit assignments, based on identifying and eliminating threats to auditors' independence. In this regard, the author proposes the structure and content of organizational and administrative documents which form part of the internal audit quality control system.

  14. Assessing and benchmarking multiphoton microscopes for biologists.

    Science.gov (United States)

    Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F

    2014-01-01

    Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found that there are few standard ways already described in the literature to distinguish between microscopes or to benchmark existing microscopes to measure the overall quality and efficiency of these instruments. Here, we discuss some simple parameters and methods that can either be used within a multiphoton facility or by a prospective purchaser to benchmark performance. This can both assist in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs. © 2014 Elsevier Inc. All rights reserved.

  15. Assessing and benchmarking multiphoton microscopes for biologists

    Science.gov (United States)

    Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F.

    2017-01-01

    Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found that there are few standard ways already described in the literature to distinguish between microscopes or to benchmark existing microscopes to measure the overall quality and efficiency of these instruments. Here, we discuss some simple parameters and methods that can either be used within a multiphoton facility or by a prospective purchaser to benchmark performance. This can both assist in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs. PMID:24974026

  16. Toward Automated Benchmarking of Atomistic Force Fields: Neat Liquid Densities and Static Dielectric Constants from the ThermoML Data Archive.

    Science.gov (United States)

    Beauchamp, Kyle A; Behr, Julie M; Rustenburg, Ariën S; Bayly, Christopher I; Kroenlein, Kenneth; Chodera, John D

    2015-10-08

    Atomistic molecular simulations are a powerful way to make quantitative predictions, but the accuracy of these predictions depends entirely on the quality of the force field employed. Although experimental measurements of fundamental physical properties offer a straightforward approach for evaluating force field quality, the bulk of this information has been tied up in formats that are not machine-readable. Compiling benchmark data sets of physical properties from non-machine-readable sources requires substantial human effort and is prone to the accumulation of human errors, hindering the development of reproducible benchmarks of force-field accuracy. Here, we examine the feasibility of benchmarking atomistic force fields against the NIST ThermoML data archive of physicochemical measurements, which aggregates thousands of experimental measurements in a portable, machine-readable, self-annotating IUPAC-standard format. As a proof of concept, we present a detailed benchmark of the generalized Amber small-molecule force field (GAFF) using the AM1-BCC charge model against experimental measurements (specifically, bulk liquid densities and static dielectric constants at ambient pressure) automatically extracted from the archive and discuss the extent of data available for use in larger scale (or continuously performed) benchmarks. The results of even this limited initial benchmark highlight a general problem with fixed-charge force fields in the representation of low-dielectric environments, such as those seen in binding cavities or biological membranes.
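
    As context for the dielectric part of the benchmark, the static dielectric constant of a simulated liquid is commonly obtained from fluctuations of the total box dipole moment M; the sketch below assumes conducting ("tin-foil") boundary conditions, and the variable names are hypothetical:

        import numpy as np

        EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
        KB   = 1.380649e-23       # Boltzmann constant, J/K

        def static_dielectric(M, volume, temperature):
            """Fluctuation estimate: eps = 1 + (<M^2> - <M>^2)/(3 eps0 V kB T).

            M: (n_frames, 3) total box dipole per frame, in C*m;
            volume in m^3; temperature in K.
            """
            mean_M = M.mean(axis=0)
            fluct = (M * M).sum(axis=1).mean() - np.dot(mean_M, mean_M)
            return 1.0 + fluct / (3.0 * EPS0 * volume * KB * temperature)

        # Usage with a hypothetical trajectory:
        # eps = static_dielectric(M_traj, V_box, 298.15)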

  17. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption.

  18. Virtual machine performance benchmarking.

    Science.gov (United States)

    Langer, Steve G; French, Todd

    2011-10-01

    The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever tightened reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High performance requirements need to be carefully considered if they are to be executed in an environment where the running software has to execute through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising.
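
    Micro-benchmarks of the kind described can be sketched in a few lines and run identically on bare metal and inside the virtual machine; the snippet below (a simple illustration, not the authors' actual suite) probes memory bandwidth and floating-point throughput:

        import time
        import numpy as np

        def best_time(fn, repeats=5):
            """Best wall-clock time of fn() over several runs (reduces jitter)."""
            times = []
            for _ in range(repeats):
                t0 = time.perf_counter()
                fn()
                times.append(time.perf_counter() - t0)
            return min(times)

        n = 10_000_000
        a, b = np.ones(n), np.ones(n)

        copy_t = best_time(lambda: a.copy())      # memory bandwidth proxy
        dot_t  = best_time(lambda: np.dot(a, b))  # floating-point proxy

        print(f"copy: {2 * 8 * n / copy_t / 1e9:.2f} GB/s (read+write)")
        print(f"dot : {2 * n / dot_t / 1e9:.2f} GFLOP/s")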

  19. Benchmarking of energy time series

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, M.A.

    1990-04-01

    Benchmarking consists of the adjustment of time series data from one source in order to achieve agreement with similar data from a second source. The data from the latter source are referred to as the benchmark(s), and often differ in that they are observed at a lower frequency, represent a higher level of temporal aggregation, and/or are considered to be of greater accuracy. This report provides an extensive survey of benchmarking procedures which have appeared in the statistical literature, and reviews specific benchmarking procedures currently used by the Energy Information Administration (EIA). The literature survey includes a technical summary of the major benchmarking methods and their statistical properties. Factors influencing the choice and application of particular techniques are described and the impact of benchmark accuracy is discussed. EIA applications and procedures are reviewed and evaluated for residential natural gas deliveries series and coal production series. It is found that the current method of adjusting the natural gas series is consistent with the behavior of the series and the methods used in obtaining the initial data. As a result, no change is recommended. For the coal production series, a staged approach based on a first differencing technique is recommended over the current procedure. A comparison of the adjustments produced by the two methods is made for the 1987 Indiana coal production series. 32 refs., 5 figs., 1 tab.
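
    The first-difference technique mentioned can be illustrated with the classic additive Denton method: adjust a monthly series so that each year sums to its benchmark while keeping the first differences of the adjustment as smooth as possible. The sketch below (an illustration of the general idea, not EIA's exact procedure) solves the constrained least-squares problem through its KKT system:

        import numpy as np

        def denton_additive(z, benchmarks, per_period=12):
            """min ||D(x - z)||^2  s.t.  A x = benchmarks (movement preservation)."""
            z = np.asarray(z, float)
            benchmarks = np.asarray(benchmarks, float)
            n, m = len(z), len(benchmarks)
            assert n == m * per_period
            # First-difference matrix D ((n-1) x n): (Dx)_i = x_{i+1} - x_i
            D = np.eye(n, k=1)[:n - 1] - np.eye(n)[:n - 1]
            # Aggregation matrix A (m x n): annual sums of monthly values
            A = np.zeros((m, n))
            for i in range(m):
                A[i, i * per_period:(i + 1) * per_period] = 1.0
            # KKT (Lagrangian) linear system of the constrained least squares
            DtD = D.T @ D
            K = np.block([[2.0 * DtD, A.T],
                          [A, np.zeros((m, m))]])
            rhs = np.concatenate([2.0 * DtD @ z, benchmarks])
            return np.linalg.solve(K, rhs)[:n]

        # Hypothetical usage: 24 monthly values benchmarked to 2 annual totals
        # x = denton_additive(z_monthly, annual_totals)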

  20. Analysis of a multigroup stylized CANDU half-core benchmark

    International Nuclear Information System (INIS)

    Pounders, Justin M.; Rahnema, Farzad; Serghiuta, Dumitru

    2011-01-01

    Highlights: → This paper provides a benchmark that is a stylized model problem in more than two energy groups that is realistic with respect to the underlying physics. → An 8-group cross section library is provided to augment a previously published 2-group 3D stylized half-core CANDU benchmark problem. → Reference eigenvalues and selected pin and bundle fission rates are included. → 2-, 4- and 47-group Monte Carlo solutions are compared to analyze homogenization-free transport approximations that result from energy condensation. - Abstract: An 8-group cross section library is provided to augment a previously published 2-group 3D stylized half-core Canadian deuterium uranium (CANDU) reactor benchmark problem. Reference eigenvalues and selected pin and bundle fission rates are also included. This benchmark is intended to provide computational reactor physicists and methods developers with a stylized model problem in more than two energy groups that is realistic with respect to the underlying physics. In addition to transport theory code verification, the 8-group energy structure provides reactor physicists with an ideal problem for examining cross section homogenization and collapsing effects in a full-core environment. To this end, additional 2-, 4- and 47-group full-core Monte Carlo benchmark solutions are compared to analyze homogenization-free transport approximations incurred as a result of energy group condensation.
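
    The condensation examined here is the standard flux-weighted collapse of fine groups g into a broad group G, which preserves reaction rates for the weighting spectrum used:

        \sigma_{G} = \frac{\sum_{g \in G} \sigma_{g}\,\phi_{g}}{\sum_{g \in G} \phi_{g}}

    Errors appear when the collapsed set is reused in configurations whose spectrum differs from the weighting spectrum, which is precisely what the 2-, 4- and 47-group comparisons probe.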

  1. Lithium in Very Metal-poor Dwarf Stars -- Problems for Standard Big Bang Nucleosynthesis?

    International Nuclear Information System (INIS)

    Lambert, David L.

    2004-01-01

    The standard model of primordial nucleosynthesis by the Big Bang, as selected by the WMAP-based estimate of the baryon density (Ω_b h^2), predicts an abundance of 7Li that is a factor of three greater than the generally reported abundance for stars on the Spite plateau, and an abundance of 6Li that is about a thousand times less than is found for some stars on the plateau. This review discusses and examines these two discrepancies. They can likely be resolved without major surgery on the standard model of the Big Bang. In particular, stars on the Spite plateau may have depleted their surface lithium abundance over their long lifetimes from the WMAP-based predicted abundances down to the presently observed abundances, and synthesis of 6Li (and 7Li) via α + α fusion reactions may have occurred in the early Galaxy. Yet, there remain fascinating ways in which to remove the two discrepancies involving aspects of a new cosmology, particularly through the introduction of exotic particles

  2. Problems in Standardization of Orthodontic Shear Bond Strength Tests; A Brief Review

    Directory of Open Access Journals (Sweden)

    M.S. A. Akhoundi

    2005-03-01

    Full Text Available Bonding brackets to the enamel surface has gained much popularity today. New adhesive systems have been introduced and marketed, and research regarding bond strength has increased considerably. A considerable share of these studies deals with the shear bond strength of adhesives designed for orthodontic purposes. Previous studies have used a variety of test designs. This diversity in test design is due to the fact that there is no standard method for evaluating shear bond strength in orthodontics; therefore comparison of data obtained from different studies is almost impossible. This article briefly discusses the developments that have occurred in the measurement of the shear bond strength of orthodontic adhesives, with an emphasis on the type of test set-up and load application. Although the test designs for measuring shear bond strength in orthodontics are still far from ideal, attempts must be made to standardize these tests, especially in order to make comparison of different data easier. It is recommended that test designs be set up in such a manner that they better match the purpose of the study.

  3. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with 'typical' and 'best-practice' benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the

  4. Benchmarking & European sustainable transport policies

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik

    2003-01-01

    way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent...... to support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One......

  5. A Heterogeneous Medium Analytical Benchmark

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1999-01-01

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane-parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels to coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate, benchmark-quality (4 to 5 places of accuracy) results.
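
    The Green's Function Method itself is beyond a short sketch, but the class of problem being benchmarked is compact enough to state in code. The following is not the GFM of the abstract; it is a standard diamond-difference discrete-ordinates (S_N) solver with source iteration for a one-group, isotropically scattering slab with unit illumination on one face, i.e. the kind of production algorithm such 4- to 5-place benchmarks are meant to verify. All parameter values are illustrative:

        import numpy as np

        def slab_sn(width=4.0, cells=200, sigma_t=1.0, c=0.8, n_angles=16, tol=1e-8):
            # One-group S_N transport in a homogeneous slab: isotropic scattering,
            # unit incident angular flux on the left face, vacuum on the right.
            # Diamond-difference spatial scheme with plain source iteration.
            mu, w = np.polynomial.legendre.leggauss(n_angles)  # angles/weights on [-1, 1]
            dx = width / cells
            sigma_s = c * sigma_t
            phi = np.zeros(cells)
            for _ in range(20000):
                q = 0.5 * sigma_s * phi                  # isotropic scattering source
                phi_new = np.zeros(cells)
                for m in range(n_angles):
                    sweep = range(cells) if mu[m] > 0 else range(cells - 1, -1, -1)
                    psi_in = 1.0 if mu[m] > 0 else 0.0   # boundary conditions
                    for i in sweep:
                        psi_c = (q[i] * dx + 2 * abs(mu[m]) * psi_in) \
                                / (2 * abs(mu[m]) + sigma_t * dx)
                        psi_in = 2 * psi_c - psi_in      # diamond-difference outflow
                        phi_new[i] += w[m] * psi_c
                if np.max(np.abs(phi_new - phi)) < tol * np.max(phi_new):
                    return phi_new
                phi = phi_new
            return phi

        flux = slab_sn()
        print(flux[0], flux[-1])  # scalar flux at the illuminated and far faces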

  6. An Eigenstructure Assignment Approach to FDI for the Industrial Actuator Benchmark Test

    DEFF Research Database (Denmark)

    Jørgensen, R.B.; Patton, R.J.; Chen, J.

    1995-01-01

    This paper examines the robustness to modelling uncertainties of an observer-based fault detection and isolation scheme applied to the industrial actuator benchmark problem.

  7. Water Level Superseded Benchmark Sheets

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) tidal benchmarks which have been superseded by new markers or locations....

  8. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together...... to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal...... and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work...

  9. Vacuum oscillation solution to the solar neutrino problem in standard and nonstandard pictures

    International Nuclear Information System (INIS)

    Berezhiani, Z.G.; Rossi, A.

    1995-01-01

    The neutrino long wavelength (just-so) oscillation is reexamined as a solution to the solar neutrino problem. We consider the just-so scenario in various cases: in the framework of the solar models with a relaxed prediction of the boron neutrino flux, as well as in the presence of the nonstandard weak range interactions between neutrino and matter constituents. We show that the fit of the experimental data in the just-so scenario is not very good for any reasonable value of the 8B neutrino flux, but it substantially improves if the nonstandard τ-neutrino-electron interaction is included. These new interactions could also remove the conflict of the just-so picture with the shape of the SN 1987A neutrino spectrum. Special attention is devoted to the potential of the future real-time solar neutrino detectors such as Super-Kamiokande, SNO, and BOREXINO, which could provide the model-independent tests for the just-so scenario. In particular, these imply a specific deformation of the original solar neutrino energy spectra and time variation of the intermediate energy monochromatic neutrino (7Be and pep) signals

  10. Analysis of Benchmark 2 results

    International Nuclear Information System (INIS)

    Bacha, F.; Lefievre, B.; Maillard, J.; Silva, J.

    1994-01-01

    The code GEANT315 has been compared to different codes in two benchmarks. We analyze its performance through our results, especially in the thick target case. In spite of gaps in nucleus-nucleus interaction theories at intermediate energies, benchmarks allow possible improvements of the physical models used in our codes. Thereafter, a scheme for a radioactive waste burning system is studied. (authors). 4 refs., 7 figs., 1 tab

  11. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance; the performance impact of optimization was studied in the context of our methodology for CPU performance characterization, based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments are summarized more specifically in this report, as well as those smaller in magnitude supported by this grant.

  12. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    Science.gov (United States)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that such aeroelastic data sets often focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency, and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the

  13. Problems encountered in embodying the principles of ICRP-26 and the revised IAEA safety standards into UK national legislation

    International Nuclear Information System (INIS)

    Beaver, P.F.

    1979-01-01

    This paper describes the United Kingdom procedures and format for safety legislation and goes on to show how the necessary legislation for radiological protection will fit into the general framework. The United Kingdom, as a member of the European Community and EURATOM, is bound to implement the Euratom Directive on radiological protection within the next few years. The latest draft of the Directive takes account of the recommendations of ICRP-26 and further, a recent draft of the revised IAEA Basic Safety Standards is a composite of both the Directive and ICRP-26. Thus, the effect of embodying the principles of the Directive is to embody the principles of ICRP-26 and the Basic Safety Standards. Some of the problems which have been met are described and in particular there is discussion of the problems arising from the incorporation of the three ICRP-26 facets of dose control, namely justification, optimization and limitation, into a legislative package. The UK system of evolving safety legislation now requires considerable participation by all the parties affected (or by their representatives). This paper indicates that the involvement of persons affected, coupled with a legislative package which consists of a hierarchy of (a) regulations; (b) codes of practice; and (c) guidance notes, will result in the fundamental principles of ICRP-26 being incorporated into UK legislation in a totally acceptable way. (author)

  14. Standard Model–axion–seesaw–Higgs portal inflation. Five problems of particle physics and cosmology solved in one stroke

    Energy Technology Data Exchange (ETDEWEB)

    Ballesteros, Guillermo [Institut de Physique Théorique, Université Paris Saclay, CEA, CNRS, 91191 Gif-sur-Yvette (France); Redondo, Javier [Departamento de Física Teórica, Universidad de Zaragoza, Pedro Cerbuna 12, E-50009, Zaragoza (Spain); Ringwald, Andreas [DESY, Notkestr. 85, 22607 Hamburg (Germany); Tamarit, Carlos, E-mail: guillermo.ballesteros@cea.fr, E-mail: jredondo@unizar.es, E-mail: andreas.ringwald@desy.de, E-mail: carlos.tamarit@durham.ac.uk [Institute for Particle Physics Phenomenology, Durham University, South Road, DH1 3LE (United Kingdom)

    2017-08-01

    We present a minimal extension of the Standard Model (SM) providing a consistent picture of particle physics from the electroweak scale to the Planck scale and of cosmology from inflation until today. Three right-handed neutrinos N_i, a new color triplet Q and a complex SM-singlet scalar σ, whose vacuum expectation value v_σ ∼ 10^11 GeV breaks lepton number and a Peccei-Quinn symmetry simultaneously, are added to the SM. At low energies, the model reduces to the SM, augmented by seesaw-generated neutrino masses and mixing, plus the axion. The latter solves the strong CP problem and accounts for the cold dark matter in the Universe. The inflaton is a mixture of σ and the SM Higgs, and reheating of the Universe after inflation proceeds via the Higgs portal. Baryogenesis occurs via thermal leptogenesis. Thus, five fundamental problems of particle physics and cosmology are solved at one stroke in this unified Standard Model-axion-seesaw-Higgs portal inflation (SMASH) model. It can be probed decisively by upcoming cosmic microwave background and axion dark matter experiments.

  15. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-11-01

    This paper reviews the role of human resource management (HRM), which today plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  16. Research on IoT-based water environment benchmark data acquisition management

    Science.gov (United States)

    Yan, Bai; Xue, Bai; Ling, Lin; Jin, Huang; Ren, Liu

    2017-11-01

    Over more than 30 years of reform and opening up, China's economy has developed at full speed. However, this rapid growth is constrained by resource exhaustion and environmental pollution, and green, sustainable development has become a common goal for all humanity. As part of environmental resources, water resources face problems such as pollution and shortage, which hinder sustainable development. The top priority in water resources protection and research is to manage the basic data on water resources, which are the cornerstone and scientific foundation of water environment management. By studying the aquatic organisms in the Yangtze River Basin, the Yellow River Basin, the Liaohe River Basin and the 5 lake areas, this paper puts forward an IoT-based water environment benchmark data management platform which converts measured parameters to electrical signals via chemical probe identification and then sends the benchmark test data of the water environment to node servers. The management platform will provide data and theoretical support for environmental chemistry, toxicology, ecology, etc., promote research in the environmental sciences, lay a solid foundation for comprehensive and systematic research on China's regional environmental characteristics, biotoxicity effects and environmental criteria, and provide objective data for compiling standards for the water environment benchmark data.
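
    The acquisition path described above (probe signal in, benchmark record out to a node server) can be sketched minimally as follows; the endpoint URL, field names and readings are all hypothetical:

        import json
        import time
        import urllib.request

        NODE_SERVER = "http://node-server.example/api/benchmark"  # hypothetical endpoint

        def read_probe():
            # Stand-in for a chemical-probe driver: in the real platform the
            # probe's electrical signal would be digitized and calibrated here.
            return {"basin": "Yangtze", "site": "S01", "ph": 7.42,
                    "dissolved_oxygen_mg_l": 8.1, "timestamp": time.time()}

        def push_reading(reading, url=NODE_SERVER):
            # POST one benchmark record to the node server as JSON.
            req = urllib.request.Request(
                url, data=json.dumps(reading).encode("utf-8"),
                headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req) as resp:
                return resp.status

        # push_reading(read_probe())  # one sampling cycle (needs a live server)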

  17. SPICE benchmark for global tomographic methods

    Science.gov (United States)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul; Lebedev, Sergei; Beucler, Eric

    2008-11-01

    The existing global tomographic methods result in different models due to different parametrization, scale resolution and theoretical approach. To test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, it is necessary to perform a global-scale benchmark to understand the resolving properties of each specific imaging algorithm. In the framework of the Seismic wave Propagation and Imaging in Complex media: a European network (SPICE) project, it was decided to perform a benchmark experiment of global inversion algorithms. First, a preliminary benchmark with a simple isotropic model is carried out to check the feasibility in terms of acquisition geometry and numerical accuracy. Then, to fully validate tomographic schemes with a challenging synthetic data set, we constructed one complex anisotropic global model, which is characterized by 21 elastic constants and includes 3-D heterogeneities in velocity, anisotropy (radial and azimuthal anisotropy), attenuation, density, as well as surface topography and bathymetry. The intermediate-period (>32 s), high-fidelity anisotropic modelling was performed using a state-of-the-art anisotropic anelastic modelling code, the coupled spectral element method (CSEM), on modern massively parallel computing resources. The benchmark data set consists of 29 events, and three-component seismograms are recorded by 256 stations. Because of the limitation of the available computing power, synthetic seismograms have a minimum period of 32 s and a length of 10 500 s. The inversion of the benchmark data set demonstrates several well-known problems of classical surface wave tomography, such as the importance of crustal correction to recover the shallow structures, the loss of resolution with depth, the smearing effect, both horizontal and vertical, the inaccuracy of amplitude of isotropic S-wave velocity variation, the difficulty of retrieving the magnitude of azimuthal

  18. Building America Research Benchmark Definition, Updated December 29, 2004

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, R.

    2005-02-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, the U.S. Department of Energy (DOE) Residential Buildings Program and the National Renewable Energy Laboratory (NREL) developed the Building America Research Benchmark in consultation with the Building America industry teams. The Benchmark is generally consistent with mid-1990s standard practice, as reflected in the Home Energy Rating System (HERS) Technical Guidelines (RESNET 2002), with additional definitions that allow the analyst to evaluate all residential end-uses, an extension of the traditional HERS rating approach that focuses on space conditioning and hot water. A series of user profiles, intended to represent the behavior of a 'standard' set of occupants, was created for use in conjunction with the Benchmark.

  19. FLOWTRAN-TF code benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G.P. (ed.)

    1990-12-01

    FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss Of Coolant Accident (LOCA). A description of the code is given by Flach et al. (1990). This report provides benchmarking results for the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit (Smith et al., 1990a; 1990b). Individual constitutive relations are benchmarked in Sections 2 through 5 while in Sections 6 and 7 integral code benchmarking results are presented. An overall assessment of FLOWTRAN-TF for its intended use in computing the ECS power limit completes the document.

  20. 'Wasteaware' benchmark indicators for integrated sustainable waste management in cities.

    Science.gov (United States)

    Wilson, David C; Rodic, Ljiljana; Cowing, Michael J; Velis, Costas A; Whiteman, Andrew D; Scheinberg, Anne; Vilches, Recaredo; Masterson, Darragh; Stretz, Joachim; Oelz, Barbara

    2015-01-01

    This paper addresses a major problem in international solid waste management, which is twofold: a lack of data, and a lack of consistent data to allow comparison between cities. The paper presents an indicator set for integrated sustainable waste management (ISWM) in cities both North and South, to allow benchmarking of a city's performance, comparing cities and monitoring developments over time. It builds on pioneering work for UN-Habitat's solid waste management in the World's cities. The comprehensive analytical framework of a city's solid waste management system is divided into two overlapping 'triangles' - one comprising the three physical components, i.e. collection, recycling, and disposal, and the other comprising three governance aspects, i.e. inclusivity; financial sustainability; and sound institutions and proactive policies. The indicator set includes essential quantitative indicators as well as qualitative composite indicators. This updated and revised 'Wasteaware' set of ISWM benchmark indicators is the cumulative result of testing various prototypes in more than 50 cities around the world. This experience confirms the utility of indicators in allowing comprehensive performance measurement and comparison of both 'hard' physical components and 'soft' governance aspects; and in prioritising 'next steps' in developing a city's solid waste management system, by identifying both local strengths that can be built on and weak points to be addressed. The Wasteaware ISWM indicators are applicable to a broad range of cities with very different levels of income and solid waste management practices. Their wide application as a standard methodology will help to fill the historical data gap. Copyright © 2014 Elsevier Ltd. All rights reserved.
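
    The two-triangle structure translates naturally into a simple data model. A minimal sketch follows, with hypothetical field names and a 0-100 scoring scale standing in for the actual Wasteaware indicator definitions:

        from dataclasses import dataclass

        @dataclass
        class CityWasteProfile:
            # Physical components (quantitative where data exist)
            collection_coverage_pct: float       # households with waste collection
            controlled_disposal_pct: float       # waste reaching controlled facilities
            recycling_rate_pct: float
            # Governance aspects (qualitative composites, here scored 0-100)
            inclusivity: float
            financial_sustainability: float
            institutions_and_policies: float

            def summary(self):
                physical = (self.collection_coverage_pct + self.controlled_disposal_pct
                            + self.recycling_rate_pct) / 3
                governance = (self.inclusivity + self.financial_sustainability
                              + self.institutions_and_policies) / 3
                return {"physical": physical, "governance": governance}

        city = CityWasteProfile(95, 88, 34, 60, 55, 70)
        print(city.summary())  # the weaker dimension suggests the 'next steps'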

  1. CIEMAT Contribution to the International Standard Problem ISP-34: CONTAIN Analysis of FAL-ISP 1 Test

    International Nuclear Information System (INIS)

    Herranz, L. E.; Polo, J.

    1994-01-01

    CIEMAT, along with a great number of international laboratories, has participated in the open exercise of the first International Standard Problem addressing fission product transport issues. The FAL-ISP 1 test, aimed at studying particle agglomeration, has been simulated with the CONTAIN code. The thermal-hydraulic results obtained have been satisfactory and the aerosol ones have been reasonably accurate. However, some discrepancies appeared between predictions and experimental data; these are essentially related to the injection phase of the experiment, where the major influence of input approximations took place. In addition, the rationalization of discrepancies pointed to potential data inconsistencies. Some parametric studies showed the sensitivity of the results to input assumptions concerning aerosol characterization and default values in CONTAIN; in general, they confirmed the suitability of most of the approximations taken. (Author) 11 refs

  2. CIEMAT contribution to the international standard problem ISP-34: CONTAIN analysis of FAL-ISP 1 test

    International Nuclear Information System (INIS)

    Herranz, L.E.; Polo, J.

    1994-01-01

    CIEMAT, along with a great number of international laboratories, has participated in the open exercise of the first International Standard Problem addressing fission product transport issues. The FAL-ISP 1 test, aimed at studying particle agglomeration, has been simulated with the CONTAIN code. The thermal-hydraulic results obtained have been satisfactory and the aerosol ones have been reasonably accurate. However, some discrepancies appeared between predictions and experimental data; these are essentially related to the injection phase of the experiment, where the major influence of input approximations took place. In addition, the rationalization of discrepancies pointed to potential data inconsistencies. Some parametric studies showed the sensitivity of the results to input assumptions concerning aerosol characterization and default values in CONTAIN; in general, they confirmed the suitability of most of the approximations taken. (Author)

  3. Benchmark of a Cubieboard cluster

    Science.gov (United States)

    Schnepf, M. J.; Gudu, D.; Rische, B.; Fischer, M.; Jung, C.; Hardt, M.

    2015-12-01

    We built a cluster of ARM-based Cubieboard2 boards, each of which has a SATA interface to connect a hard drive. This cluster was set up as a storage system using Ceph and as a compute cluster for high energy physics analyses. To study the performance in these applications, we ran two benchmarks on this cluster. We also checked the energy efficiency of the cluster using the same benchmarks. The performance and energy efficiency of our cluster were compared with a network-attached storage (NAS) system and with a desktop PC.

  4. EVA Health and Human Performance Benchmarking Study

    Science.gov (United States)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses for humans working inside different EVA suits doing functional tasks under the appropriate simulated reduced gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as shirtsleeves using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits, and different suit configurations (e.g., varied pressure, mass, CG) may be reliably compared in subsequent tests. Results will also inform fitness for duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  5. SSI [soil-structure interactions] and structural benchmarks

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Miller, C.A.; Costantino, C.J.; Graves, H.

    1986-01-01

    This paper presents the latest results of the ongoing program entitled 'Standard Problems for Structural Computer Codes', currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focussed on three tasks, namely, (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it has been concluded that the pore water can influence significantly the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was further extended to include cases associated with variable water table depths. In this paper, results related to 'cut-off' depths beyond which the pore water effects can be ignored in seismic calculations, are addressed. Comprehensive numerical data are given for soil configurations typical to those encountered in nuclear plant sites. These data were generated by using a modified version of the SLAM code which is capable of handling problems related to the dynamic response of saturated soils

  6. ESTABLISHING A METHODOLOGY FOR BENCHMARKING SPEECH SYNTHESIS FOR COMPUTER-ASSISTED LANGUAGE LEARNING (CALL)

    Directory of Open Access Journals (Sweden)

    Zöe Handley

    2005-09-01

    Despite the new possibilities that speech synthesis brings about, few Computer-Assisted Language Learning (CALL) applications integrating speech synthesis have found their way onto the market. One potential reason is that the suitability and benefits of the use of speech synthesis in CALL have not been proven. One way to establish them is through evaluation. Yet, very few formal evaluations of speech synthesis for CALL purposes have been conducted. One possible reason for the neglect of evaluation in this context is the fact that it is expensive in terms of time and resources, an important concern given that there are several levels of evaluation from which such applications would benefit. Benchmarking, the comparison of the score obtained by a system with that obtained by one which is known to guarantee user satisfaction in a standard task or set of tasks, is introduced as a potential solution to this problem. In this article, we report on our progress towards the development of one of these benchmarks, namely a benchmark for determining the adequacy of speech synthesis systems for use in CALL. We do so by presenting the results of a case study which aimed to identify the criteria that determine the adequacy of the output of speech synthesis systems in its various roles in CALL, with a view to the selection of benchmark tests which will address these criteria. These roles (reading machine, pronunciation model, and conversational partner) are also discussed here. An agenda for further research and evaluation is proposed in the conclusion.

  7. Measuring and Benchmarking Food Environments and Policies in ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Non-communicable diseases (NCDs) are responsible for three out of every four deaths in Latin America. Poor diet is increasingly contributing to preventable, premature deaths and illnesses related to NCDs. This project will monitor and benchmark food policies and environments in Mexico and Chile to address the problem.

  8. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of the services provided by the benchmarked library websites. The exploratory study includes a comparison of these websites against a list of criteria and presents a list of services that are most commonly deployed by the selected websites. In addition, the investigators propose a list of services that could be provided via the KAUST library website.

  9. A Uranium Bioremediation Reactive Transport Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Yabusaki, Steven B.; Sengor, Sevinc; Fang, Yilin

    2015-06-01

    A reactive transport benchmark problem set has been developed based on in situ uranium bio-immobilization experiments that have been performed at a former uranium mill tailings site in Rifle, Colorado, USA. Acetate-amended groundwater stimulates indigenous microorganisms to catalyze the reduction of U(VI) to a sparingly soluble U(IV) mineral. The interplay between the flow, acetate loading periods and rates, microbially-mediated and geochemical reactions leads to dynamic behavior in metal- and sulfate-reducing bacteria, pH, alkalinity, and reactive mineral surfaces. The benchmark is based on an 8.5 m long one-dimensional model domain with constant saturated flow and uniform porosity. The 159-day simulation introduces acetate and bromide through the upgradient boundary in 14-day and 85-day pulses separated by a 10-day interruption. Acetate loading is tripled during the second pulse, which is followed by a 50-day recovery period. Terminal electron accepting processes for goethite, phyllosilicate Fe(III), U(VI), and sulfate are modeled using Monod-type rate laws. Major ion geochemistry modeled includes mineral reactions, as well as aqueous and surface complexation reactions for UO2++, Fe++, and H+. In addition to the dynamics imparted by the transport of the acetate pulses, U(VI) behavior involves the interplay between bioreduction, which is dependent on acetate availability, and speciation-controlled surface complexation, which is dependent on pH, alkalinity and available surface complexation sites. The general difficulty of this benchmark lies in the large number of reactions (74), multiple rate law formulations, a multisite uranium surface complexation model, and the strong interdependency and sensitivity of the reaction processes. Results are presented for three simulators: HYDROGEOCHEM, PHT3D, and PHREEQC.
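
    A Monod-type rate law of the kind used for the terminal electron accepting processes has a compact form. The sketch below shows a dual-Monod rate for acetate-limited U(VI) reduction with purely illustrative parameter values; the benchmark itself couples 74 reactions and a multisite surface complexation model:

        def monod_rate(k_max, biomass, acetate, K_acetate, uvi, K_uvi):
            # Dual-Monod rate law for microbially mediated U(VI) reduction:
            # the rate scales with biomass and saturates in each limiting substrate.
            return (k_max * biomass
                    * acetate / (K_acetate + acetate)
                    * uvi / (K_uvi + uvi))

        # Illustrative values only; units and magnitudes are not the benchmark's
        r = monod_rate(k_max=1e-5, biomass=1e-4, acetate=3e-3, K_acetate=1e-4,
                       uvi=1e-6, K_uvi=5e-7)
        print(f"U(VI) reduction rate: {r:.3e} mol/L/s")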

  10. Estimating the Need for Palliative Radiation Therapy: A Benchmarking Approach

    Energy Technology Data Exchange (ETDEWEB)

    Mackillop, William J., E-mail: william.mackillop@krcc.on.ca [Cancer Care and Epidemiology, Queen's Cancer Research Institute, Queen's University, Kingston, Ontario (Canada); Department of Public Health Sciences, Queen's University, Kingston, Ontario (Canada); Department of Oncology, Queen's University, Kingston, Ontario (Canada); Kong, Weidong [Cancer Care and Epidemiology, Queen's Cancer Research Institute, Queen's University, Kingston, Ontario (Canada)

    2016-01-01

    Purpose: Palliative radiation therapy (PRT) benefits many patients with incurable cancer, but the overall need for PRT is unknown. Our primary objective was to estimate the appropriate rate of use of PRT in Ontario. Methods and Materials: The Ontario Cancer Registry identified patients who died of cancer in Ontario between 2006 and 2010. Comprehensive RT records were linked to the registry. Multivariate analysis identified social and health system-related factors affecting the use of PRT, enabling us to define a benchmark population of patients with unimpeded access to PRT. The proportion of cases treated at any time (PRT_lifetime), the proportion of cases treated in the last 2 years of life (PRT_2y), and the number of courses of PRT per thousand cancer deaths were measured in the benchmark population. These benchmarks were standardized to the characteristics of the overall population, and province-wide PRT rates were then compared to benchmarks. Results: Cases diagnosed at hospitals with no RT on-site, residents of poorer communities, and those who lived farther from an RT center were significantly less likely than others to receive PRT. However, availability of RT at the diagnosing hospital was the dominant factor. Neither socioeconomic status nor distance from home to nearest RT center had a significant effect on the use of PRT in patients diagnosed at a hospital with RT facilities. The benchmark population therefore consisted of patients diagnosed at a hospital with RT facilities. The standardized benchmark for PRT_lifetime was 33.9%, and the corresponding province-wide rate was 28.5%. The standardized benchmark for PRT_2y was 32.4%, and the corresponding province-wide rate was 27.0%. The standardized benchmark for the number of courses of PRT per thousand cancer deaths was 652, and the corresponding province-wide rate was 542. Conclusions: Approximately one-third of patients who die of cancer in Ontario need PRT, but many of them are never
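
    The standardization step here is ordinary direct standardization: stratum-specific benchmark rates applied to the whole population's case mix. A minimal sketch with hypothetical strata and rates (not the Ontario figures):

        def standardized_rate(benchmark_rates, population_mix):
            # Direct standardization: apply stratum-specific benchmark rates
            # to the whole population's case mix.
            assert abs(sum(population_mix.values()) - 1.0) < 1e-9
            return sum(benchmark_rates[s] * w for s, w in population_mix.items())

        rates = {"lung": 0.45, "breast": 0.30, "prostate": 0.35, "other": 0.25}
        mix = {"lung": 0.30, "breast": 0.20, "prostate": 0.15, "other": 0.35}
        print(standardized_rate(rates, mix))  # 0.335, i.e. a PRT_lifetime benchmark of 33.5%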

  11. A Singlet Extension of the Minimal Supersymmetric Standard Model: Towards a More Natural Solution to the Little Hierarchy Problem

    Energy Technology Data Exchange (ETDEWEB)

    de la Puente, Alejandro [Univ. of Notre Dame, IN (United States)

    2012-05-01

    In this work, I present a generalization of the Next-to-Minimal Supersymmetric Standard Model (NMSSM), with an explicit μ-term and a supersymmetric mass for the singlet superfield, as a route to alleviating the little hierarchy problem of the Minimal Supersymmetric Standard Model (MSSM). I analyze two limiting cases of the model, characterized by the size of the supersymmetric mass for the singlet superfield. The small and large limits of this mass parameter are studied, and I find that I can generate masses for the lightest neutral Higgs boson up to 140 GeV with top squarks below the TeV scale, all couplings perturbative up to the gauge unification scale, and with no need to fine-tune parameters in the scalar potential. This model, which I call the S-MSSM, is also embedded in a gauge-mediated supersymmetry breaking scheme. I find that even with a minimal embedding of the S-MSSM into a gauge-mediated scheme, the mass for the lightest Higgs boson can easily be above 114 GeV, while keeping the top squarks below the TeV scale. Furthermore, I also study the forward-backward asymmetry in the tt̄ system within the framework of the S-MSSM. For this purpose, non-renormalizable couplings between the first and third generations of quarks and scalars are introduced. The two limiting cases of the S-MSSM, characterized by the size of the supersymmetric mass for the singlet superfield, are analyzed, and I find that in the region of small singlet supersymmetric mass a large asymmetry can be obtained while remaining consistent with constraints arising from flavor physics, quark masses and top quark decays.

  12. Problems of drawing up standards for persons simultaneously engaged in more than one activity involving radiation hazards

    International Nuclear Information System (INIS)

    Lucci, F.; Pelliccioni, M.

    1979-01-01

    The authors examine, from the points of view of the ICRP recommendations and of national and international standards, radiation protection problems posed by persons simultaneously engaged in professional activities involving radiation hazards in more than one place. The consequences of this type of situation, for the radiological protection classification of workers and for the evaluation and recording of doses received, are described in detail. In order to ensure proper monitoring of doses, agreements must be reached in advance between those in charge of the different areas of activity. Three cases seem to be of particular relevance: (a) that of workers who, while working for a single employer, perform in more than one place activities in which they are exposed to ionizing radiation (scientists working at different research centres, employees of companies specialized in the nuclear field, including the use of isotopes, accelerators, etc.); (b) that of workers who are engaged by more than one employer and are exposed to ionizing radiations as a result of their activities at different establishments (a special case is that of doctors who are radiologists or specialists in some other branch of nuclear medicine and work both as employees and independently in their own practices); and (c) that of employees of outside organizations not directly concerned with nuclear activities who are only exposed to ionizing radiation when called upon to work in establishments possessing sources of radiation. Finally, the authors suggest some ways of solving these problems - though they are rather difficult to define objectively (for example the case of medical practitioners). (author)

  13. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Wegner, P.; Wettig, T.

    2003-09-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC. (orig.)

  14. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Stueben, H.; Wegner, P.; Wettig, T.; Wittig, H.

    2004-01-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC.

  15. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

    Tourism development has an irreplaceable role in the regional policy of almost all countries. This is due to its undeniable benefits for the local population with regard to the economic, social and environmental spheres. Tourist destinations compete for visitors in the tourism market and subsequently enter a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and of the final strategies is a key factor of competitiveness. Even though the tourism sector is not a typical field where benchmarking methods are widely used, such approaches can be successfully applied. The paper focuses on a key phase of the benchmarking process, which lies in the search for suitable reference partners. The partners are consequently selected to meet general requirements that ensure the quality of strategies. Following from this, some specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies from regions in the Czech Republic, Slovakia and Great Britain. In this way it validates the selected criteria in an international setting. Hence, it makes it possible to find the strengths and weaknesses of the selected strategies and at the same time facilitates the discovery of suitable benchmarking partners.

  16. BENCHMARKING LEARNER EDUCATION USING ONLINE BUSINESS SIMULATION

    Directory of Open Access Journals (Sweden)

    Alfred H. Miller

    2016-06-01

    For programmatic accreditation by the Accreditation Council of Business Schools and Programs (ACBSP), business programs are required to meet STANDARD #4, Measurement and Analysis of Student Learning and Performance. Business units must demonstrate that outcome assessment systems are in place, using documented evidence that shows how the results are being used to further develop or improve the academic business program. The Higher Colleges of Technology, a 17-campus federal university in the United Arab Emirates, differentiates its applied degree programs through a 'learning by doing' ethos, which permeates the entire curriculum. This paper documents benchmarking of education for managing innovation. Using business simulation with Bachelor of Business Year 3 learners in a business strategy class, learners explored through a simulated environment the following functional areas: research and development, production, and marketing of a technology product. Student teams were required to use finite resources and compete against other student teams in the same universe. The study employed an instrument developed in a 60-sample pilot study of business simulation learners, against which subsequent learners participating in online business simulation could be benchmarked. The results showed incremental improvement in the program due to changes made in assessment strategies, including the oral defense.

  17. Building America Research Benchmark Definition: Updated December 20, 2007

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, R.

    2008-01-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams. The Benchmark is generally consistent with mid-1990s standard practice, as reflected in the Home Energy Rating System (HERS) Technical Guidelines (RESNET 2002), with additional definitions that allow the analyst to evaluate all residential end-uses, an extension of the traditional HERS rating approach that focuses on space conditioning and hot water. Unlike the reference homes used for HERS, EnergyStar, and most energy codes, the Benchmark represents typical construction at a fixed point in time so it can be used as the basis for Building America's multi-year energy savings goals without the complication of chasing a 'moving target'.

  18. Building America Research Benchmark Definition, Updated December 15, 2006

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, R.

    2007-01-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams. The Benchmark is generally consistent with mid-1990s standard practice, as reflected in the Home Energy Rating System (HERS) Technical Guidelines (RESNET 2002), with additional definitions that allow the analyst to evaluate all residential end-uses, an extension of the traditional HERS rating approach that focuses on space conditioning and hot water. Unlike the reference homes used for HERS, EnergyStar, and most energy codes, the Benchmark represents typical construction at a fixed point in time so it can be used as the basis for Building America's multi-year energy savings goals without the complication of chasing a 'moving target'.

  19. Building America Research Benchmark Definition: Updated August 15, 2007

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, R.

    2007-09-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams. The Benchmark is generally consistent with mid-1990s standard practice, as reflected in the Home Energy Rating System (HERS) Technical Guidelines (RESNET 2002), with additional definitions that allow the analyst to evaluate all residential end-uses, an extension of the traditional HERS rating approach that focuses on space conditioning and hot water. Unlike the reference homes used for HERS, EnergyStar, and most energy codes, the Benchmark represents typical construction at a fixed point in time so it can be used as the basis for Building America's multi-year energy savings goals without the complication of chasing a 'moving target'.

  20. Simple mathematical law benchmarks human confrontations

    Science.gov (United States)

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-12-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.
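
    The law reported by this group takes the form of a power-law progress curve for the escalation of attacks, tau_n ≈ tau_1 * n^(-b). Assuming that form, the escalation rate b can be fitted by linear regression in log-log space; the inter-event times below are made up:

        import numpy as np

        def fit_progress_curve(intervals):
            # Fit tau_n = tau_1 * n**(-b) by regressing log(tau_n) on log(n).
            n = np.arange(1, len(intervals) + 1)
            slope, intercept = np.polyfit(np.log(n), np.log(intervals), 1)
            return -slope, np.exp(intercept)  # escalation rate b, first interval tau_1

        # Made-up inter-attack intervals (days) that shorten with each event
        taus = [100, 55, 38, 30, 24, 21, 18, 16]
        b, tau1 = fit_progress_curve(taus)
        print(f"b = {b:.2f}, tau_1 = {tau1:.1f} days")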

  1. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used across services in the National Health Service (NHS) through various benchmarking programs. Clinical photography services do not have a program in place and have to rely on ad hoc surveys of other services. A trial benchmarking exercise was therefore undertaken with 13 services in NHS Trusts. It highlighted valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  2. Developing a benchmark for emotional analysis of music.

    Science.gov (United States)

    Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field has rapidly expanded in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent neural network based approaches combined with large feature sets work best for dynamic MER.
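
    The headline evaluation in this kind of benchmark is per-song agreement between predicted and annotated valence/arousal tracks. A minimal sketch, assuming RMSE and Pearson correlation as the metrics, on made-up 2 Hz tracks:

        import numpy as np

        def evaluate_dynamic_mer(predictions, annotations):
            # Per-song RMSE and Pearson r between predicted and annotated
            # valence (or arousal) tracks, averaged across the test set.
            rmses, rs = [], []
            for pred, truth in zip(predictions, annotations):
                pred, truth = np.asarray(pred), np.asarray(truth)
                rmses.append(np.sqrt(np.mean((pred - truth) ** 2)))
                rs.append(np.corrcoef(pred, truth)[0, 1])
            return float(np.mean(rmses)), float(np.mean(rs))

        # Two made-up songs, valence sampled at 2 Hz
        preds = [[0.10, 0.20, 0.30, 0.20], [-0.40, -0.20, 0.00, 0.10]]
        truth = [[0.00, 0.25, 0.35, 0.30], [-0.50, -0.10, 0.05, 0.20]]
        print(evaluate_dynamic_mer(preds, truth))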

  3. Developing a benchmark for emotional analysis of music.

    Directory of Open Access Journals (Sweden)

    Anna Aljanaki

    The music emotion recognition (MER) field has rapidly expanded in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent neural network based approaches combined with large feature sets work best for dynamic MER.

  4. WWER-1000 Burnup Credit Benchmark (CB5)

    International Nuclear Information System (INIS)

    Manolova, M.A.

    2002-01-01

    In this paper the specification of the first phase (depletion calculations) of the WWER-1000 Burnup Credit Benchmark (CB5) is given. The second phase, criticality calculations for the WWER-1000 fuel pin cell, will be given after the evaluation of the results obtained in the first phase. The proposed benchmark is a continuation of the WWER benchmark activities in this field. (Author)

  5. A Framework for Urban Transport Benchmarking

    OpenAIRE

    Theuns Henning; Mohammed Dalil Essakali; Jung Eun Oh

    2011-01-01

    This report summarizes the findings of a study aimed at exploring key elements of a benchmarking framework for urban transport. Unlike many industries where benchmarking has proven to be successful and straightforward, the multitude of the actors and interactions involved in urban transport systems may make benchmarking a complex endeavor. It was therefore important to analyze what has bee...

  6. MISTRA facility for containment lumped parameter and CFD codes validation. Example of the International Standard Problem ISP47

    International Nuclear Information System (INIS)

    Tkatschenko, I.; Studer, E.; Paillere, H.

    2005-01-01

    During a severe accident in a Pressurized Water Reactor (PWR), the formation of a combustible gas mixture in the complex geometry of the reactor depends on the understanding of hydrogen production, the complex 3D thermal-hydraulic flows due to gas/steam injection, natural convection, heat transfer by condensation on walls and the effect of mitigation devices. Numerical simulation of such flows may be performed either by Lumped Parameter (LP) or by Computational Fluid Dynamics (CFD) codes. Advantages and drawbacks of LP and CFD codes are well known. LP codes are mainly developed for full-size containment analysis, but they need improvements, especially since they are not able to accurately predict the local gas mixing within the containment. CFD codes require a process of validation on well-instrumented experimental data before they can be used with a high degree of confidence. The MISTRA coupled-effect test facility has been built at CEA to fulfil this validation objective: with numerous measurement points in the gaseous volume - temperature, gas concentration, velocity and turbulence - and with well-controlled boundary conditions. As an illustration of both the experimental and simulation aspects of this topic, a recent example of the use of MISTRA test data is presented for the case of the International Standard Problem ISP47. The proposed experimental work in the MISTRA facility provides essential data to fill the gaps in the modelling/validation of computational tools. (author)

  7. Exploration of problem-based learning combined with standardized patient in the teaching of basic science of ophthalmology

    Directory of Open Access Journals (Sweden)

    Jin Yan

    2015-08-01

    AIM: To investigate the effect of problem-based learning (PBL) combined with standardized patient (SP) in the teaching of basic science of ophthalmology. METHODS: Sixty-four students of Optometry in grade 2012 were randomly divided into an experimental group (n=32) and a control group (n=32). The traditional teaching method was implemented in the control group, while PBL combined with SP was applied in the experimental group. At the end of the term, students were interviewed using a self-administered questionnaire to obtain their evaluation of the teaching effect. Measurement data were expressed as x̄±s and analyzed by the independent-samples t test. Enumeration data were analyzed by the χ² test, and P<0.05 was considered statistically significant. RESULTS: The mean scores of the theory test (83.22±3.75) and the experimental test (94.28±2.20) in the experimental group were significantly higher than those of the theory test (70.72±3.95) and experimental test (85.44±3.52) in the control group (all P<0.05). CONCLUSION: Using the PBL combined with SP teaching mode in the basic science of ophthalmology can greatly improve students' learning enthusiasm and cultivate their self-learning ability, practical ability and ability of clinical analysis.

  8. Attila calculations for the 3-D C5G7 benchmark extension

    International Nuclear Information System (INIS)

    Wareing, T.A.; McGhee, J.M.; Barnett, D.A.; Failla, G.A.

    2005-01-01

    The performance of the Attila radiation transport software was evaluated for the 3-D C5G7 MOX benchmark extension, a follow-on study to the MOX benchmark developed by the 'OECD/NEA Expert Group on 3-D Radiation Transport Benchmarks'. These benchmarks were designed to test the ability of modern deterministic transport methods to model reactor problems without spatial homogenization. Attila is a general purpose radiation transport software package with an integrated graphical user interface (GUI) for analysis, set-up and postprocessing. Attila provides solutions to the discrete-ordinates form of the linear Boltzmann transport equation on a fully unstructured, tetrahedral mesh using linear discontinuous finite-element spatial differencing in conjunction with diffusion synthetic acceleration of inner iterations. The results obtained indicate that Attila can accurately solve the benchmark problem without spatial homogenization. (authors)

  9. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  10. The misinterpretation of the standard error of measurement in medical education: a primer on the problems, pitfalls and peculiarities of the three different standard errors of measurement.

    Science.gov (United States)

    McManus, I C

    2012-01-01

In high-stakes assessments in medical education, such as final undergraduate examinations and postgraduate assessments, an attempt is frequently made to set confidence limits on the probable true score of a candidate. Typically, this is carried out using what is referred to as the standard error of measurement (SEM). However, it is often the case that the wrong formula is applied, there actually being three different formulae for use in different situations. This article explains and clarifies the calculation of the SEM and differentiates three separate standard errors, here called the standard error of measurement (SEmeas), the standard error of estimation (SEest) and the standard error of prediction (SEpred). Most accounts describe the calculation of SEmeas. For most purposes, though, what is required is the standard error of estimation (SEest), which has to be applied not to a candidate's actual score but to their estimated true score, after taking into account the regression to the mean that occurs due to the unreliability of an assessment. A third formula, the standard error of prediction (SEpred), is less commonly used in medical education, but is useful in situations such as counselling, where one needs to predict a future actual score on an examination from a previous actual score on the same examination. The various formulae can produce predictions that differ quite substantially, particularly when reliability is not high and the mark in question is far removed from the average performance of candidates. That can have important, unintended consequences, particularly in a medico-legal context.
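For readers who want the formulae the abstract alludes to, the standard classical-test-theory expressions are quoted below as general psychometric results (the paper itself defines the exact usage), with r the test reliability, σ the standard deviation of observed scores, X an observed score and x̄ the cohort mean:

```latex
% The three standard errors in classical test theory notation.
\begin{align*}
  SE_{\mathrm{meas}} &= \sigma\sqrt{1-r} \\       % error band around an observed score
  SE_{\mathrm{est}}  &= \sigma\sqrt{r(1-r)} \\    % error band around the estimated true score
  SE_{\mathrm{pred}} &= \sigma\sqrt{1-r^{2}} \\   % error band when predicting a retest score
  \hat{T}            &= \bar{x} + r\,(X-\bar{x})  % estimated true score (regression to the mean)
\end{align*}
```

The regression-to-the-mean step is why SEest must be centred on the estimated true score T̂ rather than on the raw mark X, as the abstract emphasizes.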

  11. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  12. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  13. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  14. Closed-Loop Neuromorphic Benchmarks

    Directory of Open Access Journals (Sweden)

    Terrence C Stewart

    2015-12-01

Full Text Available Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of minimal simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled.

  15. Use of benchmark criticals in fast reactor code validation

    International Nuclear Information System (INIS)

    Curtis, R.; Kelber, C.; Luck, L.; Smith, L.R.

    1980-01-01

The problem discussed is how to check the accuracy of the SIMMER code used for the analysis of hypothetical core disruptive accidents. A three-step process is used for code checking: benchmark criticals in ZPR-9; Monte Carlo analog calculations to isolate errors arising from cross-section data and to establish a secondary standard; and comparison between the secondary standard, SIMMER neutronics, and other transport approximations for configurations of interest. The VIM Monte Carlo code is used as such a secondary standard. The analysis with VIM of the experiments in ZPR-9 using ENDF/B-IV cross-section data yields the following conclusions: (1) A systematic change in bias exists in the analysis going from a reference configuration to a slumped configuration. This change is larger than β and must be attributed to errors in cross-section data, since the Monte Carlo simulation reproduces every significant detail of the experiment. (2) Transport (SN) calculations show the same trends in the bias as the Monte Carlo studies. Thus, the processes used in the construction of group cross-sections appear adequate. Further, the SN-VIM agreement appears to argue against gross errors in code or input. (3) Comparison with diffusion theory (using the same cross-section set) indicates that conventional diffusion theory has an opposite change in bias. (4) The change in bias in calculating the reactivity worth of slumped fuel is dramatic: transport theory overpredicts positive worths while diffusion theory underpredicts them. Thus, reactivity ramp rates at prompt critical may be substantially underpredicted if there has been substantial fuel or coolant movement and diffusion theory has been used.

  16. A thermo mechanical benchmark calculation of a hexagonal can in the BTI accident with INCA code

    International Nuclear Information System (INIS)

    Zucchini, A.

    1988-01-01

The thermomechanical behaviour of a hexagonal can in a benchmark problem (simulating the conditions of a BTI accident in a fuel assembly) is examined by means of the INCA code, and the results are systematically compared with those of ADINA.

  17. Adapting benchmarking to project management : an analysis of project management processes, metrics, and benchmarking process models

    OpenAIRE

    Emhjellen, Kjetil

    1997-01-01

    Avhandling (dr.ing.) - Høgskolen i Telemark / Norges teknisk-naturvitenskapelige universitet Since the first publication on benchmarking in 1989 by Robert C. Camp of “Benchmarking: The search for Industry Best Practices that Lead to Superior Performance”, the improvement technique benchmarking has been established as an important tool in the process focused manufacturing or production environment. The use of benchmarking has expanded to other types of industry. Benchmarking has past t...

  18. Guideline for benchmarking thermal treatment systems for low-level mixed waste

    International Nuclear Information System (INIS)

    Hoffman, D.P.; Gibson, L.V. Jr.; Hermes, W.H.; Bastian, R.E.; Davis, W.T.

    1994-01-01

A process for benchmarking low-level mixed waste (LLMW) treatment technologies has been developed. When used in conjunction with the identification and preparation of surrogate waste mixtures, and with defined quality assurance and quality control procedures, the benchmarking process will effectively streamline the selection of treatment technologies being considered by the US Department of Energy (DOE) for LLMW cleanup and management. Following the quantitative template provided in the benchmarking process will greatly increase the technical information available for the decision-making process. The additional technical information will remove a large part of the uncertainty in the selection of treatment technologies. It is anticipated that the use of the benchmarking process will minimize technology development costs and overall treatment costs. In addition, the benchmarking process will enhance development of the most promising LLMW treatment processes and aid in transferring the technology to the private sector. To instill inherent quality, the benchmarking process is based on defined criteria and a structured evaluation format, which are independent of any specific conventional treatment or emerging process technology. Five categories of benchmarking criteria have been developed for the evaluation: operation/design; personnel health and safety; economics; product quality; and environmental quality. This benchmarking document gives specific guidance on what information should be included and how it should be presented. A standard format for reporting is included in Appendices A and B of this document. Special considerations for LLMW are presented and included in each of the benchmarking categories.

  19. The CSNI International Standard Problem Programme: Overall Presentation on Objectives; Rationale and Lessons Learnt: a Joint Venture of the Thermalhydraulic International Community

    International Nuclear Information System (INIS)

    Reocreux, M.

    2008-01-01

The CSNI International Standard Problems have been one of the key activities of the CSNI thermal-hydraulics groups during the last 25 years. After recalling how the International Standard Problems were initiated in the late 1970s (they were called CSNI LOCA Standard Problems at that time), the report describes the process that turned the ISPs into a full CSNI activity. Rules were defined which formalized the way experimental results were provided and the way the comparison exercises were performed. The long series of ISPs from 1975 up to the present day is described, explaining the different trends in ISP choices. The findings obtained are reviewed from both technical and programmatic perspectives.

  20. When one size does not fit all : A problem of fit rather than failure for voluntary management standards

    NARCIS (Netherlands)

    Simpson, Dayna; Power, Damien; Klassen, Robert

    Voluntary management standards for social and environmental performance ideally help to define and improve firms' related capabilities. These standards, however, have largely failed to improve such performance as intended. Over-emphasis on institutional factors leading to adoption of these standards

  1. Toxicological benchmarks for screening potential contaminants of concern for effects on terrestrial plants. Environmental Restoration Program

    Energy Technology Data Exchange (ETDEWEB)

    Suter, G.W. II; Will, M.E.; Evans, C.

    1993-09-01

One of the initial stages in ecological risk assessment for hazardous waste sites is the screening of contaminants to determine which of them are worthy of further consideration as "contaminants of potential concern." This process is termed "contaminant screening." It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 34 chemicals potentially associated with US Department of Energy (DOE) sites. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern. The purpose of this report is to present plant toxicity data and discuss their utility as benchmarks for determining the hazard to terrestrial plants caused by contaminants in soil. Benchmarks are provided for soils and solutions.
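The two-part screening rule described above is simple enough to state as code; the sketch below uses invented concentrations purely to illustrate the logic:

```python
# A minimal sketch of the screening rule: a chemical is a contaminant
# of potential concern (COPC) only if its measured soil concentration
# exceeds BOTH the phytotoxicity benchmark AND the soil background.
# All numbers below are invented for illustration.
benchmarks = {"As": 10.0, "Cu": 100.0, "Zn": 50.0}   # mg/kg, hypothetical
background = {"As": 8.0, "Cu": 30.0, "Zn": 60.0}     # mg/kg, hypothetical
measured   = {"As": 12.0, "Cu": 40.0, "Zn": 55.0}    # mg/kg, hypothetical

copc = [chem for chem, conc in measured.items()
        if conc > benchmarks[chem] and conc > background[chem]]
print(copc)   # ['As'] -- Cu is under its benchmark, Zn under background
```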

  2. Toxicological benchmarks for screening potential contaminants of concern for effects on terrestrial plants

    International Nuclear Information System (INIS)

    Suter, G.W. II; Will, M.E.; Evans, C.

    1993-09-01

One of the initial stages in ecological risk assessment for hazardous waste sites is the screening of contaminants to determine which of them are worthy of further consideration as "contaminants of potential concern." This process is termed "contaminant screening." It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 34 chemicals potentially associated with US Department of Energy (DOE) sites. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern. The purpose of this report is to present plant toxicity data and discuss their utility as benchmarks for determining the hazard to terrestrial plants caused by contaminants in soil. Benchmarks are provided for soils and solutions.

  3. Application of NEA/CSNI standard problem 3 (blowdown and flow reversal in the IETA-1 rig) to the validation of the RELAP-UK Mk IV code

    International Nuclear Information System (INIS)

    Bryce, W.M.

    1977-10-01

NEA/CSNI Standard Problem 3 consists of the modelling of an experiment on the IETA-1 rig, in which there is initially flow upwards through a feeder, heated section and riser. The inlet and outlet are then closed and a breach opened at the bottom so that the flow reverses and the rig depressurises. Calculations of this problem by many countries using several computer codes have been reported and show a wide spread of results. The purpose of the study reported here was threefold: first, to show the sensitivity of the calculation of Standard Problem 3; second, to perform an ab initio best-estimate calculation using the RELAP-UK Mark IV code with the standard recommended options; and third, to use the results of the sensitivity study to show where tuning of the RELAP-UK Mark IV recommended model options was required. This study has shown that the calculation of Standard Problem 3 is sensitive to model assumptions and that the loss-of-coolant accident code RELAP-UK Mk IV with the standard recommended model options predicts the experimental results very well over most of the transient. (U.K.)

  4. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

Benchmarking was very interesting and provided a wealth of information: (1) we did see potential solutions to some of our "top 10" issues; (2) we now have an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes; (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) they expressed a desire to participate in our training and to provide feedback on procedures; (2) they welcomed the opportunity to provide feedback on working with NASA.

  5. Sensitivity Analysis of OECD Benchmark Tests in BISON

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON fuels performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
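As an illustration of the sensitivity measures named here, the sketch below computes Pearson and Spearman coefficients over a synthetic sample matrix of the same shape (300 samples by 17 inputs). It is not the Dakota workflow itself, and the response function is invented:

```python
# Sketch of rank/linear correlation screening over a sampled study:
# for each input column, correlate it against a response vector and
# report the influential ones. scipy is used purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 17))          # 17 sampled input parameters
y = 3.0 * X[:, 0] + X[:, 4] ** 2 + 0.1 * rng.normal(size=300)  # a response

for j in range(X.shape[1]):
    pearson, _ = stats.pearsonr(X[:, j], y)     # linear association
    spearman, _ = stats.spearmanr(X[:, j], y)   # monotonic (rank) association
    if abs(pearson) > 0.3:                      # report only influential inputs
        print(f"input {j}: pearson={pearson:+.2f}, spearman={spearman:+.2f}")
```

Sobol' variance-based indices require a dedicated sampling design rather than a plain correlation pass, which is one reason tools like Dakota manage the sample generation as well as the analysis.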

  6. Comparison report on the blind phase of the OECD International Standard Problem no. 45 exercise (QUENCH-06)

    International Nuclear Information System (INIS)

    Hering, W.; Homann, C.; Lamy, J.S.

    2002-03-01

The International Standard Problem (ISP) No. 45 is part of the overall ISP program of the OECD/NEA and is dedicated to the behavior of heat-up and delayed reflood of fuel elements in nuclear reactors. ISP-45 is related to the out-of-pile bundle quench experiment QUENCH-06, performed at Forschungszentrum Karlsruhe (FZK), Germany, on December 13, 2000. Special attention was paid to hydrogen production. To assess the ability of severe accident codes to simulate processes during core heat-up and reflood at temperatures above 2000 K, the behavior of the bundle during the whole experiment was to be calculated on the basis of experimental initial and boundary conditions, but without knowledge of further experimental details (blind phase). In the blind phase, 21 participants from 15 nations contributed with 8 different code systems (ATHLET-CD, ICARE/CATHARE, IMPACT/SAMPSON, GENFLO, MAAP, MELCOR, SCDAPSIM, SCDAP-3D). After the end of the blind phase, all measured data were made available and the participants were invited to deliver a second calculation in which this knowledge could be used (open phase). In this report, results of the blind calculations are presented, analyzed, and compared to experimental data. Additionally, post-test calculations using the in-house version SCDAP/RELAP5 mod3.2.irs are used for comparison. During heat-up, most results do not deviate significantly from one another, except as a consequence of some obvious user errors, so that a definition of a mainstream is justified. During quenching, the lack of adequate hydraulic modeling becomes obvious: some participants could not match the observed cool-down rates, while others had to use a very fine mesh to compensate for code deficiencies. To overcome this insufficiency, some newly developed reflood models were used in MAAP and MELCOR. In QUENCH-06, the sufficiently thick oxide layers protected the cladding from melting and failure below 2200 K, so that no massive hydrogen release during reflood was found. This behavior

  7. Distribution of hydrogen within the HDR-containment under severe accident conditions. OECD standard problem. Final comparison report

    International Nuclear Information System (INIS)

    Karwat, H.

    1992-08-01

The present report summarizes the results of the International Standard Problem Exercise ISP-29, based on the HDR hydrogen distribution experiment E11.2. Post-test analyses are compared to experimentally measured parameters that were well known to the analysts. This report has been prepared by the Institute for Reactor Dynamics and Reactor Safety of the Technical University Munich under contract with the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS), which received funding for this activity from the German Ministry for Research and Technology (BMFT) under research contract RS 792. The HDR experiment E11.2 was performed by the Kernforschungszentrum Karlsruhe (KfK) in the frame of the 'Projekt HDR-Sicherheitsprogramm' sponsored by the BMFT. Ten institutions from eight countries participated in the post-test analysis exercise, which focused on the long-lasting gas distribution processes expected inside a PWR containment under severe accident conditions. The gas release experiment was coupled to a long-lasting steam release into the containment, typical for an unmitigated small-break loss-of-coolant accident. In lieu of pure hydrogen, a gas mixture of 15% hydrogen and 85% helium was applied in order to avoid reaching flammability during the experiment. Of central importance are common overlay plots comparing calculated transients with measurements of the global pressure and the local temperature, steam and gas concentration distributions throughout the entire HDR containment. The comparisons indicate relatively large margins between most calculations and the experiment. Bearing in mind that this exercise was specified as an 'open post-test' analysis of well-known measured data, the reasons for discrepancies between measurements and simulations were extensively discussed during a final workshop. It was concluded that analytical shortcomings as well as some uncertainties in experimental boundary conditions may be responsible for the deviations.

  8. Problems of Technical Standards Teaching in the Context of the Globalization and Euro-Integration in Higher Education System of Ukraine

    Science.gov (United States)

    Kornuta, Olena; Pryhorovska, Tetiana

    2015-01-01

Globalization and Ukraine's association with the EU imply the inclusion of Ukrainian universities in the world scientific space. The aim of this article is to analyze the problem of teaching drawing standards, based on the experience of Ivano-Frankivsk National Technical University of Oil and Gas (Ukraine), and to summarize the experience of post-Soviet…

  9. Reevaluation of the Jezebel Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-03-10

    Every nuclear engineering student is familiar with Jezebel, the homogeneous bare sphere of plutonium first assembled at Los Alamos in 1954-1955. The actual Jezebel assembly was neither homogeneous, nor bare, nor spherical; nor was it singular – there were hundreds of Jezebel configurations assembled. The Jezebel benchmark has been reevaluated for the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Logbooks, original drawings, mass accountability statements, internal reports, and published reports have been used to model four actual three-dimensional Jezebel assemblies with high fidelity. Because the documentation available today is often inconsistent, three major assumptions were made regarding plutonium part masses and dimensions. The first was that the assembly masses given in Los Alamos report LA-4208 (1969) were correct, and the second was that the original drawing dimension for the polar height of a certain major part was correct. The third assumption was that a change notice indicated on the original drawing was not actually implemented. This talk will describe these assumptions, the alternatives, and the implications. Since the publication of the 2013 ICSBEP Handbook, the actual masses of the major components have turned up. Our assumption regarding the assembly masses was proven correct, but we had the mass distribution incorrect. Work to incorporate the new information is ongoing, and this talk will describe the latest assessment.

  10. Benchmark calculation of subchannel analysis codes

    International Nuclear Information System (INIS)

    1996-02-01

In order to evaluate the analysis capabilities of various subchannel codes used in the thermal-hydraulic design of light water reactors, benchmark calculations were performed. The selected benchmark problems and the major findings obtained from the calculations were as follows: (1) For single-phase flow mixing experiments between two channels, the calculated water temperature distributions along the flow direction agreed with experimental results when the turbulent mixing coefficients were tuned properly. However, the effect of gap width observed in the experiments could not be predicted by the subchannel codes. (2) For two-phase flow mixing experiments between two channels, in high water flow rate cases the calculated distributions of air and water flows in each channel agreed well with the experimental results. In low water flow cases, on the other hand, the air mixing rates were underestimated. (3) For two-phase flow mixing experiments among multiple channels, the calculated mass velocities at the channel exit under steady-state conditions agreed with experimental values within about 10%. However, the predictive errors of exit qualities were as high as 30%. (4) For critical heat flux (CHF) experiments, two different results were obtained. One code indicated that the calculated CHFs using the KfK or EPRI correlations agreed well with the experimental results, while another code suggested that the CHFs were well predicted by using the WSC-2 correlation or the Weisman-Pei mechanistic model. (5) For droplet entrainment and deposition experiments, it was indicated that the predictive capability was significantly increased by improving the correlations. On the other hand, a remarkable discrepancy between codes was observed: one code underestimated the droplet flow rate and overestimated the liquid film flow rate in high quality cases, while another code overestimated the droplet flow rate and underestimated the liquid film flow rate in low quality cases. (J.P.N.)

  11. Alcoholism in the Families of Origin of MSW Students: Estimating the Prevalence of Mental Health Problems Using Standardized Measures.

    Science.gov (United States)

    Hawkins, Catherine A.; Hawkins, Raymond C., II

    1996-01-01

    A 1991 study of 136 graduate social work students determined students' status as adult children of alcoholics (ACAs) by self-report and standardized screening test scores, and evaluated mental health functioning with four standardized measures. Results found that 47% of the social work students were ACAs, and not all (or only) ACAs were vulnerable…

  12. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in the literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  13. Effects of training self-assessment and using assessment standards on retrospective and prospective monitoring of problem solving

    NARCIS (Netherlands)

    Baars, Martine; Vink, Sigrid; van Gog, Tamara; de Bruin, Anique; Paas, Fred

    2014-01-01

    Both retrospective and prospective monitoring are considered important for self-regulated learning of problem-solving skills. Retrospective monitoring (or self-assessment; SA) refers to students' assessments of how well they performed on a problem just completed. Prospective monitoring (or Judgments

  14. Understanding Problem-Solving Errors by Students with Learning Disabilities in Standards-Based and Traditional Curricula

    Science.gov (United States)

    Bouck, Emily C.; Bouck, Mary K.; Joshi, Gauri S.; Johnson, Linley

    2016-01-01

    Students with learning disabilities struggle with word problems in mathematics classes. Understanding the type of errors students make when working through such mathematical problems can further describe student performance and highlight student difficulties. Through the use of error codes, researchers analyzed the type of errors made by 14 sixth…

  15. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision…

  16. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  17. International standard problem (ISP) No. 43 Rapid boron-dilution transient tests for code verification. Comparison report

    International Nuclear Information System (INIS)

    2001-03-01

International Standard Problem No. 43 (ISP 43) addresses the nuclear industry's present capabilities of simulating the fluid dynamics aspects of a subset of rapid boron-dilution transients. Specifically, the exercise focuses on the sequence involving the transport of a boron-dilute slug through the actuation of a pump. The slug is formed on the primary side of the steam generator as a consequence of an interfacing-system leak from the secondary un-borated coolant. Experimental data were collected using the University of Maryland 2 x 4 Thermalhydraulic Loop (UM 2 x 4 Loop) and the Boron-mixing Visualization Facility. Two blind test series were proposed during the first workshop (October 1998) and refined using participant input. The first series, test series A, deals with the injection of a front, i.e., a single interface between borated and dilute fluids. The second blind series, test series B, is the more realistic injection of a slug, i.e., a dilute fluid volume preceded and followed by the borated coolant of the primary system. Data are collected in the UM 2 x 4 Loop, and refined details are obtained from the Visualization Facility, which is a replica of the Loop's vessel downcomer. In the Loop experimental program, the dilute volume is simulated by cold water and the borated primary coolant is simulated by hot water. The Visualization Facility uses dye to mark the diluted front or slug. The measured boundary conditions for both test series include the initial temperature of the primary system, the front/slug injection flowrate and temperature, and the pressure drop across the core. Temperature data are collected at 185 thermocouple positions in the downcomer and 38 positions in the lower plenum. The advancement of the front/slug through the system is monitored at discrete horizontal levels that contain the thermocouples. The performance of codes is measured relative to a set of figures of merit. During the first workshop, the principal figure of merit was

  18. The analysis of one-dimensional reactor kinetics benchmark computations

    International Nuclear Information System (INIS)

    Sidell, J.

    1975-11-01

    During March 1973 the European American Committee on Reactor Physics proposed a series of simple one-dimensional reactor kinetics problems, with the intention of comparing the relative efficiencies of the numerical methods employed in various codes, which are currently in use in many national laboratories. This report reviews the contributions submitted to this benchmark exercise and attempts to assess the relative merits and drawbacks of the various theoretical and computer methods. (author)

  19. Generating Shifting Workloads to Benchmark Adaptability in Relational Database Systems

    Science.gov (United States)

    Rabl, Tilmann; Lang, Andreas; Hackl, Thomas; Sick, Bernhard; Kosch, Harald

A large body of research concerns the adaptability of database systems. Many commercial systems already contain autonomic processes that adapt configurations as well as data structures and data organization. Yet there is virtually no possibility for a fair measurement of the quality of such optimizations. While standard benchmarks have been developed that simulate real-world database applications very precisely, none of them considers variations in workloads produced by human factors. Today's benchmarks test the performance of database systems by measuring peak performance on homogeneous request streams. Nevertheless, in systems with user interaction, access patterns are constantly shifting. We present a benchmark that simulates a web information system with interaction of large user groups. It is based on the analysis of a real online eLearning management system with 15,000 users. The benchmark considers the temporal dependency of user interaction. Its main focus is to measure the adaptability of a database management system according to shifting workloads.
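A shifting workload of the kind the benchmark generates can be illustrated in a few lines: the mix of request types drifts over simulated time, so a system tuned to the early mix must adapt. The request names and drift pattern below are invented for illustration:

```python
# Illustrative shifting-workload generator: the probability of each
# request type varies smoothly with simulated time, so the aggregate
# access pattern the database sees is non-stationary.
import math
import random

TYPES = ["browse", "search", "submit", "admin"]

def request_mix(t, period=24.0):
    """Time-varying weights: 'browse' dominates early, 'submit' late."""
    phase = math.sin(2 * math.pi * t / period)
    weights = [2 + phase, 1.0, 2 - phase, 0.2]
    total = sum(weights)
    return [w / total for w in weights]

random.seed(1)
for hour in range(0, 24, 6):
    sample = random.choices(TYPES, weights=request_mix(hour), k=10)
    print(hour, sample)
```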

  20. MoleculeNet: a benchmark for molecular machine learning.

    Science.gov (United States)

    Wu, Zhenqin; Ramsundar, Bharath; Feinberg, Evan N; Gomes, Joseph; Geniesse, Caleb; Pappu, Aneesh S; Leswing, Karl; Pande, Vijay

    2018-01-14

    Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than choice of particular learning algorithm.
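As a usage illustration, the sketch below runs one MoleculeNet task through the DeepChem library named in the abstract. The loader and model class names follow DeepChem's documented molnet API, but signatures vary across versions, so treat this as indicative rather than exact:

```python
# Sketch: load a MoleculeNet dataset and evaluate a baseline model
# with DeepChem. Exact keyword arguments may differ between releases.
import numpy as np
import deepchem as dc

# Load Tox21 with the default featurization; returns task names,
# (train, valid, test) splits, and the transformers applied to them.
tasks, (train, valid, test), transformers = dc.molnet.load_tox21()

model = dc.models.MultitaskClassifier(
    n_tasks=len(tasks), n_features=train.X.shape[1])
model.fit(train)

# MoleculeNet reports ROC-AUC averaged over tasks for Tox21.
metric = dc.metrics.Metric(dc.metrics.roc_auc_score, np.mean)
print(model.evaluate(valid, [metric], transformers))
```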

  1. Results of the event sequence reliability benchmark exercise

    International Nuclear Information System (INIS)

    Silvestri, E.

    1990-01-01

The Event Sequence Reliability Benchmark Exercise is the fourth in a series of benchmark exercises on reliability and risk assessment, with specific reference to nuclear power plant applications, and is the logical continuation of the previous benchmark exercises on System Analysis, Common Cause Failure and Human Factors. The reference plant is the nuclear power plant at Grohnde, Federal Republic of Germany, a 1300 MW PWR plant of KWU design. The specific objective of the exercise is to model, quantify and analyze event sequences initiated by the occurrence of a loss of offsite power that involve the steam generator feed. The general aim is to develop a segment of a risk assessment which includes all the specific aspects and models of quantification, such as common cause failure, human factors and system analysis, developed in the previous reliability benchmark exercises, with the addition of the specific topics of dependences between homologous components belonging to different systems featuring in a given event sequence and of uncertainty quantification, ending with an overall assessment of the state of the art in risk assessment and the relative influences of quantification problems in a general risk assessment framework. The exercise has been carried out in two phases, both requiring modelling and quantification, with the second phase adopting more restrictive rules and fixing certain common data, as emerged necessary from the first phase. Fourteen teams participated in the exercise, mostly from EEC countries, with one from Sweden and one from the USA. (author)

  2. Present Status and Extensions of the Monte Carlo Performance Benchmark

    Science.gov (United States)

    Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.

    2014-06-01

The NEA Monte Carlo Performance benchmark started in 2011, aiming to monitor over the years the ability to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the results contributed thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common-type computer nodes. However, on true supercomputers the speedup of parallel calculations continues to increase up to large numbers of processor cores. More experience is needed with calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited for further investigations of full-core Monte Carlo calculations, and a need is felt for testing issues other than computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and to study the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed.
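The quoted need for roughly 100 billion histories follows directly from the usual 1/√N convergence of Monte Carlo tallies; here is a back-of-envelope sketch (the pilot-run numbers are invented, only the ~10^11 target comes from the abstract):

```python
# Back-of-envelope 1/sqrt(N) scaling for Monte Carlo tallies: if a pin
# tally reaches relative error e0 with n0 histories, reaching a target
# error e_target requires roughly n0 * (e0 / e_target)**2 histories.
def histories_needed(n0: float, e0: float, e_target: float) -> float:
    """Scale the history count assuming relative error ~ 1/sqrt(N)."""
    return n0 * (e0 / e_target) ** 2

# e.g. a pilot run of 1e9 histories giving 10% error on a small pin
# zone suggests ~1e11 histories for 1% error, consistent with the
# figure quoted in the benchmark overview.
print(f"{histories_needed(1e9, 0.10, 0.01):.2e}")
```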

  3. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision…

  4. HPC Benchmark Suite NMx, Phase I

    Data.gov (United States)

National Aeronautics and Space Administration — Intelligent Automation Inc. (IAI) and the University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  5. Analysis of VENUS-3 benchmark experiment

    International Nuclear Information System (INIS)

    Kodeli, I.; Sartori, E.

    1998-01-01

The paper presents the revision and analysis of the VENUS-3 benchmark experiment performed at CEN/SCK, Mol (Belgium). This benchmark was found to be particularly suitable for validation of current calculation tools such as 3-D neutron transport codes, and in particular of the 3-D sensitivity and uncertainty analysis code developed within the EFF project. The compilation of the integral experiment was integrated into the SINBAD electronic database for storing and retrieving information about shielding experiments for nuclear systems. SINBAD now includes 33 reviewed benchmark descriptions and several compilations awaiting review, among them many benchmarks relevant for the validation of pressure vessel dosimetry systems. (author)

  6. ZZ-PBMR-400, OECD/NEA PBMR Coupled Neutronics/Thermal Hydraulics Transient Benchmark - The PBMR-400 Core Design

    International Nuclear Information System (INIS)

    Reitsma, Frederik

    2007-01-01

Description of benchmark: This international benchmark concerns Pebble-Bed Modular Reactor (PBMR) coupled neutronics/thermal-hydraulics transients based on the PBMR-400 MW design. The deterministic neutronics, thermal-hydraulics and transient analysis tools and methods available to design and analyse PBMRs lag, in many cases, behind the state of the art compared to other reactor technologies. This has motivated not only the testing of existing methods for HTGRs but also the development of more accurate and efficient tools to analyse the neutronics and thermal-hydraulic behaviour for the design and safety evaluations of the PBMR. In addition to the development of new methods, this includes defining appropriate benchmarks to verify and validate the new methods in computer codes. The scope of the benchmark is to establish well-defined problems, based on a common given set of cross sections, to compare methods and tools in core simulation and thermal-hydraulics analysis, with a specific focus on transient events, through a set of multi-dimensional computational test problems. The benchmark exercise has the following objectives: - Establish a standard benchmark for coupled codes (neutronics/thermal-hydraulics) for PBMR design; - Code-to-code comparison using a common cross-section library; - Obtain a detailed understanding of the events and the processes; - Benefit from different approaches, understanding limitations and approximations. Major Design and Operating Characteristics of the PBMR (PBMR Characteristic and Value): Installed thermal capacity: 400 MW(t); Installed electric capacity: 165 MW(e); Load following capability: 100-40-100%; Availability: ≥ 95%; Core configuration: Vertical with fixed centre graphite reflector; Fuel: TRISO ceramic coated U-235 in graphite spheres; Primary coolant: Helium; Primary coolant pressure: 9 MPa; Moderator: Graphite; Core outlet temperature: 900 °C; Core inlet temperature: 500 °C; Cycle type: Direct; Number of circuits: 1; Cycle

  7. New heuristics for traveling salesman and vehicle routing problems with time windows

    Energy Technology Data Exchange (ETDEWEB)

    Gendreau, M.; Hertz, A.; Laporte, G.; Mihnea, S.

    1994-12-31

We consider variants of the Traveling Salesman Problem (TSP) and Vehicle Routing Problem (VRP) in which each customer can only be visited within a pre-specified (hard) time window. We first present a two-phase (construction and post-optimization) generalized insertion heuristic for the TSPTW. This insertion heuristic is then embedded in a tabu search metaheuristic in order to solve the VRPTW. Computational results on standard benchmark problems will be reported.
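The hard time windows are what make insertion non-trivial: every candidate insertion must leave all later visits able to start before their deadlines. The sketch below shows that feasibility test and a cheapest-feasible-insertion step over invented data structures; it illustrates the general idea of a TSPTW insertion heuristic, not the specific generalized insertion method of the paper:

```python
# A minimal sketch of the feasibility test at the heart of an insertion
# heuristic for the TSP with hard time windows. All data structures
# (ready/due window bounds, service times, travel-time matrix) are
# hypothetical names chosen for this illustration.
def route_is_feasible(route, ready, due, service, travel):
    """route: visit order; ready/due: window bounds; travel[a][b]: times."""
    t = 0.0
    for prev, cur in zip(route, route[1:]):
        t = max(t + service[prev] + travel[prev][cur], ready[cur])
        if t > due[cur]:          # hard time window violated
            return False
    return True

def best_insertion(route, c, ready, due, service, travel):
    """Try every insertion slot for customer c; return cheapest feasible."""
    best = None
    for i in range(len(route) - 1):
        cand = route[:i + 1] + [c] + route[i + 1:]
        if route_is_feasible(cand, ready, due, service, travel):
            extra = (travel[route[i]][c] + travel[c][route[i + 1]]
                     - travel[route[i]][route[i + 1]])
            if best is None or extra < best[0]:
                best = (extra, cand)
    return best

# Toy usage: depot 0, customers 1..3, insert customer 2 into [0, 1, 0].
travel = {0: {0: 0, 1: 2, 2: 3, 3: 4}, 1: {0: 2, 1: 0, 2: 1, 3: 2},
          2: {0: 3, 1: 1, 2: 0, 3: 1}, 3: {0: 4, 1: 2, 2: 1, 3: 0}}
ready = {0: 0, 1: 0, 2: 5, 3: 8}
due = {0: 100, 1: 10, 2: 9, 3: 20}
service = {0: 0, 1: 1, 2: 1, 3: 1}
print(best_insertion([0, 1, 0], 2, ready, due, service, travel))
```

A tabu search layer, as in the paper, would then repeatedly remove and re-insert customers between routes while temporarily forbidding recently reversed moves.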

  8. Benchmarking of nuclear economics tools

    International Nuclear Information System (INIS)

    Moore, Megan; Korinny, Andriy; Shropshire, David; Sadhankar, Ramesh

    2017-01-01

Highlights: • INPRO and GIF economic tools exhibited good alignment in total capital cost estimation. • Subtle discrepancies in the cost result from differences in financing and the fuel cycle assumptions. • A common set of assumptions was found to reduce the discrepancies to 1% or less. • Opportunities for harmonisation of economic tools exist. - Abstract: Benchmarking of the economics methodologies developed by the Generation IV International Forum (GIF) and the International Atomic Energy Agency's International Project on Innovative Nuclear Reactors and Fuel Cycles (INPRO) was performed for three Generation IV nuclear energy systems. The Economic Modeling Working Group of GIF developed an Excel-based spreadsheet package, G4ECONS (Generation 4 Excel-based Calculation Of Nuclear Systems), to calculate the total capital investment cost (TCIC) and the levelised unit energy cost (LUEC). G4ECONS is sufficiently generic in the sense that it can accept the types of projected input, performance and cost data that are expected to become available for Generation IV systems through various development phases and that it can model both open and closed fuel cycles. The Nuclear Energy System Assessment (NESA) Economic Support Tool (NEST) was developed to enable an economic analysis using the INPRO methodology to easily calculate outputs including the TCIC, LUEC and other financial figures of merit, including internal rate of return, return on investment and net present value. NEST is also Excel-based and can be used to evaluate nuclear reactor systems using the open fuel cycle, MOX (mixed oxide) fuel recycling and closed cycles. A Super Critical Water-cooled Reactor system with an open fuel cycle and two Fast Reactor systems, one with a break-even fuel cycle and another with a burner fuel cycle, were selected for the benchmarking exercise. Published data on capital and operating costs were used for economics analyses using the G4ECONS and NEST tools. Both G4ECONS and
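Both tools ultimately compute a levelised unit energy cost, i.e. discounted lifetime costs divided by discounted lifetime generation. The sketch below implements that textbook definition with invented numbers; it does not reproduce the internals of G4ECONS or NEST:

```python
# A generic levelised unit energy cost (LUEC) calculation:
#   LUEC = sum_t (C_t + O_t + F_t) (1+r)^-t / sum_t E_t (1+r)^-t
# where C, O, F are capital, O&M and fuel costs per year, E is energy
# generated per year, and r is the discount rate. Inputs are invented.
def luec(cap_cost, om_cost, fuel_cost, energy, rate):
    """Costs and energy are per-year lists over the plant life; returns
    cost per unit of energy (e.g. $/MWh)."""
    disc = [(1 + rate) ** -t for t in range(len(energy))]
    costs = sum((c + o + f) * d
                for c, o, f, d in zip(cap_cost, om_cost, fuel_cost, disc))
    output = sum(e * d for e, d in zip(energy, disc))
    return costs / output

# Toy example: two construction years followed by three operating years.
print(luec(cap_cost=[500e6, 500e6, 0, 0, 0],
           om_cost=[0, 0, 60e6, 60e6, 60e6],
           fuel_cost=[0, 0, 20e6, 20e6, 20e6],
           energy=[0, 0, 3.0e6, 3.0e6, 3.0e6],   # MWh per year
           rate=0.05))
```

Most of the discrepancies the abstract mentions enter through the discounting and fuel cycle terms, which is why a common set of assumptions brings the two tools to within 1%.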

  9. Hextran-Smabre calculation of the VVER-1000 coolant transient benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Elina Syrjaelahti; Anitta Haemaelaeinen [VTT Processes, P.O.Box 1604, FIN-02044 VTT (Finland)

    2005-07-01

Full text of publication follows: The VVER-1000 Coolant Transient benchmark is intended for validation of the coupling of thermal-hydraulic codes and three-dimensional neutron kinetics core models. It concerns the switching on of a main coolant pump while the other three main coolant pumps are in operation. The problem is based on an experiment performed at the Kozloduy NPP in Bulgaria. In addition to the real plant transient, two extreme scenarios concerning control rod ejection after switching on a main coolant pump were calculated. At VTT, the three-dimensional advanced nodal code HEXTRAN is used for the core kinetics and dynamics, and the thermal-hydraulic system code SMABRE as the thermal-hydraulic model for the primary and secondary loops. The parallel-coupled HEXTRAN-SMABRE code has been in production use since the early 1990s, and it has been extensively used for the analysis of VVER NPPs. The SMABRE input model is based on the standard VVER-1000 input used at VTT. The latest plant-specific modifications to the input model were made in EU projects. The whole-core calculation is performed with HEXTRAN. The core model is also based on earlier VVER-1000 models. Nuclear data for the calculation were specified in the benchmark. The paper outlines the input models used for both codes. Calculated results are introduced both for the coupled core system with inlet and outlet boundary conditions and for the whole plant model. Sensitivity studies have been performed for selected parameters. (authors)

  10. Hextran-Smabre calculation of the VVER-1000 coolant transient benchmark

    International Nuclear Information System (INIS)

    Elina Syrjaelahti; Anitta Haemaelaeinen

    2005-01-01

Full text of publication follows: The VVER-1000 Coolant Transient benchmark is intended for validation of the coupling of thermal-hydraulic codes and three-dimensional neutron kinetics core models. It concerns the switching on of a main coolant pump while the other three main coolant pumps are in operation. The problem is based on an experiment performed at the Kozloduy NPP in Bulgaria. In addition to the real plant transient, two extreme scenarios concerning control rod ejection after switching on a main coolant pump were calculated. At VTT, the three-dimensional advanced nodal code HEXTRAN is used for the core kinetics and dynamics, and the thermal-hydraulic system code SMABRE as the thermal-hydraulic model for the primary and secondary loops. The parallel-coupled HEXTRAN-SMABRE code has been in production use since the early 1990s, and it has been extensively used for the analysis of VVER NPPs. The SMABRE input model is based on the standard VVER-1000 input used at VTT. The latest plant-specific modifications to the input model were made in EU projects. The whole-core calculation is performed with HEXTRAN. The core model is also based on earlier VVER-1000 models. Nuclear data for the calculation were specified in the benchmark. The paper outlines the input models used for both codes. Calculated results are introduced both for the coupled core system with inlet and outlet boundary conditions and for the whole plant model. Sensitivity studies have been performed for selected parameters. (authors)

  11. Updates to the Integrated Protein-Protein Interaction Benchmarks : Docking Benchmark Version 5 and Affinity Benchmark Version 2

    NARCIS (Netherlands)

    Vreven, Thom; Moal, Iain H.; Vangone, Anna|info:eu-repo/dai/nl/370549694; Pierce, Brian G.; Kastritis, Panagiotis L.|info:eu-repo/dai/nl/315886668; Torchala, Mieczyslaw; Chaleil, Raphael; Jiménez-García, Brian; Bates, Paul A.; Fernandez-Recio, Juan; Bonvin, Alexandre M J J|info:eu-repo/dai/nl/113691238; Weng, Zhiping

    2015-01-01

    We present an updated and integrated version of our widely used protein-protein docking and binding affinity benchmarks. The benchmarks consist of non-redundant, high-quality structures of protein-protein complexes along with the unbound structures of their components. Fifty-five new complexes were

  12. Human factors reliability Benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-06-01

The Joint Research Centre of the European Commission has organized a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organized around two study cases: (1) analysis of routine functional test and maintenance (T and M) procedures, with the aim of assessing the probability of test-induced failures, the probability of failures remaining unrevealed, and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient, with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report contains the final summary reports produced by the participants in the exercise.

  13. Toxicological benchmarks for screening potential contaminants of concern for effects on terrestrial plants: 1994 revision

    Energy Technology Data Exchange (ETDEWEB)

    Will, M.E.; Suter, G.W. II

    1994-09-01

    One of the initial stages in ecological risk assessment for hazardous waste sites is screening contaminants to determine which of them are worthy of further consideration as contaminants of potential concern. This process is termed contaminant screening. It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 38 chemicals potentially associated with United States Department of Energy (DOE) sites. In addition, background information on the phytotoxicity and occurrence of the chemicals in soils is presented, and literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern.

  14. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Terrestrial Plants

    Energy Technology Data Exchange (ETDEWEB)

    Suter, G.W. II

    1993-01-01

    One of the initial stages in ecological risk assessment for hazardous waste sites is screening contaminants to determine which of them are worthy of further consideration as contaminants of potential concern. This process is termed contaminant screening. It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 38 chemicals potentially associated with United States Department of Energy (DOE) sites. In addition, background information on the phytotoxicity and occurrence of the chemicals in soils is presented, and literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern.

  15. Toxicological benchmarks for screening potential contaminants of concern for effects on terrestrial plants: 1994 revision

    International Nuclear Information System (INIS)

    Will, M.E.; Suter, G.W. II.

    1994-09-01

    One of the initial stages in ecological risk assessment for hazardous waste sites is screening contaminants to determine which of them are worthy of further consideration as contaminants of potential concern. This process is termed contaminant screening. It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 38 chemicals potentially associated with United States Department of Energy (DOE) sites. In addition, background information on the phytotoxicity and occurrence of the chemicals in soils is presented, and literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern

  16. Using Participatory Action Research to Study the Implementation of Career Development Benchmarks at a New Zealand University

    Science.gov (United States)

    Furbish, Dale S.; Bailey, Robyn; Trought, David

    2016-01-01

    Benchmarks for career development services at tertiary institutions have been developed by Careers New Zealand. The benchmarks are intended to provide standards derived from international best practices to guide career development services. A new career development service was initiated at a large New Zealand university just after the benchmarks…

  17. Benchmarking Ortec ISOTOPIC measurements and calculations

    International Nuclear Information System (INIS)

    This paper presents eight compiled benchmark tests conducted to probe and to demonstrate the extensive utility of the Ortec ISOTOPIC gamma-ray analysis software program. The paper describes tests of the program's capability to compute finite-geometry correction factors and sample-matrix-container photon absorption correction factors. Favorable results are obtained in all benchmark tests. (author)

  18. Benchmarking set for domestic smart grid management

    NARCIS (Netherlands)

    Bosman, M.G.C.; Bakker, Vincent; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    In this paper we propose a benchmark for domestic smart grid management. It consists of an in-depth description of a domestic smart grid, in which local energy consumers, producers and buffers can be controlled. First, from this description a general benchmark framework is derived, which can be used

  19. Benchmarking nutrient use efficiency of dairy farms

    NARCIS (Netherlands)

    Mu, W.; Groen, E.A.; Middelaar, van C.E.; Bokkers, E.A.M.; Hennart, S.; Stilmant, D.; Boer, de I.J.M.

    2017-01-01

    The nutrient use efficiency (NUE) of a system, generally computed as the amount of nutrients in valuable outputs over the amount of nutrients in all inputs, is commonly used to benchmark the environmental performance of dairy farms. Benchmarking the NUE of farms, however, may lead to biased
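
    The definition quoted above is a simple ratio of nutrient flows; a minimal sketch with invented figures (kg N per farm per year):

        # NUE = nutrients in valuable outputs / nutrients in all inputs
        def nutrient_use_efficiency(outputs_kg, inputs_kg):
            return sum(outputs_kg) / sum(inputs_kg)

        n_out = [4200.0, 800.0]            # N leaving in milk and meat
        n_in = [9000.0, 5500.0, 1500.0]    # N in feed, fertiliser, fixation
        print(nutrient_use_efficiency(n_out, n_in))  # 0.3125, i.e. ~31%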

  20. Benchmark analysis of railway networks and undertakings

    NARCIS (Netherlands)

    Hansen, I.A.; Wiggenraad, P.B.L.; Wolff, J.W.

    2013-01-01

    Benchmark analysis of railway networks and companies has been stimulated by the European policy of deregulation of transport markets, the opening of national railway networks and markets to new entrants and separation of infrastructure and train operation. Recent international railway benchmarking

  1. The Linked Data Benchmark Council Project

    NARCIS (Netherlands)

    P.A. Boncz (Peter); I. Fundulaki; A. Gubichev (Andrey); J. Larriba-Pey (Josep); T. Neumann (Thomas)

    2013-01-01

    Despite the fast growth and increasing popularity, the broad field of RDF and Graph database systems lacks an independent authority for developing benchmarks, and for neutrally assessing benchmark results through industry-strength auditing which would allow to quantify and compare the

  2. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    2015-01-01

    The aerodynamic benchmarking of the DeepWind rotor is conducted by comparing different rotor geometries and solutions while keeping the comparison as fair as possible. The objective of the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...

  3. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own price

  4. MOLECULAR LINE EMISSION FROM MULTIFLUID SHOCK WAVES. I. NUMERICAL METHODS AND BENCHMARK TESTS

    International Nuclear Information System (INIS)

    Ciolek, Glenn E.; Roberge, Wayne G.

    2013-01-01

    We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are << magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests confirming its accuracy are presented, including (1) MHD wave packets ranging over orders of magnitude in length- and timescales, (2) early evolution of multifluid shocks caused by two colliding clouds, and (3) a multifluid shock with mass transfer between the fluids by cosmic-ray ionization and ion-electron recombination, demonstrating the effect of ion mass loading on magnetic precursors of MHD shocks. An exact solution to an MHD Riemann problem forming the basis for an approximate numerical solver used in the homogeneous part of our scheme is presented, along with derivations of the analytic benchmark solutions and tests showing the convergence of the numerical algorithm.
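
    The operator-splitting structure described above can be illustrated on a toy scalar problem: a Godunov-type (upwind) step for the homogeneous part, followed by exact integration of a relaxation source term. The sketch shows only the splitting idea, not the paper's multifluid MHD scheme.

        # Split scheme for u_t + a u_x = -k (u - u_eq):
        # (1) upwind step for the homogeneous part, (2) exact source update.
        import numpy as np

        def step(u, a, k, u_eq, dx, dt):
            u = u - a * dt / dx * (u - np.roll(u, 1))   # Godunov/upwind, a > 0
            return u_eq + (u - u_eq) * np.exp(-k * dt)  # relaxation source

        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u = np.where((x > 0.4) & (x < 0.6), 2.0, 1.0)   # square pulse
        dx, a, k, u_eq = x[1] - x[0], 1.0, 5.0, 1.0
        dt = 0.8 * dx / a                               # CFL-limited step
        for _ in range(100):
            u = step(u, a, k, u_eq, dx, dt)
        print(u.max())  # pulse advects while relaxing toward u_eq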

  5. Molecular Line Emission from Multifluid Shock Waves. I. Numerical Methods and Benchmark Tests

    Science.gov (United States)

    Ciolek, Glenn E.; Roberge, Wayne G.

    2013-05-01

    We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are << magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests confirming its accuracy are presented, including (1) MHD wave packets ranging over orders of magnitude in length- and timescales, (2) early evolution of multifluid shocks caused by two colliding clouds, and (3) a multifluid shock with mass transfer between the fluids by cosmic-ray ionization and ion-electron recombination, demonstrating the effect of ion mass loading on magnetic precursors of MHD shocks. An exact solution to an MHD Riemann problem forming the basis for an approximate numerical solver used in the homogeneous part of our scheme is presented, along with derivations of the analytic benchmark solutions and tests showing the convergence of the numerical algorithm.

  6. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce them.

  7. Benchmarking of refinery emissions performance : Executive summary

    International Nuclear Information System (INIS)

    2003-07-01

    This study was undertaken to collect emissions performance data for Canadian and comparable American refineries. The objective was to examine parameters that affect refinery air emissions performance and develop methods or correlations to normalize emissions performance. Another objective was to correlate and compare the performance of Canadian refineries to comparable American refineries. For the purpose of this study, benchmarking involved the determination of levels of emission performance that are being achieved for generic groups of facilities. A total of 20 facilities were included in the benchmarking analysis, and 74 American refinery emission correlations were developed. The recommended benchmarks, and the application of those correlations for comparison between Canadian and American refinery performance, were discussed. The benchmarks were: sulfur oxides, nitrogen oxides, carbon monoxide, particulate, volatile organic compounds, ammonia and benzene. For each refinery in Canada, benchmark emissions were developed. Several factors can explain differences in Canadian and American refinery emission performance. 4 tabs., 7 figs

  8. Performance of Multi-chaotic PSO on a shifted benchmark functions set

    International Nuclear Information System (INIS)

    Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan

    2015-01-01

    In this paper the performance of the Multi-chaotic PSO algorithm is investigated using two shifted benchmark functions. The purpose of shifted benchmark functions is to simulate time-variant real-world problems. The results of the chaotic PSO are compared with the canonical version of the algorithm. It is concluded that the multi-chaotic approach can lead to better results in the optimization of shifted functions
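
    A "shifted" benchmark function simply relocates the hidden optimum so that an optimizer cannot exploit symmetry about the origin, and re-drawing the shift mimics a time-variant problem; a minimal sketch:

        import numpy as np

        def shifted_sphere(x, shift):
            """Sphere function with its optimum relocated to `shift`."""
            d = np.asarray(x) - np.asarray(shift)
            return float(np.sum(d * d))

        rng = np.random.default_rng(0)
        shift = rng.uniform(-50.0, 50.0, size=10)    # hidden optimum
        print(shifted_sphere(shift, shift))          # 0.0 at the optimum
        print(shifted_sphere(np.zeros(10), shift))   # penalized at the origin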

  9. A new benchmark reference solution for double-diffusive convection in a heterogeneous porous medium

    OpenAIRE

    Shao, Q.; Fahs, M.; Younes, Anis; Makradi, A.; Mara, T.

    2016-01-01

    A new benchmark with a highly accurate solution is proposed for the verification of numerical codes dealing with double-diffusive convection in a heterogeneous porous medium. The new benchmark is inspired by the popular problem of the square porous cavity, assuming a stratified porous medium. A highly accurate steady-state solution is developed using the Fourier-Galerkin method. To this aim, the unknowns are expanded in double infinite Fourier series. The accuracy of the developed solution is assessed...
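
    A Fourier-Galerkin expansion of the kind described above takes, schematically, the following form (the notation is illustrative, not necessarily the paper's):

        \psi(x,z) = \sum_{m=1}^{M} \sum_{n=1}^{N} A_{mn} \sin(m\pi x) \sin(n\pi z),
        T(x,z)    = \sum_{m=0}^{M} \sum_{n=1}^{N} B_{mn} \cos(m\pi x) \sin(n\pi z),

    with an analogous series for the concentration. Substituting the truncated series into the governing equations and projecting onto the same trial functions (the Galerkin step) yields an algebraic system for the coefficients, solved at increasing truncation orders (M, N) until the solution stabilizes.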

  10. Problems in detecting misfit of latent class models in diagnostic research without a gold standard were shown.

    Science.gov (United States)

    van Smeden, Maarten; Oberski, Daniel L; Reitsma, Johannes B; Vermunt, Jeroen K; Moons, Karel G M; de Groot, Joris A H

    2016-06-01

    The objective of this study was to evaluate the performance of goodness-of-fit testing to detect relevant violations of the assumptions underlying the criticized "standard" two-class latent class model. Often used to obtain sensitivity and specificity estimates for diagnostic tests in the absence of a gold reference standard, this model relies on assuming that diagnostic test errors are independent. When this assumption is violated, accuracy estimates may be biased: goodness-of-fit testing is often used to evaluate the assumption and prevent bias. We investigate the performance of goodness-of-fit testing by Monte Carlo simulation. The simulation scenarios are based on three empirical examples. Goodness-of-fit tests lack power to detect relevant misfit of the standard two-class latent class model at sample sizes that are typically found in empirical diagnostic studies. The goodness-of-fit tests that are based on asymptotic theory are not robust to the sparseness of data. A parametric bootstrap procedure improves the evaluation of goodness of fit in the case of sparse data. Our simulation study suggests that relevant violation of the local independence assumption underlying the standard two-class latent class model may remain undetected in empirical diagnostic studies, potentially leading to biased estimates of sensitivity and specificity.
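
    The "standard" two-class model referred to above can be written compactly. For J binary tests Y_1, ..., Y_J and latent classes c = 1, 2 (e.g., diseased and non-diseased), local independence means the joint distribution factorizes within each class:

        P(Y_1 = y_1, ..., Y_J = y_J) = \sum_{c=1}^{2} \pi_c \prod_{j=1}^{J} p_{jc}^{y_j} (1 - p_{jc})^{1 - y_j},

    where \pi_c is the class prevalence and p_{jc} the probability that test j is positive in class c, so the sensitivity of test j is p_{j1} and its specificity is 1 - p_{j2}. Violations of this within-class independence are precisely what the goodness-of-fit tests examined in the study fail to detect at realistic sample sizes.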

  11. Problems in detecting misfit of latent class models in diagnostic research without a gold standard were shown

    NARCIS (Netherlands)

    Van Smeden, Maarten; Oberski, Daniel L.; Reitsma, Johannes B.; Vermunt, Jeroen K.; Moons, Karel G M; De Groot, Joris A H

    2016-01-01

    Objectives The objective of this study was to evaluate the performance of goodness-of-fit testing to detect relevant violations of the assumptions underlying the criticized "standard" two-class latent class model. Often used to obtain sensitivity and specificity estimates for diagnostic tests in the

  12. Problems in detecting misfit of latent class models in diagnostic research without a gold standard were shown

    NARCIS (Netherlands)

    van Smeden, Maarten|info:eu-repo/dai/nl/413981983; Oberski, Daniel L; Reitsma, Johannes B|info:eu-repo/dai/nl/189853107; Vermunt, Jeroen K; Moons, Karel G M|info:eu-repo/dai/nl/152483519; de Groot, JAH|info:eu-repo/dai/nl/314072268

    OBJECTIVES: The objective of this study was to evaluate the performance of goodness-of-fit testing to detect relevant violations of the assumptions underlying the criticized 'standard' 2-class latent class model. Often used to obtain sensitivity and specificity estimates for diagnostic tests in the

  13. General squark flavour mixing: constraints, phenomenology and benchmarks

    CERN Document Server

    De Causmaecker, Karen; Herrmann, Bjoern; Mahmoudi, Farvah; O'Leary, Ben; Porod, Werner; Sekmen, Sezen; Strobbe, Nadja

    2015-11-19

    We present an extensive study of non-minimal flavour violation in the squark sector in the framework of the Minimal Supersymmetric Standard Model. We investigate the effects of multiple non-vanishing flavour-violating elements in the squark mass matrices by means of a Markov Chain Monte Carlo scanning technique and identify parameter combinations that are favoured by both current data and theoretical constraints. We then detail the resulting distributions of the flavour-conserving and flavour-violating model parameters. Based on this analysis, we propose a set of benchmark scenarios relevant for future studies of non-minimal flavour violation in the Minimal Supersymmetric Standard Model.

  14. Compilation report of VHTRC temperature coefficient benchmark calculations

    International Nuclear Information System (INIS)

    Yasuda, Hideshi; Yamane, Tsuyoshi

    1995-11-01

    A calculational benchmark problem has been proposed by JAERI to an IAEA Coordinated Research Program, 'Verification of Safety Related Neutronic Calculation for Low-enriched Gas-cooled Reactors', to investigate the accuracy of calculation results obtained by using codes of the participating countries. This benchmark is made on the basis of assembly heating experiments at a pin-in-block type critical assembly, VHTRC. Requested calculation items are the cell parameters, effective multiplication factor, temperature coefficient of reactivity, reaction rates, fission rate distribution, etc. Seven institutions from five countries have joined the benchmark work. Calculation results are summarized in this report with some remarks by the authors. Each institute analyzed the problem by applying the calculation code system prepared for the HTGR development of its own country. The values of the most important parameter, k_eff, calculated by all institutes showed good agreement with each other and with the experimental ones within 1%. The temperature coefficient agreed within 13%. The values of several cell parameters calculated by some institutes did not agree with the others'. It will be necessary to check the calculation conditions again to get better agreement. (J.P.N.)

  15. Benchmarking NNWSI flow and transport codes: COVE 1 results

    International Nuclear Information System (INIS)

    Hayden, N.K.

    1985-06-01

    The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs

  16. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Xuhui [National Renewable Energy Laboratory (NREL), Golden, CO (United States). Transportation and Hydrogen Systems Center

    2017-10-19

    In FY16, the thermal performance of the 2014 Honda Accord Hybrid power electronics thermal management system was benchmarked. Both experiments and numerical simulation were utilized to thoroughly study the thermal resistances and temperature distribution in the power module. Experimental results obtained from the water-ethylene glycol tests provided the junction-to-liquid thermal resistance. The finite element analysis (FEA) and computational fluid dynamics (CFD) models were found to yield a good match with experimental results. Both experimental and modeling results demonstrate that the passive stack is the dominant thermal resistance for both the motor and power electronics systems. The 2014 Accord power electronics systems yield steady-state thermal resistance values around 42-50 mm²·K/W, depending on the flow rates. At a typical flow rate of 10 liters per minute, the thermal resistance of the Accord system was found to be about 44 percent lower than that of the 2012 Nissan LEAF system that was benchmarked in FY15. The main reason for the difference is that the Accord power module used a metalized-ceramic substrate and eliminated the thermal interface material layers. FEA models were developed to study the transient performance of the 2012 Nissan LEAF, 2014 Accord, and two other systems that feature conventional power module designs. The simulation results indicate that the 2012 LEAF power module has the lowest thermal impedance at a time scale less than one second. This is probably due to moving low thermally conductive materials further away from the heat source and enhancing the heat spreading effect from the copper-molybdenum plate close to the insulated gate bipolar transistors. When approaching steady state, the Honda system shows lower thermal impedance. Measurement results of the thermal resistance of the 2015 BMW i3 power electronic system indicate that the i3 insulated gate bipolar transistor module has significantly lower junction
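
    For orientation, the area-normalized resistance quoted above follows from the basic definition of junction-to-liquid thermal resistance; a minimal sketch with invented numbers:

        # R_jl = (T_junction - T_coolant) / dissipated power, often
        # multiplied by die area to give mm^2*K/W. Numbers are illustrative.
        T_junction, T_coolant = 398.0, 338.0   # K
        power, die_area_mm2 = 250.0, 120.0     # W, mm^2
        R_jl = (T_junction - T_coolant) / power      # 0.24 K/W
        print(R_jl * die_area_mm2)                   # 28.8 mm^2*K/W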

  17. FIREWORKS ALGORITHM FOR UNCONSTRAINED FUNCTION OPTIMIZATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    Evans BAIDOO

    2017-03-01

    Modern real-world science and engineering problems can be classified as multi-objective optimization problems, which demand expedient and efficient stochastic algorithms to respond to optimization needs. This paper presents an object-oriented software application that implements a fireworks optimization algorithm for function optimization problems. The algorithm, a kind of parallel diffuse optimization algorithm, is based on the explosive phenomenon of fireworks. The algorithm produced promising results when compared with other population-based and iterative meta-heuristic algorithms after being tested on five standard benchmark problems. The software application was implemented in Java with an interactive interface that allows easy modification and extended experimentation. Additionally, the paper examines the effect of runtime on algorithm performance.
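
    The paper's application is written in Java; the sketch below is a hedged Python rendering of the core explosion-and-selection loop of a fireworks algorithm, with illustrative parameter choices rather than the paper's.

        import numpy as np

        def fireworks_minimize(f, dim=5, n_fireworks=5, iters=200, seed=0):
            rng = np.random.default_rng(seed)
            pop = rng.uniform(-10.0, 10.0, size=(n_fireworks, dim))
            for _ in range(iters):
                fit = np.array([f(x) for x in pop])
                # small explosion amplitude for good fireworks, large for bad
                amp = 0.1 + 5.0 * (fit - fit.min()) / (np.ptp(fit) + 1e-12)
                # more sparks for better fireworks
                share = (fit.max() - fit) / ((fit.max() - fit).sum() + 1e-12)
                n_sparks = np.maximum(1, (10 * share).astype(int))
                sparks = [pop[i] + rng.uniform(-amp[i], amp[i], size=dim)
                          for i in range(n_fireworks)
                          for _ in range(n_sparks[i])]
                allpts = np.vstack([pop, np.array(sparks)])
                allfit = np.array([f(x) for x in allpts])
                best = allpts[allfit.argmin()]
                # keep the best point plus random others for diversity
                others = allpts[rng.choice(len(allpts), n_fireworks - 1)]
                pop = np.vstack([best[None, :], others])
            return best, f(best)

        print(fireworks_minimize(lambda x: float(np.sum(x * x))))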

  18. VVER-1000 burnup credit benchmark (CB5). New results evaluation

    International Nuclear Information System (INIS)

    Manolova, M.; Mihaylov, N.; Prodanova, R.

    2008-01-01

    The validation of depletion codes is an important task in spent fuel management, especially for burnup credit application in criticality safety analysis of spent fuel facilities. Because of the lack of well-documented experimental data for VVER-1000, the validation can be made on the basis of code intercomparison using numerical benchmark problems. Some years ago a VVER-1000 burnup credit benchmark (CB5) was proposed to the AER research community and the preliminary results from three depletion codes were compared. In the paper some new results for the isotopic concentrations of twelve actinides and fifteen fission products calculated by the depletion codes SCALE5.1, WIMS9, SCALE4.4 and NESSEL-NUKO are compared and evaluated. (authors)

  19. Benchmarks for single-phase flow in fractured porous media

    Science.gov (United States)

    Flemisch, Bernd; Berre, Inga; Boon, Wietse; Fumagalli, Alessio; Schwenck, Nicolas; Scotti, Anna; Stefansson, Ivar; Tatomir, Alexandru

    2018-01-01

    This paper presents several test cases intended to be benchmarks for numerical schemes for single-phase fluid flow in fractured porous media. A number of solution strategies are compared, including a vertex and two cell-centred finite volume methods, a non-conforming embedded discrete fracture model, a primal and a dual extended finite element formulation, and a mortar discrete fracture model. The proposed benchmarks test the schemes by increasing the difficulties in terms of network geometry, e.g. intersecting fractures, and physical parameters, e.g. low and high fracture-matrix permeability ratio as well as heterogeneous fracture permeabilities. For each problem, the results presented are the number of unknowns, the approximation errors in the porous matrix and in the fractures with respect to a reference solution, and the sparsity and condition number of the discretized linear system. All data and meshes used in this study are publicly available for further comparisons.

  20. Hydrologic information server for benchmark precipitation dataset

    Science.gov (United States)

    McEnery, John A.; McKee, Paul W.; Shelton, Gregory P.; Ramsey, Ryan W.

    2013-01-01

    This paper will present the methodology and overall system development by which a benchmark dataset of precipitation information has been made available. Rainfall is the primary driver of the hydrologic cycle. High quality precipitation data is vital for hydrologic models, hydrometeorologic studies and climate analysis, and hydrologic time series observations are important to many water resources applications. Over the past two decades, with the advent of NEXRAD radar, science to measure and record rainfall has improved dramatically. However, much existing data has not been readily available for public access or transferable among the agricultural, engineering and scientific communities. This project takes advantage of the existing CUAHSI Hydrologic Information System ODM model and tools to bridge the gap between data storage and data access, providing an accepted standard interface for internet access to the largest time-series dataset of NEXRAD precipitation data ever assembled. This research effort has produced an operational data system to ingest, transform, load and then serve one of the most important hydrologic variable sets.

  1. Geant4 Computing Performance Benchmarking and Monitoring

    Science.gov (United States)

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-01

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. The scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  2. Spectral Relative Standard Deviation: A Practical Benchmark in Metabolomics

    Science.gov (United States)

    Metabolomics datasets, by definition, comprise measurements of large numbers of metabolites. Both technical (analytical) and biological factors will induce variation within these measurements that is not consistent across all metabolites. Consequently, criteria are required to...
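
    The spectral RSD in question is just the per-feature relative standard deviation; a minimal sketch (the 30% cutoff is a common convention, not taken from this record):

        import numpy as np

        def rsd_percent(X):
            """X: replicate-by-feature intensity matrix."""
            return 100.0 * X.std(axis=0, ddof=1) / X.mean(axis=0)

        X = np.array([[100.0, 55.0], [110.0, 40.0], [90.0, 70.0]])
        print(rsd_percent(X))          # per-feature RSD: [10.0, 27.3] (%)
        print(rsd_percent(X) < 30.0)   # keep features under the cutoff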

  3. Bioelectrochemical Systems Workshop:Standardized Analyses, Design Benchmarks, and Reporting

    Science.gov (United States)

    2012-01-01

  4. Benchmarking Commercial Conformer Ensemble Generators.

    Science.gov (United States)

    Friedrich, Nils-Ole; de Bruyn Kops, Christina; Flachsenberg, Florian; Sommer, Kai; Rarey, Matthias; Kirchmair, Johannes

    2017-11-27

    We assess and compare the performance of eight commercial conformer ensemble generators (ConfGen, ConfGenX, cxcalc, iCon, MOE LowModeMD, MOE Stochastic, MOE Conformation Import, and OMEGA) and one leading free algorithm, the distance geometry algorithm implemented in RDKit. The comparative study is based on a new version of the Platinum Diverse Dataset, a high-quality benchmarking dataset of 2859 protein-bound ligand conformations extracted from the PDB. Differences in the performance of commercial algorithms are much smaller than those observed for free algorithms in our previous study (J. Chem. Inf. 2017, 57, 529-539). For commercial algorithms, the median minimum root-mean-square deviations measured between protein-bound ligand conformations and ensembles of a maximum of 250 conformers are between 0.46 and 0.61 Å. Commercial conformer ensemble generators are characterized by their high robustness, with at least 99% of all input molecules successfully processed and few or even no substantial geometrical errors detectable in their output conformations. The RDKit distance geometry algorithm (with minimization enabled) appears to be a good free alternative since its performance is comparable to that of the midranked commercial algorithms. Based on a statistical analysis, we elaborate on which algorithms to use and how to parametrize them for best performance in different application scenarios.
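
    The evaluation criterion described above (the minimum RMSD between the protein-bound reference conformation and a generated ensemble) can be sketched with RDKit's distance geometry generator, the free alternative mentioned in the abstract; the molecule and settings below are placeholders, not the study's.

        from rdkit import Chem
        from rdkit.Chem import AllChem, rdMolAlign

        smiles = "CC(=O)Oc1ccccc1C(=O)O"          # placeholder ligand
        ref = Chem.AddHs(Chem.MolFromSmiles(smiles))
        AllChem.EmbedMolecule(ref, randomSeed=1)   # stand-in for the PDB pose
        AllChem.MMFFOptimizeMolecule(ref)

        probe = Chem.AddHs(Chem.MolFromSmiles(smiles))
        cids = AllChem.EmbedMultipleConfs(probe, numConfs=250, randomSeed=42)
        AllChem.MMFFOptimizeMoleculeConfs(probe)   # "minimization enabled"

        # minimum RMSD of any ensemble member to the reference conformation
        min_rmsd = min(rdMolAlign.GetBestRMS(probe, ref, prbId=cid)
                       for cid in cids)
        print(round(min_rmsd, 2))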

  5. What Randomized Benchmarking Actually Measures

    Science.gov (United States)

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; Sarovar, Mohan; Blume-Kohout, Robin

    2017-09-01

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r . For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r ), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
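
    In practice the decay described above is estimated by fitting survival probabilities against circuit length; a minimal sketch on synthetic single-qubit data, using the standard relation r = (d - 1)(1 - p)/d with d = 2**n_qubits:

        import numpy as np
        from scipy.optimize import curve_fit

        def rb_decay(m, A, B, p):
            return A * p**m + B

        lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128])
        survival = 0.5 * 0.99**lengths + 0.5           # ideal curve, p = 0.99
        survival += np.random.default_rng(0).normal(0.0, 0.003, survival.size)

        (A, B, p), _ = curve_fit(rb_decay, lengths, survival,
                                 p0=(0.5, 0.5, 0.95))
        d = 2                                          # single qubit
        print(f"p = {p:.4f}, RB error rate r = {(d - 1) * (1 - p) / d:.2e}")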

  6. Human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-08-01

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used, to obtain insight into the causes and magnitude of the variability observed in the results, to try to identify preferred human reliability assessment approaches and to get an understanding of the current state of the art in the field, identifying the limitations that are still inherent to the different approaches

  7. Benchmark Production Scheduling Problems for Job Shops with Interactive Constraints

    Science.gov (United States)

    1993-09-01

  8. Workshop: Monte Carlo computational performance benchmark - Contributions

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.; Petrovic, B.; Martin, W.R.; Sutton, T.; Leppaenen, J.; Forget, B.; Romano, P.; Siegel, A.; Hoogenboom, E.; Wang, K.; Li, Z.; She, D.; Liang, J.; Xu, Q.; Qiu, Y.; Yu, J.; Sun, J.; Fan, X.; Yu, G.; Bernard, F.; Cochet, B.; Jinaphanh, A.; Jacquet, O.; Van der Marck, S.; Tramm, J.; Felker, K.; Smith, K.; Horelik, N.; Capellan, N.; Herman, B.

    2013-01-01

    This series of slides is divided into 3 parts. The first part is dedicated to the presentation of the Monte-Carlo computational performance benchmark (aims, specifications and results). This benchmark aims at performing a full-size Monte Carlo simulation of a PWR core with axial and pin-power distribution. Many different Monte Carlo codes have been used and their results have been compared in terms of computed values and processing speeds. It appears that local power values mostly agree quite well. The first part also includes the presentations of about 10 participants in which they detail their calculations. In the second part, an extension of the benchmark is proposed in order to simulate a more realistic reactor core (for instance non-uniform temperature) and to assess feedback coefficients due to change of some parameters. The third part deals with another benchmark, the BEAVRS benchmark (Benchmark for Evaluation And Validation of Reactor Simulations). BEAVRS is also a full-core PWR benchmark for Monte Carlo simulations

  9. ICSBEP Benchmarks For Nuclear Data Applications

    Science.gov (United States)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  10. Performance Benchmarking of Fast Multipole Methods

    KAUST Repository

    Al-Harthi, Noha A.

    2013-06-01

    The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity – vector length, core count, and MPI process. Intel's Xeon Phi coprocessor, NVIDIA's Kepler GPU, and IBM's BlueGene/Q all have a byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing such as FFT are continuously evolving to keep up with this trend in hardware. In the meantime it is also necessary to invest in novel algorithms that are more suitable for computer architectures of the future. The fast multipole method (FMM) was originally developed as a fast algorithm for approximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex-based fluid dynamics simulations. The FMM possesses a unique combination of being an efficient O(N) algorithm while having an operational intensity that is higher than a matrix-matrix multiplication. In fact, the FMM can reduce the requirement of byte/flop to around 0.01, which means that it will remain compute bound until 2020 even if the current trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and BlueGene/Q. This study aims to provide a comprehensive benchmark of a state of the art FMM code "exaFMM" on the latest architectures, in hopes of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning about certain problem-size domains where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than the asymptotics themselves, and therefore are strongly implementation and hardware
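
    The byte/flop argument above is roofline arithmetic: an algorithm stays compute bound as long as the bytes it must move per flop are below the machine's byte/flop balance. A trivial check with illustrative machine numbers:

        peak_flops = 1.0e12                 # flop/s (illustrative)
        mem_bw = 2.0e11                     # bytes/s (illustrative)
        machine_balance = mem_bw / peak_flops     # 0.2 byte/flop, as cited
        fmm_requirement = 0.01                    # byte/flop, as cited
        print(fmm_requirement < machine_balance)  # True -> compute bound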

  11. Quality benchmarking methodology: Case study of finance and culture industries in Latvia

    Directory of Open Access Journals (Sweden)

    Ieva Zemīte

    2011-01-01

    Political, socio-economic and cultural changes that have taken place in the world during the last years have influenced all spheres. Constant improvement is necessary to survive in competitive and shrinking markets, which sets high quality standards for the service industries. It is therefore important to compare quality criteria in order to ascertain which practices achieve superior performance levels. At present, companies in Latvia do not carry out mutual benchmarking, and as a result do not know how they rank against their peers in terms of quality, nor do they see the benefits of sharing information and benchmarking. The purpose of this paper is to determine the criteria of qualitative benchmarking and to investigate the use of benchmarking quality in service industries, particularly the finance and culture sectors in Latvia, in order to determine the key driving factors of quality, to explore internal and foreign benchmarks, and to reveal the full potential of input reduction and efficiency growth for the aforementioned industries. Case studies and other tools are used to define the readiness of a company for benchmarking. Certain key factors are examined for their impact on quality criteria. The results are based on research conducted in professional associations in the defined fields (insurance and theatre). Originality/value: this is the first study that adopts benchmarking models for measuring quality criteria and readiness for mutual comparison in the insurance and theatre industries in Latvia.

  12. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  13. A meta-analysis of hypnosis for chronic pain problems: a comparison between hypnosis, standard care, and other psychological interventions.

    Science.gov (United States)

    Adachi, Tomonori; Fujino, Haruo; Nakae, Aya; Mashimo, Takashi; Sasaki, Jun

    2014-01-01

    Hypnosis is regarded as an effective treatment for psychological and physical ailments. However, its efficacy as a strategy for managing chronic pain has not been assessed through meta-analytical methods. The objective of the current study was to conduct a meta-analysis to assess the efficacy of hypnosis for managing chronic pain. When compared with standard care, hypnosis provided moderate treatment benefit. Hypnosis also showed a moderate superior effect as compared to other psychological interventions for a nonheadache group. The results suggest that hypnosis is efficacious for managing chronic pain. Given that large heterogeneity among the included studies was identified, the nature of hypnosis treatment is further discussed.

  14. Benchmark Imagery FY11 Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pope, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-06-14

    This report details the work performed in FY11 under project LL11-GS-PD06, “Benchmark Imagery for Assessing Geospatial Semantic Extraction Algorithms.” The original LCP for the Benchmark Imagery project called for creating a set of benchmark imagery for verifying and validating algorithms that extract semantic content from imagery. More specifically, the first year was slated to deliver real imagery that had been annotated, the second year to deliver real imagery that had composited features, and the final year was to deliver synthetic imagery modeled after the real imagery.

  15. Shielding benchmark tests of JENDL-3

    International Nuclear Information System (INIS)

    Kawai, Masayoshi; Hasegawa, Akira; Ueki, Kohtaro; Yamano, Naoki; Sasaki, Kenji; Matsumoto, Yoshihiro; Takemura, Morio; Ohtani, Nobuo; Sakurai, Kiyoshi.

    1994-03-01

    The integral test of neutron cross sections for major shielding materials in JENDL-3 has been performed by analyzing various shielding benchmark experiments. For the fission-like neutron source problem, the following experiments are analyzed: (1) ORNL Broomstick experiments for oxygen, iron and sodium, (2) ASPIS deep penetration experiments for iron, (3) ORNL neutron transmission experiments for iron, stainless steel, sodium and graphite, (4) KfK leakage spectrum measurements from iron spheres, (5) RPI angular neutron spectrum measurements in a graphite block. For the D-T neutron source problem, the following two experiments are analyzed: (6) LLNL leakage spectrum measurements from spheres of iron and graphite, and (7) JAERI-FNS angular neutron spectrum measurements on beryllium and graphite slabs. Analyses have been performed using the radiation transport codes ANISN (1D Sn), DIAC (1D Sn), DOT3.5 (2D Sn) and MCNP (3D point Monte Carlo). The group cross sections for Sn transport calculations are generated with the code systems PROF-GROUCH-G/B and RADHEAT-V4. The point-wise cross sections for MCNP are produced with NJOY. For comparison, the analyses with JENDL-2 and ENDF/B-IV have also been carried out. The calculations using JENDL-3 show overall agreement with the experimental data, as do those with ENDF/B-IV. In particular, JENDL-3 gives better results than JENDL-2 and ENDF/B-IV for sodium. It has been concluded that JENDL-3 is very applicable for fission and fusion reactor shielding analyses. (author)

  16. BENCHMARKING ORTEC ISOTOPIC MEASUREMENTS AND CALCULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Dewberry, R; Raymond Sigg, R; Vito Casella, V; Nitin Bhatt, N

    2008-09-29

    This report presents a description of compiled benchmark tests conducted to probe and to demonstrate the extensive utility of the Ortec ISOTOPIC γ-ray analysis computer program. The ISOTOPIC program performs analyses of γ-ray spectra applied to specific acquisition configurations in order to apply finite-geometry correction factors and sample-matrix-container photon absorption correction factors. The analysis program provides an extensive set of preset acquisition configurations to which the user can add relevant parameters in order to build the geometry and absorption correction factors that the program determines from calculus and from nuclear γ-ray absorption and scatter data. The Analytical Development Section field nuclear measurement group of the Savannah River National Laboratory uses the Ortec ISOTOPIC analysis program extensively for analyses of solid waste and process holdup applied to passive γ-ray acquisitions. Frequently the results of these γ-ray acquisitions and analyses are used to determine compliance with facility criticality safety guidelines. Another use of results is to designate 55-gallon drum solid waste as qualified TRU waste or as low-level waste. Other examples of the application of the ISOTOPIC analysis technique to passive γ-ray acquisitions include analyses of standard waste box items and unique solid waste configurations. In many passive γ-ray acquisition circumstances the container and sample have sufficient density that the calculated energy-dependent transmission correction factors have intrinsic uncertainties in the range 15%-100%. This is frequently the case when assaying 55-gallon drums of solid waste with masses of up to 400 kg and when assaying solid waste in extensive unique containers. Often an accurate assay of the transuranic content of these containers is not required, but rather a good defensible designation as >100 nCi/g (TRU waste) or <100 nCi/g (low-level solid waste) is required.

  17. Problems of Abstraction: Defining an American Standard for Mathematics Education at the Turn of the Twentieth Century

    Science.gov (United States)

    Fiss, Andrew

    2012-08-01

    Throughout the nineteenth century, the sciences in the United States went through many professional and disciplinary shifts. While the impact of these changes on university education has been well established, their consequences at the level of high school education have been often overlooked. In mathematics, debates at the level of university officials found clear outlets in the reform movement concerning secondary school offerings and college entrance requirements. This article therefore focuses on these debates and also the attempts to achieve compromises through standardized curricula in the recommendations of the Committee of Ten. In discussing the interplay between university and secondary education, it exposes a feature of the history of science education that has been neglected.

  18. Systematic benchmarking of microarray data classification: assessing the role of non-linearity and dimensionality reduction.

    Science.gov (United States)

    Pochet, Nathalie; De Smet, Frank; Suykens, Johan A K; De Moor, Bart L R

    2004-11-22

    Microarrays are capable of determining the expression levels of thousands of genes simultaneously. In combination with classification methods, this technology can be useful to support clinical management decisions for individual patients, e.g. in oncology. The aim of this paper is to systematically benchmark the role of non-linear versus linear techniques and dimensionality reduction methods. A systematic benchmarking study is performed by comparing linear versions of standard classification and dimensionality reduction techniques with their non-linear versions based on non-linear kernel functions with a radial basis function (RBF) kernel. A total of 9 binary cancer classification problems, derived from 7 publicly available microarray datasets, and 20 randomizations of each problem are examined. Three main conclusions can be formulated based on the performances on independent test sets. (1) When performing classification with least squares support vector machines (LS-SVMs) (without dimensionality reduction), RBF kernels can be used without risking too much overfitting. The results obtained with well-tuned RBF kernels are never worse and sometimes even statistically significantly better compared to results obtained with a linear kernel in terms of test set receiver operating characteristic and test set accuracy performances. (2) Even for classification with linear classifiers like LS-SVM with linear kernel, using regularization is very important. (3) When performing kernel principal component analysis (kernel PCA) before classification, using an RBF kernel for kernel PCA tends to result in overfitting, especially when using supervised feature selection. It has been observed that an optimal selection of a large number of features is often an indication for overfitting. Kernel PCA with linear kernel gives better results.
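
    A rough scikit-learn analogue of the comparison described above (the paper uses LS-SVMs; a standard SVM with linear versus RBF kernels serves as a stand-in here), including kernel PCA before classification:

        from sklearn.datasets import make_classification
        from sklearn.decomposition import KernelPCA
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # high-dimensional, small-sample data mimicking a microarray study
        X, y = make_classification(n_samples=80, n_features=2000,
                                   n_informative=20, random_state=0)

        for kpca in ("linear", "rbf"):
            for svm in ("linear", "rbf"):
                clf = make_pipeline(StandardScaler(),
                                    KernelPCA(n_components=20, kernel=kpca),
                                    SVC(kernel=svm, C=1.0))
                acc = cross_val_score(clf, X, y, cv=5).mean()
                print(f"kPCA={kpca:6s} SVM={svm:6s} accuracy={acc:.2f}")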

  19. Competency based training in robotic surgery: benchmark scores for virtual reality robotic simulation.

    Science.gov (United States)

    Raison, Nicholas; Ahmed, Kamran; Fossati, Nicola; Buffi, Nicolò; Mottrie, Alexandre; Dasgupta, Prokar; Van Der Poel, Henk

    2017-05-01

    To develop benchmark scores of competency for use within a competency based virtual reality (VR) robotic training curriculum. This longitudinal, observational study analysed results from nine European Association of Urology hands-on-training courses in VR simulation. In all, 223 participants ranging from novice to expert robotic surgeons completed 1565 exercises. Competency was set at 75% of the mean expert score. Benchmark scores for all general performance metrics generated by the simulator were calculated. Assessment exercises were selected by expert consensus and through learning-curve analysis. Three basic skill and two advanced skill exercises were identified. Benchmark scores based on expert performance offered viable targets for novice and intermediate trainees in robotic surgery. Novice participants met the competency standards for most basic skill exercises; however, advanced exercises were significantly more challenging. Intermediate participants performed better across the seven metrics but still did not achieve the benchmark standard in the more difficult exercises. Benchmark scores derived from expert performances offer relevant and challenging scores for trainees to achieve during VR simulation training. Objective feedback allows both participants and trainers to monitor educational progress and ensures that training remains effective. Furthermore, the well-defined goals set through benchmarking offer clear targets for trainees and enable training to move to a more efficient competency based curriculum.
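
    The competency rule stated above is a one-line computation per metric; a minimal sketch with invented scores (for metrics where higher is better):

        import numpy as np

        expert = {"overall_score": np.array([82.0, 90.0, 88.0]),
                  "economy_of_motion": np.array([75.0, 81.0, 78.0])}
        benchmarks = {m: 0.75 * s.mean() for m, s in expert.items()}
        print(benchmarks)  # a trainee reaching these values is competent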

  20. Lesson learned from the SARNET wall condensation benchmarks

    International Nuclear Information System (INIS)

    Ambrosini, W.; Forgione, N.; Merli, F.; Oriolo, F.; Paci, S.; Kljenak, I.; Kostka, P.; Vyskocil, L.; Travis, J.R.; Lehmkuhl, J.; Kelm, S.; Chin, Y.-S.; Bucci, M.

    2014-01-01

    Highlights: • The results of the benchmarking activity on wall condensation are reported. • The work was performed in the frame of SARNET. • General modelling techniques for condensation are discussed. • Results of the University of Pisa and of other benchmark participants are discussed. • The lesson learned is drawn. - Abstract: The prediction of condensation in the presence of noncondensable gases has received continuing attention in the frame of the Severe Accident Research Network of Excellence, both in the first (2004–2008) and in the second (2009–2013) EC integrated projects. Among the reasons why this basic phenomenon, addressed by classical treatments dating from the first decades of the last century, remains so relevant is the interest in developing updated CFD models for reactor containment analysis, which requires validating the available modelling techniques at a different level. In the frame of SARNET, benchmarking activities were undertaken taking advantage of the work performed at different institutions in setting up and developing models for steam condensation in conditions of interest for nuclear reactor containment. Four steps were performed in the activity, involving: (1) an idealized problem freely inspired by the actual conditions occurring in an experimental facility, CONAN, installed at the University of Pisa; (2) a first comparison with experimental data purposely collected by the CONAN facility; (3) a second comparison with data available from experimental campaigns performed in the same apparatus before the inclusion of the activities in SARNET; (4) a third exercise involving data obtained at lower mixture velocity than in previous campaigns, aimed at providing conditions closer to those addressed in reactor containment analyses. The last step of the benchmarking activity required changing the configuration of the experimental apparatus to achieve the lower flow rates involved in the new test specifications. The
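
    For orientation only, the sketch below shows the classical stagnant-film (diffusion-layer) relation often used as a starting point for modelling wall condensation in the presence of a noncondensable gas; the Sherwood-number correlation and all numbers are illustrative assumptions, not the SARNET participants' models or CONAN data.

        import numpy as np

        def condensation_flux(rho, D, Sh, L, w_v_bulk, w_v_int):
            """Stagnant-film model for steam condensation with a noncondensable gas:
               m'' = (rho * D * Sh / L) * ln((1 - w_v_int) / (1 - w_v_bulk)),
               where w_v_* are vapour mass fractions at the interface and in the bulk;
               the logarithm accounts for the suction effect of condensation."""
            g = rho * D * Sh / L                     # mass-transfer conductance [kg/m^2/s]
            return g * np.log((1.0 - w_v_int) / (1.0 - w_v_bulk))

        # Illustrative air-steam numbers and an assumed Dittus-Boelter-type analogy:
        Sh = 0.023 * 2.0e4**0.8 * 0.6**0.43
        print(condensation_flux(rho=1.2, D=2.5e-5, Sh=Sh, L=0.34,
                                w_v_bulk=0.3, w_v_int=0.1))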

  1. Analogy, an Alternative Model. Criticisms of the standard model of analogical problem solving and proposals for an alternative one

    Directory of Open Access Journals (Sweden)

    Ricardo A. Minervino

    2016-02-01

    Full Text Available The authors extended Hofstadter's criticisms of the standard approach to analogical thinking represented by the structure-mapping theory of Gentner and the multiconstraint theory of Holyoak and Thagard. Based on this extension, they proposed a non-serial model of analogical problem solving. Against the standard approach, the model postulates that: (a) people detect and evaluate differences between mapped elements before the subprocess of inference generation and consider them in order to control it, and (b) properties of an element that explain why the element could fill a certain role in the base problem resolution (PERs) play a crucial role in these detection and evaluation operations, and also in post-inference subprocesses. An experiment showed that: (a) people detect and evaluate the relevance of differences between mapped elements before inference generation, (b) they inhibit the generation of literal inferences when they face relevant differences, and (c) they stop the subprocess when they recognize insuperable ones. The results also showed that base PERs are reactivated at different moments of analogical transfer. The data obtained are incompatible with the standard theories of analogical thinking, which treat inference generation as a syntactic mechanism and exclude contextual semantic analysis from the study of analogy.

  2. Reactor fuel depletion benchmark of TINDER

    International Nuclear Information System (INIS)

    Martin, W.J.; Oliveira, C.R.E. de; Hecht, A.A.

    2014-01-01

    Highlights: • A reactor burnup benchmark of TINDER, coupling MCNP6 to CINDER2008, was performed. • TINDER is a poor candidate for fuel depletion calculations using its current libraries. • Data library modification is necessary if fuel depletion is desired from TINDER. - Abstract: Accurate burnup calculations are key to proper nuclear reactor design, fuel cycle modeling, and disposal estimations. The TINDER code, originally designed for activation analyses, has been modified to handle full burnup calculations, including the widely used predictor–corrector feature. In order to properly characterize the performance of TINDER for this application, a benchmark calculation was performed. Although the results followed the trends of past benchmarked codes for a UO2 PWR fuel sample from the Takahama-3 reactor, there were obvious deficiencies in the final result, likely in the nuclear data library that was used. Isotopic comparisons versus experiment and past code benchmarks are given, as well as hypothesized areas of deficiency and future work.
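
    As background for the predictor-corrector feature mentioned above, a minimal sketch of a matrix-exponential depletion step with predictor-corrector averaging follows; build_matrix is a hypothetical user-supplied routine that assembles the burnup matrix for a given composition, and nothing here reproduces TINDER or CINDER2008 internals.

        import numpy as np
        from scipy.linalg import expm

        def depletion_step(N0, burnup_matrix, dt):
            """One depletion step: solve dN/dt = A N via the matrix exponential."""
            return expm(burnup_matrix * dt) @ N0

        def predictor_corrector_step(N0, build_matrix, dt):
            """Generic predictor-corrector used by burnup codes: deplete with the
            beginning-of-step matrix, rebuild the matrix at the predicted end-of-step
            composition, deplete again, then average the two solutions."""
            A0 = build_matrix(N0)
            N_pred = depletion_step(N0, A0, dt)      # predictor
            A1 = build_matrix(N_pred)
            N_corr = depletion_step(N0, A1, dt)      # corrector
            return 0.5 * (N_pred + N_corr)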

  3. Medicare Contracting - Redacted Benchmark Metric Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services has compiled aggregate national benchmark cost and workload metrics using data submitted to CMS by the AB MACs and the...

  4. How benchmarking can improve patient nutrition.

    Science.gov (United States)

    Ellis, Jane

    Benchmarking is a tool that originated in business to enable organisations to compare their services with industry-wide best practice. Early last year the Department of Health published The Essence of Care, a benchmarking toolkit adapted for use in health care. It focuses on eight elements of care that are crucial to patients' experiences. Nurses and other health care professionals at a London NHS trust have begun a trust-wide benchmarking project. The aim is to improve patients' experiences of health care by sharing and comparing information, and by identifying examples of good practice and areas for improvement. The project began with two of the eight elements of The Essence of Care, with the intention of covering the rest later. This article describes the benchmarking process for nutrition and some of the consequent improvements in care.

  5. Measuring Distribution Performance? Benchmarking Warrants Your Attention

    Energy Technology Data Exchange (ETDEWEB)

    Ericson, Sean J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Alvarez, Paul [The Wired Group

    2018-04-13

    Identifying, designing, and measuring performance metrics is critical to securing customer value, but can be a difficult task. This article examines the use of benchmarks based on publicly available performance data to set challenging, yet fair, metrics and targets.

  6. Benchmarking Linked Open Data Management Systems

    NARCIS (Netherlands)

    R. Angles Rojas (Renzo); M.-D. Pham (Minh-Duc); P.A. Boncz (Peter)

    2014-01-01

    With inherent support for storing and analysing highly interconnected data, graph and RDF databases appear as natural solutions for developing Linked Open Data applications. However, current benchmarks for these database technologies do not fully attain the desirable characteristics

  7. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic...

  8. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    and professional performance but only if prior professional performance was low. Supplemental analyses support the robustness of our results. Findings indicate conditions under which bureaucratic benchmarking information may affect professional performance and advance research on professional control and social...

  9. Standard Model processes

    CERN Document Server

    Mangano, M.L.; Aguilar-Saavedra, Juan Antonio; Alekhin, S.; Badger, S.; Bauer, C.W.; Becher, T.; Bertone, V.; Bonvini, M.; Boselli, S.; Bothmann, E.; Boughezal, R.; Cacciari, M.; Carloni Calame, C.M.; Caola, F.; Campbell, J.M.; Carrazza, S.; Chiesa, M.; Cieri, L.; Cimaglia, F.; Febres Cordero, F.; Ferrarese, P.; D'Enterria, D.; Ferrera, G.; Garcia i Tormo, X.; Garzelli, M.V.; Germann, E.; Hirschi, V.; Han, T.; Ita, H.; Jäger, B.; Kallweit, S.; Karlberg, A.; Kuttimalai, S.; Krauss, F.; Larkoski, A.J.; Lindert, J.; Luisoni, G.; Maierhöfer, P.; Mattelaer, O.; Martinez, H.; Moch, S.; Montagna, G.; Moretti, M.; Nason, P.; Nicrosini, O.; Oleari, C.; Pagani, D.; Papaefstathiou, A.; Petriello, F.; Piccinini, F.; Pierini, M.; Pierog, T.; Pozzorini, S.; Re, E.; Robens, T.; Rojo, J.; Ruiz, R.; Sakurai, K.; Salam, G.P.; Salfelder, L.; Schönherr, M.; Schulze, M.; Schumann, S.; Selvaggi, M.; Shivaji, A.; Siodmok, A.; Skands, P.; Torrielli, P.; Tramontano, F.; Tsinikos, I.; Tweedie, B.; Vicini, A.; Westhoff, S.; Zaro, M.; Zeppenfeld, D.; CERN. Geneva. ATS Department

    2017-06-22

    This report summarises the properties of Standard Model processes at the 100 TeV pp collider. We document the production rates and typical distributions for a number of benchmark Standard Model processes, and discuss new dynamical phenomena arising at the highest energies available at this collider. We discuss the intrinsic physics interest in the measurement of these Standard Model processes, as well as their role as backgrounds for New Physics searches.

  10. Monte Carlo benchmark calculations for 400MWTH PBMR core

    International Nuclear Information System (INIS)

    Kim, H. C.; Kim, J. K.; Kim, S. Y.; Noh, J. M.

    2007-01-01

    A large interest in high-temperature gas-cooled reactors (HTGR) has been initiated in connection with hydrogen production in recent years. In this study, as a part of work for establishing a Monte Carlo computation system for HTGR core analysis, some benchmark calculations for a pebble-type HTGR were carried out using the MCNP5 code. The core of the 400 MWth Pebble-bed Modular Reactor (PBMR) was selected as a benchmark model. Recently, the IAEA CRP5 neutronics and thermal-hydraulics benchmark problem was proposed for the testing of existing methods for HTGRs to analyze the neutronics and thermal-hydraulic behavior for the design and safety evaluations of the PBMR. This study deals with the neutronic benchmark problems, for fresh fuel and cold conditions (Case F-1), and first core loading with given number densities (Case F-2), proposed for the PBMR. After detailed MCNP modeling of the whole facility, benchmark calculations were performed. The spherical fuel region of a fuel pebble is divided into cubic lattice elements in order to model a fuel pebble which contains, on average, 15000 CFPs (Coated Fuel Particles). Each element contains one CFP at its center. In this study, the side length of each cubic lattice element needed to hold the same amount of fuel was calculated to be 0.1635 cm. The remaining volume of each lattice element was filled with graphite. All 5 different concentric shells of the CFP were modeled. The PBMR annular core consists of approximately 452000 pebbles in the benchmark problems. In Case F-1, where the core was filled with only fresh fuel pebbles, a BCC (body-centered-cubic) lattice model was employed in order to achieve a random packing core with a packing fraction of 0.61. The BCC lattice was also employed with the size of the moderator pebble increased in a manner that reproduces the specified F/M ratio of 1:2 while preserving the packing fraction of 0.61 in Case F-2. The calculations were pursued with the ENDF/B-VI cross-section library and used sab2002 S(α,
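
    The quoted side length of 0.1635 cm follows from dividing the pebble fuel-zone volume equally among the 15000 CFPs; the sketch below reproduces the arithmetic, assuming the standard PBMR fuel-zone radius of 2.5 cm (a value not stated in the abstract).

        import numpy as np

        r_fuel = 2.5                                  # assumed fuel-zone radius [cm]
        n_cfp = 15000                                 # average CFPs per pebble
        v_fuel = 4.0 / 3.0 * np.pi * r_fuel**3        # fuel-zone volume ~ 65.45 cm^3
        v_cell = v_fuel / n_cfp                       # volume of one cubic lattice element
        side = v_cell ** (1.0 / 3.0)                  # ~ 0.1635 cm, matching the text
        print(f"cube side = {side:.4f} cm")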

  11. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes...... attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes....

  12. Practice benchmarking in the age of targeted auditing.

    Science.gov (United States)

    Langdale, Ryan P; Holland, Ben F

    2012-11-01

    The frequency and sophistication of health care reimbursement auditing has progressed rapidly in recent years, leaving many oncologists wondering whether their private practices would survive a full-scale Office of the Inspector General (OIG) investigation. The Medicare Part B claims database provides a rich source of information for physicians seeking to understand how their billing practices measure up to their peers, both locally and nationally. This database was dissected by a team of cancer specialists to uncover important benchmarks related to targeted auditing. All critical Medicare charges, payments, denials, and service ratios in this article were derived from the full 2010 Medicare Part B claims database. Relevant claims were limited by using Medicare provider specialty codes 83 (hematology/oncology) and 90 (medical oncology), with an emphasis on claims filed from the physician office place of service (11). All charges, denials, and payments were summarized at the Current Procedural Terminology code level to drive practice benchmarking standards. A careful analysis of this data set, combined with the published audit priorities of the OIG, produced germane benchmarks from which medical oncologists can monitor, measure and improve on common areas of billing fraud, waste or abuse in their practices. Part II of this series and analysis will focus on information pertinent to radiation oncologists.

  13. Analytical Radiation Transport Benchmarks for The Next Century

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    2005-01-01

    Verification of large-scale computational algorithms used in nuclear engineering and radiological applications is an essential element of reliable code performance. For this reason, the development of a suite of multidimensional semi-analytical benchmarks has been undertaken to provide independent verification of proper operation of codes dealing with the transport of neutral particles. The benchmarks considered cover several one-dimensional, multidimensional, monoenergetic and multigroup, fixed source and critical transport scenarios. The first approach is based on the Green's function: in slab geometry, the Green's function is incorporated into a set of integral equations for the boundary fluxes. Through a numerical Fourier transform inversion and subsequent matrix inversion for the boundary fluxes, a semi-analytical benchmark emerges. Multidimensional solutions in a variety of infinite media are also based on the slab Green's function. In a second approach, a new converged SN method is developed. In this method, the SN solution is 'mined' to bring out hidden high quality solutions. For this case multigroup fixed source and criticality transport problems are considered. Remarkably accurate solutions can be obtained with this new method, called the Multigroup Converged SN (MGCSN) method, as will be demonstrated.
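
    As a small, self-contained example of the kind of discrete-ordinates solution such benchmarks are meant to verify, the following sketch performs source iteration with diamond differencing for one-group SN transport in a slab with isotropic scattering and vacuum boundaries. It is a generic textbook scheme, not the MGCSN method itself.

        import numpy as np

        def sn_slab(nx=100, width=10.0, sigma_t=1.0, sigma_s=0.5, q=1.0, n_ang=8,
                    tol=1e-8, max_it=500):
            """Source iteration for one-group SN transport in a slab: isotropic
            scattering, uniform source q, vacuum boundaries, diamond differencing."""
            dx = width / nx
            mu, w = np.polynomial.legendre.leggauss(n_ang)   # Gauss-Legendre quadrature
            phi = np.zeros(nx)
            for _ in range(max_it):
                phi_old = phi.copy()
                src = 0.5 * (sigma_s * phi_old + q)          # isotropic emission density
                phi = np.zeros(nx)
                for m in range(n_ang):
                    psi_edge = 0.0                           # vacuum inflow
                    cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
                    for i in cells:
                        # diamond-difference cell balance for direction mu[m]
                        psi_c = (src[i] * dx + 2 * abs(mu[m]) * psi_edge) / \
                                (2 * abs(mu[m]) + sigma_t * dx)
                        psi_edge = 2 * psi_c - psi_edge      # outgoing edge flux
                        phi[i] += w[m] * psi_c               # accumulate scalar flux
                if np.max(np.abs(phi - phi_old)) < tol:
                    break
            return phi

        print(sn_slab()[:5])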

  14. Nonlinear viscoplasticity in ASPECT: benchmarking and applications to subduction

    Directory of Open Access Journals (Sweden)

    A. Glerum

    2018-03-01

    Full Text Available ASPECT (Advanced Solver for Problems in Earth's ConvecTion is a massively parallel finite element code originally designed for modeling thermal convection in the mantle with a Newtonian rheology. The code is characterized by modern numerical methods, high-performance parallelism and extensibility. This last characteristic is illustrated in this work: we have extended the use of ASPECT from global thermal convection modeling to upper-mantle-scale applications of subduction. Subduction modeling generally requires the tracking of multiple materials with different properties and with nonlinear viscous and viscoplastic rheologies. To this end, we implemented a frictional plasticity criterion that is combined with a viscous diffusion and dislocation creep rheology. Because ASPECT uses compositional fields to represent different materials, all material parameters are made dependent on a user-specified number of fields. The goal of this paper is primarily to describe and verify our implementations of complex, multi-material rheology by reproducing the results of four well-known two-dimensional benchmarks: the indentor benchmark, the brick experiment, the sandbox experiment and the slab detachment benchmark. Furthermore, we aim to provide hands-on examples for prospective users by demonstrating the use of multi-material viscoplasticity with three-dimensional, thermomechanical models of oceanic subduction, putting ASPECT on the map as a community code for high-resolution, nonlinear rheology subduction modeling.
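
    A minimal sketch of how a frictional (Drucker-Prager) yield criterion is commonly combined with a creep flow law into a single effective viscosity, in the spirit of the rheology described above; the flow-law form, the min() combination and all parameters are illustrative assumptions rather than ASPECT's exact implementation.

        import numpy as np

        def viscoplastic_viscosity(edot_II, P, T, A, n, E, C, phi, R=8.314,
                                   eta_max=1e24):
            """Effective viscosity combining dislocation creep with a Drucker-Prager
            yield criterion (illustrative forms):
              creep:   eta_v = 0.5 * A^(-1/n) * edot_II^((1-n)/n) * exp(E / (n R T))
              plastic: eta_p = (C cos(phi) + P sin(phi)) / (2 edot_II)
            The smaller of the two is taken, i.e. the material yields when the creep
            stress would exceed the yield stress."""
            eta_v = 0.5 * A**(-1.0 / n) * edot_II**((1.0 - n) / n) \
                    * np.exp(E / (n * R * T))
            yield_stress = C * np.cos(phi) + P * np.sin(phi)
            eta_p = yield_stress / (2.0 * edot_II)
            return min(eta_v, eta_p, eta_max)

        # Illustrative upper-mantle numbers:
        print(viscoplastic_viscosity(edot_II=1e-15, P=1e9, T=1200.0,
                                     A=1e-16, n=3.5, E=5.3e5,
                                     C=2e7, phi=np.radians(20.0)))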

  15. Pre-evaluation of fusion shielding benchmark experiment

    International Nuclear Information System (INIS)

    Hayashi, K.; Handa, H.; Konno, C.

    1994-01-01

    Shielding benchmark experiments are very useful for testing design codes and nuclear data for fusion devices. There are many types of benchmark experiments that should be done for fusion shielding problems, but time and budget are limited. It is therefore important to select and determine effective experimental configurations by precalculation before the experiment. The authors did three types of pre-evaluation to determine the experimental assembly configurations of shielding benchmark experiments planned at FNS, JAERI. (1) Void Effect Experiment - The purpose of this experiment is to measure the local increase of dose and nuclear heating behind small void(s) in shield material. Dimensions of the voids and their arrangements were decided as follows. Dose and nuclear heating were calculated both with and without void(s). The minimum size of the void was determined so that the ratio of these two results would be larger than the error of the measurement system. (2) Auxiliary Shield Experiment - The purpose of this experiment is to measure the shielding properties of B4C, Pb, W, and the dose around a superconducting magnet (SCM). The thicknesses of B4C, Pb and W and their arrangement, including multilayer configurations, were determined. (3) SCM Nuclear Heating Experiment - The purpose of this experiment is to measure nuclear heating and dose distribution in SCM material. Because it is difficult to use liquid helium as a part of the SCM mock-up material, material compositions of the SCM mock-up were surveyed to have a nuclear heating property similar to the real SCM composition.

  16. OR-Benchmark: An Open and Reconfigurable Digital Watermarking Benchmarking Framework

    OpenAIRE

    Wang, Hui; Ho, Anthony TS; Li, Shujun

    2015-01-01

    Benchmarking digital watermarking algorithms is not an easy task because different applications of digital watermarking often have very different sets of requirements and trade-offs between conflicting requirements. While there have been some general-purpose digital watermarking benchmarking systems available, they normally do not support complicated benchmarking tasks and cannot be easily reconfigured to work with different watermarking algorithms and testing conditions. In this paper, we pr...

  17. Action-Oriented Benchmarking: Concepts and Tools

    Energy Technology Data Exchange (ETDEWEB)

    California Energy Commission; Mathew, Paul; Mills, Evan; Mathew, Paul; Piette, Mary Ann; Bourassa, Norman; Brook, Martha

    2008-02-13

    Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article--Part 1 of a two-part series--we describe an 'action-oriented benchmarking' approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful metrics and to identify, screen and prioritize potential efficiency improvements. This opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking system and associated software tool-EnergyIQ. The benchmarking methods, visualizations, and user interface design are informed by an end-user needs assessment survey and best-practice guidelines from ASHRAE.

  18. Pool critical assembly pressure vessel facility benchmark

    International Nuclear Information System (INIS)

    Remec, I.; Kam, F.B.K.

    1997-07-01

    This pool critical assembly (PCA) pressure vessel wall facility benchmark (PCA benchmark) is described and analyzed in this report. Analysis of the PCA benchmark can be used for partial fulfillment of the requirements for the qualification of the methodology for pressure vessel neutron fluence calculations, as required by the US Nuclear Regulatory Commission regulatory guide DG-1053. Section 1 of this report describes the PCA benchmark and provides all data necessary for the benchmark analysis. The measured quantities, to be compared with the calculated values, are the equivalent fission fluxes. In Section 2 the analysis of the PCA benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed for three ENDF/B-VI-based multigroup libraries: BUGLE-93, SAILOR-95, and BUGLE-96. An excellent agreement of the calculated (C) and measured (M) equivalent fission fluxes was obtained. The arithmetic average C/M for all the dosimeters (total of 31) was 0.93 ± 0.03 and 0.92 ± 0.03 for the SAILOR-95 and BUGLE-96 libraries, respectively. The average C/M ratio, obtained with the BUGLE-93 library, for the 28 measurements was 0.93 ± 0.03 (the neptunium measurements in the water and air regions were overpredicted and excluded from the average). No systematic decrease in the C/M ratios with increasing distance from the core was observed for any of the libraries used

  19. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  20. Perform qualify reliability-power tests by shooting common mistakes: practical problems and standard answers per Telcordia/Bellcore requests

    Science.gov (United States)

    Yu, Zheng

    2002-08-01

    Facing the new demands of the optical fiber communications market, almost all of the performance and reliability of an optical network system depends on the qualification of its fiber optics components. Complying with system requirements through Telcordia/Bellcore reliability and high-power testing has therefore become the key issue for fiber optics component manufacturers. Qualification under Telcordia/Bellcore reliability or high-power testing is crucial for manufacturers, since it determines who stands out in an intensely competitive market, and the tests themselves need maintenance and optimization. Work on reliability and high-power testing has become a new demand of the market, and a way is needed to reach the 'Triple-Win' goal expected by component makers, reliability testers and system users. For those meeting practical problems in testing, the following seven topics address how to avoid the common mistakes when performing qualified reliability and high-power testing: • Qualification maintenance requirements for reliability testing • Lot control in preparing for reliability testing • Sample selection for reliability testing • Interim measurements during reliability testing • Basic reference factors relating to high-power testing • The necessity of re-qualification testing when production changes • Understanding product-family similarity through the definitions

  1. An enhanced RNA alignment benchmark for sequence alignment programs

    Directory of Open Access Journals (Sweden)

    Steger Gerhard

    2006-10-01

    Full Text Available Abstract Background The performance of alignment programs is traditionally tested on sets of protein sequences, of which a reference alignment is known. Conclusions drawn from such protein benchmarks do not necessarily hold for the RNA alignment problem, as was demonstrated in the first RNA alignment benchmark published so far. For example, the twilight zone – the similarity range where alignment quality drops drastically – starts at 60 % for RNAs in comparison to 20 % for proteins. In this study we enhance the previous benchmark. Results The RNA sequence sets in the benchmark database are taken from an increased number of RNA families to avoid unintended impact by using only a few families. The size of sets varies from 2 to 15 sequences to assess the influence of the number of sequences on program performance. Alignment quality is scored by two measures: one takes into account only nucleotide matches, the other measures structural conservation. The performance order of parameters – like nucleotide substitution matrices and gap-costs – as well as of programs is rated by rank tests. Conclusion Most sequence alignment programs perform equally well on RNA sequence sets with high sequence identity, that is with an average pairwise sequence identity (APSI) above 75 %. Parameters for gap-open and gap-extension have a large influence on alignment quality at APSI ≤ 75 %; optimal parameter combinations are shown for several programs. The use of different 4 × 4 substitution matrices improved program performance only in some cases. The performance of iterative programs drastically increases with increasing sequence numbers and/or decreasing sequence identity, which makes them clearly superior to programs using a purely non-iterative, progressive approach. The best sequence alignment programs produce alignments of high quality down to APSI > 55 %; at lower APSI the use of sequence+structure alignment programs is recommended.
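
    For concreteness, one common way of computing the APSI figure used throughout the benchmark is sketched below; conventions for gap columns differ between benchmarks, so this is an assumed definition rather than the authors' exact script.

        from itertools import combinations

        def pairwise_identity(a, b):
            """Percent identity of two aligned sequences; columns where both
            sequences have a gap '-' are skipped."""
            assert len(a) == len(b)
            cols = [(x, y) for x, y in zip(a, b) if x != '-' or y != '-']
            matches = sum(x == y and x != '-' for x, y in cols)
            return 100.0 * matches / len(cols)

        def apsi(aligned_seqs):
            """Average pairwise sequence identity (APSI) over all sequence pairs."""
            pairs = list(combinations(aligned_seqs, 2))
            return sum(pairwise_identity(a, b) for a, b in pairs) / len(pairs)

        print(apsi(["ACGU-ACG", "ACGUUACG", "AGGU-AGG"]))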

  2. Benchmarks for targeted alpha therapy for cancer

    International Nuclear Information System (INIS)

    Allen, J.B.

    2011-01-01

    Full text: Targeted alpha therapy (TAT) needs to achieve certain benchmarks if it is to find its way into the clinic. This paper reviews the status of benchmarks for dose normalisation, microdosimetry, response of micrometastases to therapy, maximum tolerance doses and adequate supplies of alpha-emitting radioisotopes. In comparing dose effect for different alpha immunoconjugates (IC), patients and diseases, it is appropriate to normalise dose according to specific factors that affect the efficacy of the treatment. Body weight and body surface area are two commonly used criteria. However, more advanced criteria are required, such as the volume of distribution. Alpha dosimetry presents a special challenge in clinical trials. Monte Carlo calculations can be used to determine specific energies, but these need validation. This problem could be resolved with micronuclei biological dosimetry and mutagenesis studies of radiation damage. While macroscopic disease can be monitored, the impact of therapy on subclinical microscopic disease is a real problem. Magnetic cell separation of cancer cells in the blood with magnetic microspheres coated with the targeting monoclonal antibody could provide the response data. Alpha therapy needs first to establish maximum tolerance doses for practical acceptance. This has been determined with 213Bi-IC for acute myelogenous leukaemia at ∼1 mCi/kg. The maximum tolerance dose has not yet been established for metastatic melanoma, but the efficacious dose for some melanomas is less than 0.3 mCi/kg and for intra-cavity therapy of GBM it is ∼0.14 mCi/kg for 211At-IC. In the case of Ra-223 for bone cancer, the emission of four alphas with a total energy of 27 MeV results in very high cytotoxicity and an effective dose of only ∼5 μCi/kg. The limited supplies of Ac-225 available after separation from Th-229 are adequate for clinical trials. However, should TAT become a clinical procedure, then new supplies must be found. Accelerator

  3. Benchmark calculation of APOLLO2 and SLAROM-UF in a fast reactor lattice

    International Nuclear Information System (INIS)

    Hazama, Taira

    2009-10-01

    A lattice cell benchmark calculation is carried out for APOLLO2 and SLAROM-UF on the infinite lattice of a simple pin cell featuring a fast reactor. The accuracy in k-infinity and reaction rates is investigated in their reference and standard level calculations. In the 1st reference level calculation, APOLLO2 and SLAROM-UF agree with the reference value of k-infinity obtained by a continuous energy Monte Carlo calculation within 50 pcm. However, larger errors are observed in a particular reaction rate and energy range. A major problem common to both codes is in the cross section library of 239Pu in the unresolved energy range. In the 2nd reference level calculation, which is based on the ECCO 1968 group structure, both results of k-infinity agree with the reference value within 100 pcm. The resonance overlap effect is observed by several percents in cross sections of heavy nuclides. In the standard level calculation based on the APOLLO2 library creation methodology, a discrepancy appears by more than 300 pcm. A restriction is revealed in APOLLO2. Its standard cross section library does not have a sufficiently small background cross section to evaluate the self-shielding effect of 56Fe cross sections. The restriction can be removed by introducing the mixture self-shielding treatment recently introduced to APOLLO2. SLAROM-UF original standard level calculation based on the JFS-3 library creation methodology is the best among the standard level calculations. Improvement from the SLAROM-UF standard level calculation is achieved mainly by use of a proper weight function for light or intermediate nuclides. (author)
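
    The k-infinity discrepancies above are quoted in pcm; a small sketch of one common convention (the reactivity difference between a code result and a reference) follows, with illustrative numbers rather than values from the benchmark.

        def reactivity_diff_pcm(k, k_ref):
            """Reactivity difference between two multiplication factors, in pcm:
            1e5 * (k - k_ref) / (k * k_ref)."""
            return 1e5 * (k - k_ref) / (k * k_ref)

        # e.g. a hypothetical code result of 1.3642 against a Monte Carlo
        # reference of 1.3635:
        print(f"{reactivity_diff_pcm(1.3642, 1.3635):+.0f} pcm")  # about +38 pcm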

  4. Benchmark calculation of APOLLO-2 and SLAROM-UF in a fast reactor lattice

    International Nuclear Information System (INIS)

    Hazama, T.

    2009-07-01

    A lattice cell benchmark calculation is carried out for APOLLO2 and SLAROM-UF on the infinite lattice of a simple pin cell featuring a fast reactor. The accuracy in k-infinity and reaction rates is investigated in their reference and standard level calculations. In the 1st reference level calculation, APOLLO2 and SLAROM-UF agree with the reference value of k-infinity obtained by a continuous energy Monte Carlo calculation within 50 pcm. However, larger errors are observed in a particular reaction rate and energy range. The major problem common to both codes is in the cross section library of 239Pu in the unresolved energy range. In the 2nd reference level calculation, which is based on the ECCO 1968 group structure, both results of k-infinity agree with the reference value within 100 pcm. The resonance overlap effect is observed by several percents in cross sections of heavy nuclides. In the standard level calculation based on the APOLLO2 library creation methodology, a discrepancy appears by more than 300 pcm. A restriction is revealed in APOLLO2. Its standard cross section library does not have a sufficiently small background cross section to evaluate the self-shielding effect on 56Fe cross sections. The restriction can be removed by introducing the mixture self-shielding treatment recently introduced to APOLLO2. SLAROM-UF original standard level calculation based on the JFS-3 library creation methodology is the best among the standard level calculations. Improvement from the SLAROM-UF standard level calculation is achieved mainly by use of a proper weight function for light or intermediate nuclides. (author)

  5. Benchmarking as a strategy policy tool for energy management

    NARCIS (Netherlands)

    Rienstra, S.A.; Nijkamp, P.

    2002-01-01

    In this paper we analyse to what extent benchmarking is a valuable tool in strategic energy policy analysis. First, the theory on benchmarking is concisely presented, e.g., by discussing the benchmark wheel and the benchmark path. Next, some results of surveys among business firms are presented. To

  6. Benchmarking of hospital information systems – a comparative analysis of benchmarking clusters in the German-speaking countries

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system's and information management's costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme observes both the general conditions and the examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their cooperation forms. The benchmarking clusters also deal with different benchmarking subjects. The costs and quality of application systems, physical data processing systems, organizational structures of information management and IT service processes are the most frequently assessed benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  7. Clean Energy Manufacturing Analysis Center Benchmark Report: Framework and Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-05-23

    This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.

  8. Spherical Harmonic Solutions to the 3D Kobayashi Benchmark Suite

    International Nuclear Information System (INIS)

    Brown, P.N.; Chang, B.; Hanebutte, U.R.

    1999-01-01

    Spherical harmonic solutions of order 5, 9 and 21 on spatial grids containing up to 3.3 million cells are presented for the Kobayashi benchmark suite. This suite of three problems, with simple geometries of a pure absorber containing large void regions, was proposed by Professor Kobayashi at an OECD/NEA meeting in 1996. Each of the three problems contains a source, a void and a shield region. Problem 1 can best be described as a box-in-a-box problem, where a source region is surrounded by a square void region which itself is embedded in a square shield region. Problems 2 and 3 represent a shield with a void duct, Problem 2 having a straight duct and Problem 3 a dog-leg-shaped duct. A pure absorber and a 50% scattering case are considered for each of the three problems. The solutions have been obtained with Ardra, a scalable, parallel neutron transport code developed at Lawrence Livermore National Laboratory (LLNL). The Ardra code takes advantage of a two-level parallelization strategy, which combines message passing between processing nodes and thread-based parallelism amongst processors on each node. All calculations were performed on the IBM ASCI Blue-Pacific computer at LLNL.

  9. The institutionalization of benchmarking in the Danish construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard; Gottlieb, Stefan Christoffer

    and disseminated to the construction industry. The fourth chapter demonstrates how benchmarking was concretized into a benchmarking system and articulated to address several political focus areas for the construction industry. BEC accordingly became a political arena where many local perspectives and strategic...... interests had to be managed. The fifth chapter is about the operationalization of benchmarking and demonstrates how the concretizing and implementation of benchmarking gave rise to reactions from different actors with different and diverse interests in the benchmarking initiative. Political struggles...

  10. Toxicological benchmarks for wildlife: 1994 Revision

    International Nuclear Information System (INIS)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II.

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report

  11. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report.

  12. A benchmark server using high resolution protein structure data, and benchmark results for membrane helix predictions.

    Science.gov (United States)

    Rath, Emma M; Tessier, Dominique; Campbell, Alexander A; Lee, Hong Ching; Werner, Tim; Salam, Noeris K; Lee, Lawrence K; Church, W Bret

    2013-03-27

    Helical membrane proteins are vital for the interaction of cells with their environment. Predicting the location of membrane helices in protein amino acid sequences provides substantial understanding of their structure and function and identifies membrane proteins in sequenced genomes. Currently there is no comprehensive benchmark tool for evaluating prediction methods, and there is no publication comparing all available prediction tools. Current benchmark literature is outdated, as recently determined membrane protein structures are not included. Current literature is also limited to global assessments, as specialised benchmarks for predicting specific classes of membrane proteins were not previously carried out. We present a benchmark server at http://sydney.edu.au/pharmacy/sbio/software/TMH_benchmark.shtml that uses recent high resolution protein structural data to provide a comprehensive assessment of the accuracy of existing membrane helix prediction methods. The server further allows a user to compare uploaded predictions generated by novel methods, permitting the comparison of these novel methods against all existing methods compared by the server. Benchmark metrics include sensitivity and specificity of predictions for membrane helix location and orientation, and many others. The server allows for customised evaluations such as assessing prediction method performance for specific helical membrane protein subtypes. We report results for custom benchmarks which illustrate how the server may be used for specialised benchmarks. Which prediction method performs best depends on which measure is being benchmarked. The OCTOPUS membrane helix prediction method is consistently one of the highest performing methods across all measures in the benchmarks that we performed. The benchmark server allows general and specialised assessment of existing and novel membrane helix prediction methods. Users can employ this benchmark server to determine the most
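
    A sketch of per-residue sensitivity and specificity, two of the benchmark metrics named above, assuming a simple two-state encoding ('M' for membrane helix, '-' otherwise); the server's own metric definitions may differ in detail.

        def residue_confusion(pred, truth):
            """Per-residue confusion counts for membrane-helix prediction."""
            tp = sum(p == 'M' and t == 'M' for p, t in zip(pred, truth))
            fp = sum(p == 'M' and t == '-' for p, t in zip(pred, truth))
            fn = sum(p == '-' and t == 'M' for p, t in zip(pred, truth))
            tn = sum(p == '-' and t == '-' for p, t in zip(pred, truth))
            return tp, fp, fn, tn

        def sensitivity_specificity(pred, truth):
            tp, fp, fn, tn = residue_confusion(pred, truth)
            return tp / (tp + fn), tn / (tn + fp)

        # Hypothetical prediction vs. reference annotation:
        sens, spec = sensitivity_specificity("---MMMMMM---", "--MMMMMMM---")
        print(f"sensitivity={sens:.2f} specificity={spec:.2f}")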

  13. Integral benchmark test of JENDL-4.0 for U-233 systems with ICSBEP handbook

    International Nuclear Information System (INIS)

    Kuwagaki, Kazuki; Nagaya, Yasunobu

    2017-03-01

    The integral benchmark test of JENDL-4.0 for U-233 systems using the continuous-energy Monte Carlo code MVP was conducted. The previous benchmark test was performed only for U-233 thermal solution and fast metallic systems in the ICSBEP handbook. In this study, MVP input files were prepared for uninvestigated benchmark problems in the handbook, including compound thermal systems (mainly lattice systems), and an integral benchmark test was performed. The prediction accuracy of JENDL-4.0 was evaluated for effective multiplication factors (keff's) of the U-233 systems. As a result, a trend of underestimation was observed for all the categories of U-233 systems. In the benchmark test of ENDF/B-VII.1 for U-233 systems with the ICSBEP handbook, it is reported that a decreasing trend of calculated keff values in association with a parameter ATFF (Above-Thermal Fission Fraction) is observed. The ATFF values were also calculated in this benchmark test of JENDL-4.0 and the same trend as for ENDF/B-VII.1 was observed. A CD-ROM is attached as an appendix. (J.P.N.)

  14. Solution of the neutronics code dynamic benchmark by finite element method

    Science.gov (United States)

    Avvakumov, A. V.; Vabishchevich, P. N.; Vasilev, A. O.; Strizhov, V. F.

    2016-10-01

    The objective is to analyze the dynamic benchmark developed by Atomic Energy Research for the verification of best-estimate neutronics codes. The benchmark scenario includes asymmetrical ejection of a control rod in a water-type hexagonal reactor at hot zero power. A simple Doppler feedback mechanism assuming adiabatic fuel temperature heating is proposed. The finite element method on triangular calculation grids is used to solve the three-dimensional neutron kinetics problem. The software has been developed using the engineering and scientific calculation library FEniCS. The matrix spectral problem is solved using the scalable and flexible toolkit SLEPc. The solution accuracy of the dynamic benchmark is analyzed by refining the calculation grid and varying the degree of the finite elements.
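
    To illustrate the physics of the scenario (a rod-ejection transient turned around by Doppler feedback under adiabatic fuel heating), a zero-dimensional point-kinetics caricature follows; all parameter values are assumed, and it stands in for, rather than reproduces, the three-dimensional hexagonal-core benchmark.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Assumed one-delayed-group point-kinetics parameters:
        BETA, LAMBDA = 0.0065, 1e-4   # delayed fraction, prompt generation time [s]
        lam = 0.08                    # precursor decay constant [1/s]
        rho_ext = 0.9 * BETA          # step reactivity from the ejected rod
        alpha_d = -3e-5               # assumed Doppler coefficient [1/K]
        c_heat = 300.0                # adiabatic heating [K per unit relative power-s]

        def rhs(t, y):
            n, c, T = y                   # relative power, precursors, fuel temp. rise
            rho = rho_ext + alpha_d * T   # net reactivity with Doppler feedback
            dn = (rho - BETA) / LAMBDA * n + lam * c
            dc = BETA / LAMBDA * n - lam * c
            dT = c_heat * n               # adiabatic: all power heats the fuel
            return [dn, dc, dT]

        # Initial condition: critical steady state with precursor equilibrium.
        sol = solve_ivp(rhs, (0.0, 5.0), [1.0, BETA / (LAMBDA * lam), 0.0],
                        method="LSODA", rtol=1e-8)
        print(f"peak power = {sol.y[0].max():.1f} x nominal")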

  15. Benchmark testing and independent verification of the VS2DT computer code

    International Nuclear Information System (INIS)

    McCord, J.T.

    1994-11-01

    The finite difference flow and transport simulator VS2DT was benchmark tested against several other codes which solve the same equations (Richards equation for flow and the Advection-Dispersion equation for transport). The benchmark problems investigated transient two-dimensional flow in a heterogeneous soil profile with a localized water source at the ground surface. The VS2DT code performed as well as or better than all other codes when considering mass balance characteristics and computational speed. It was also rated highly relative to the other codes with regard to ease-of-use. Following the benchmark study, the code was verified against two analytical solutions, one for two-dimensional flow and one for two-dimensional transport. These independent verifications show reasonable agreement with the analytical solutions, and complement the one-dimensional verification problems published in the code's original documentation

  16. Shielding Integral Benchmark Archive and Database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, Bernadette Lugue [ORNL; Grove, Robert E [ORNL; Kodeli, I. [International Atomic Energy Agency (IAEA); Sartori, Enrico [ORNL; Gulliford, J. [OECD Nuclear Energy Agency

    2011-01-01

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  17. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    Full Text Available The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population, and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and of including "external" gardening demands is investigated. This includes the impacts (in isolation and in combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2) and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are presented throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn for the robustness of the proposed system.
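
    A toy version of the band-rating idea follows; the per-capita thresholds are invented for illustration, since the paper derives its own bands from technology and behaviour scenarios.

        def water_band(litres_per_person_per_day):
            """Map daily per-capita consumption to an illustrative benchmark band
            (thresholds assumed for illustration only)."""
            bands = [(80, "A"), (100, "B"), (120, "C"), (140, "D"), (160, "E")]
            for limit, band in bands:
                if litres_per_person_per_day <= limit:
                    return band
            return "F"

        print(water_band(95))   # 'B'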

  18. Toxicological benchmarks for wildlife: 1996 Revision

    International Nuclear Information System (INIS)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II.

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets

  19. Toxicological benchmarks for wildlife: 1996 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  20. Deploy Nalu/Kokkos algorithmic infrastructure with performance benchmarking.

    Energy Technology Data Exchange (ETDEWEB)

    Domino, Stefan P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ananthan, Shreyas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Knaus, Robert C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williams, Alan B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-29

    assembly timings faster than that observed on Haswell architecture. The computational workload of higher-order meshes, therefore, seems ideally suited for the many-core architecture and justifies further exploration of higher-order on NGP platforms. A Trilinos/Tpetra-based multi-threaded GMRES preconditioned by symmetric Gauss-Seidel (SGS) represents the core solver infrastructure for the low-Mach advection/diffusion implicit solves. The threaded solver stack has been tested on small problems on NREL's Peregrine system using the newly developed and deployed Kokkos-view/SIMD kernels. Efforts are underway to deploy the Tpetra-based solver stack on the NERSC Cori system to benchmark its performance at scale on KNL machines.
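
    As a small stand-in for the solver configuration named above (GMRES preconditioned by symmetric Gauss-Seidel), the sketch below builds the SGS preconditioner M = (D+L) D^-1 (D+U) and hands its inverse action to SciPy's GMRES on a 1D Laplacian test matrix; this illustrates the algorithm only and is unrelated to the Trilinos/Tpetra threaded stack.

        import numpy as np
        from scipy.sparse import csr_matrix, tril, triu, diags
        from scipy.sparse.linalg import gmres, LinearOperator, spsolve_triangular

        def sgs_preconditioner(A):
            """Symmetric Gauss-Seidel preconditioner M = (D+L) D^-1 (D+U);
            the returned operator applies M^-1 via two triangular solves."""
            D = diags(A.diagonal())
            L = tril(A, k=-1, format="csr")
            U = triu(A, k=1, format="csr")
            DL = (D + L).tocsr()
            DU = (D + U).tocsr()
            def apply(r):
                y = spsolve_triangular(DL, r, lower=True)    # (D+L) y = r
                return spsolve_triangular(DU, D @ y, lower=False)  # (D+U) z = D y
            return LinearOperator(A.shape, matvec=apply)

        # 1D Laplacian as a stand-in for an implicit diffusion solve:
        n = 200
        A = csr_matrix(diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)))
        b = np.ones(n)
        x, info = gmres(A, b, M=sgs_preconditioner(A))
        print("converged" if info == 0 else f"info={info}")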

  1. Solidification of a binary alloy: Finite-element, single-domain simulation and new benchmark solutions

    Science.gov (United States)

    Le Bars, Michael; Worster, M. Grae

    2006-07-01

    A finite-element simulation of binary alloy solidification based on a single-domain formulation is presented and tested. Resolution of phase change is first checked by comparison with the analytical results of Worster [M.G. Worster, Solidification of an alloy from a cooled boundary, J. Fluid Mech. 167 (1986) 481-501] for purely diffusive solidification. Fluid dynamical processes without phase change are then tested by comparison with previous numerical studies of thermal convection in a pure fluid [G. de Vahl Davis, Natural convection of air in a square cavity: a bench mark numerical solution, Int. J. Numer. Meth. Fluids 3 (1983) 249-264; D.A. Mayne, A.S. Usmani, M. Crapper, h-adaptive finite element solution of high Rayleigh number thermally driven cavity problem, Int. J. Numer. Meth. Heat Fluid Flow 10 (2000) 598-615; D.C. Wan, B.S.V. Patnaik, G.W. Wei, A new benchmark quality solution for the buoyancy driven cavity by discrete singular convolution, Numer. Heat Transf. 40 (2001) 199-228], in a porous medium with a constant porosity [G. Lauriat, V. Prasad, Non-darcian effects on natural convection in a vertical porous enclosure, Int. J. Heat Mass Transf. 32 (1989) 2135-2148; P. Nithiarasu, K.N. Seetharamu, T. Sundararajan, Natural convective heat transfer in an enclosure filled with fluid saturated variable porosity medium, Int. J. Heat Mass Transf. 40 (1997) 3955-3967] and in a mixed liquid-porous medium with a spatially variable porosity [P. Nithiarasu, K.N. Seetharamu, T. Sundararajan, Natural convective heat transfer in an enclosure filled with fluid saturated variable porosity medium, Int. J. Heat Mass Transf. 40 (1997) 3955-3967; N. Zabaras, D. Samanta, A stabilized volume-averaging finite element method for flow in porous media and binary alloy solidification processes, Int. J. Numer. Meth. Eng. 60 (2004) 1103-1138]. Finally, new benchmark solutions for simultaneous flow through both fluid and porous domains and for convective solidification processes are

  2. Comparison of NUPIPE-II and SAP IV predicted and experimentally determined dynamic structural responses for German Standard Problem 4a. [BWR

    Energy Technology Data Exchange (ETDEWEB)

    Dooley, W.T.; Mosby, W.R.

    1983-01-01

    This paper presents comparisons between two computer code predictions and experimental measurements of the structural response of a pipeline/check valve system subjected to loading from a loss-of-feedwater transient. The piping system that was modeled and instrumented for measurement was the focus of German Standard Problem 4a, part of the Heissdampfreaktor Safety Program being conducted in the Federal Republic of Germany. The availability of these experimental data offered EG&G a unique opportunity to evaluate two structural codes' predictions, to compare them with each other, and to compare their predictions with the actual measured values of acceleration and displacement. A thermal-hydraulic code, SOLA-LOOP, computed the hydraulic behavior of the system. The hydraulic forcing functions were calculated and supplied as input to the structural codes, NUPIPE-II and SAP IV. It was concluded that both computer programs provided comparable, realistic predictions of the piping system dynamic response to a blowdown load.

  3. CFD validation in OECD/NEA t-junction benchmark.

    Energy Technology Data Exchange (ETDEWEB)

    Obabko, A. V.; Fischer, P. F.; Tautges, T. J.; Karabasov, S.; Goloviznin, V. M.; Zaytsev, M. A.; Chudanov, V. V.; Pervichko, V. A.; Aksenova, A. E. (Mathematics and Computer Science); (Cambridge Univ.); (Moscow Institute of Nuclear Energy Safety)

    2011-08-23

    and benchmark data. The numerical scheme has very small numerical diffusion and is second-order accurate in space and first-order accurate in time. We compare and contrast simulation results for three computational fluid dynamics codes CABARET, Conv3D, and Nek5000 for the T-junction thermal striping problem that was the focus of a recent OECD/NEA blind benchmark. The corresponding codes utilize finite-difference implicit large eddy simulation (ILES), finite-volume LES on fully staggered grids, and an LES spectral element method (SEM), respectively. The simulation results are in good agreement with experimental data. We present results from a study of sensitivity to computational mesh and time integration interval, and discuss the next steps in the simulation of this problem.

  4. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Soil and Litter Invertebrates and Heterotrophic Process

    Energy Technology Data Exchange (ETDEWEB)

    Will, M.E.

    1994-01-01

    This report presents a standard method for deriving benchmarks for the purpose of "contaminant screening," performed by comparing measured ambient concentrations of chemicals with the benchmarks. The work was performed under Work Breakdown Structure 1.4.12.2.3.04.07.02 (Activity Data Sheet 8304). In addition, this report presents sets of data concerning the effects of chemicals in soil on invertebrates and soil microbial processes, benchmarks for chemicals potentially associated with United States Department of Energy sites, and literature describing the experiments from which data were drawn for benchmark derivation.
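
    The screening comparison itself reduces to a ratio test: divide each measured ambient concentration by its benchmark and flag quotients above one. A minimal sketch, with all chemicals and numbers invented for illustration:

    benchmarks = {"zinc": 50.0, "copper": 60.0, "lead": 500.0}   # mg/kg soil (hypothetical)
    measured = {"zinc": 120.0, "copper": 12.0, "lead": 610.0}    # mg/kg soil (hypothetical)

    for chem, conc in measured.items():
        hq = conc / benchmarks[chem]      # hazard quotient
        if hq > 1.0:
            print(f"{chem}: HQ = {hq:.1f} -> contaminant of potential concern")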

  5. Effectiveness of Cognitive-Behavioral Therapy for Adolescent Depression: A Benchmarking Investigation

    Science.gov (United States)

    Weersing, V. Robin; Iyengar, Satish; Kolko, David J.; Birmaher, Boris; Brent, David A.

    2006-01-01

    In this study, we examined the effectiveness of cognitive-behavioral therapy (CBT) for adolescent depression. Outcomes of 80 youth treated with CBT in an outpatient depression specialty clinic, the Services for Teens at Risk Center (STAR), were compared to a "gold standard" CBT research benchmark. On average, youths treated with CBT in STAR…

  6. LDBC Graphalytics: A Benchmark for Large-Scale Graph Analysis on Parallel and Distributed Platforms

    NARCIS (Netherlands)

    Iosup, Alexandru; Hegeman, Tim; Ngai, Wing Lung; Heldens, Stijn; Prat-Pérez, Arnau; Manhardt, Thomas; Chafi, Hassan; Capota, Mihai; Sundaram, Narayanan; Anderson, Michael J.; Tanase, Ilie Gabriel; Xia, Yinglong; Nai, Lifeng; Boncz, Peter A.

    2016-01-01

    In this paper we introduce LDBC Graphalytics, a new industrial-grade benchmark for graph analysis platforms. It consists of six deterministic algorithms, standard datasets, synthetic dataset generators, and reference output, which together enable the objective comparison of graph analysis platforms. Its test

  7. Benchmarking Reference Desk Service in Academic Health Science Libraries: A Preliminary Survey.

    Science.gov (United States)

    Robbins, Kathryn; Daniels, Kathleen

    2001-01-01

    This preliminary study was designed to benchmark patron perceptions of reference desk services at academic health science libraries, using a standard questionnaire. Responses were compared to determine the library that provided the highest-quality service overall and along five service dimensions. All libraries were rated very favorably, but none…

  8. A Critical Thinking Benchmark for a Department of Agricultural Education and Studies

    Science.gov (United States)

    Perry, Dustin K.; Retallick, Michael S.; Paulsen, Thomas H.

    2014-01-01

    Due to an ever changing world where technology seemingly provides endless answers, today's higher education students must master a new skill set reflecting an emphasis on critical thinking, problem solving, and communications. The purpose of this study was to establish a departmental benchmark for critical thinking abilities of students majoring…

  9. Verification, validation, and benchmarking report for GILDA: An infinite lattice diffusion theory calculation

    Energy Technology Data Exchange (ETDEWEB)

    Le, T.T.

    1991-09-01

    This report concerns the verification and validation of GILDA, a static, two-dimensional, infinite-lattice diffusion theory code. The verification was performed to determine if GILDA was applying the correct theory and that all the subroutines function as required. The validation was performed to determine the accuracy of the code by comparing its results with the integral transport solutions (GLASS) of benchmark problems. Since GLASS uses multigroup integral transport theory, a more accurate method than few-group diffusion theory, using solutions from GLASS as reference solutions to benchmark GILDA is acceptable. The eight benchmark problems used in this process are infinite mixed-lattice problems. The lattice is constructed by repeating an infinite number of identical super-cells (zones). Two types of super-cell were used for these benchmark problems: one consists of six Mark22 assemblies surrounding one control assembly, and the other consists of three Mark16 fuel assemblies and three Mark31 target assemblies surrounding a control assembly.

  10. Benchmarking European Gas Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter; Trinkner, Urs

    This is the final report for the pan-European efficiency benchmarking of gas transmission system operations commissioned by the Netherlands Authority for Consumers and Markets (ACM), Den Haag, on behalf of the Council of European Energy Regulators (CEER) under the supervision of the authors....

  11. Benchmarks for dynamic multi-objective optimisation

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    Full Text Available of dynamic multi-objective optimisation algorithms (DMOAs) are highlighted. In addition, new DMOO benchmark functions with complicated Pareto-optimal sets (POSs) and approaches to develop DMOOPs with either an isolated or deceptive Pareto-optimal front (POF...

  12. Benchmarking 2009: Trends in Education Philanthropy

    Science.gov (United States)

    Bearman, Jessica; Kilgore, Gin

    2009-01-01

    "Benchmarking 2009: Trends in Education Philanthropy" is Grantmakers for Education's (GFE) second annual study of grantmaking trends and priorities among members. As a national network dedicated to improving education outcomes through philanthropy, GFE members are mindful of their role in fostering greater knowledge in the field. They believe it's…

  13. Parton Distribution Benchmarking with LHC Data

    NARCIS (Netherlands)

    Ball, Richard D.; Carrazza, Stefano; Debbio, Luigi Del; Forte, Stefano; Gao, Jun; Hartland, Nathan; Huston, Joey; Nadolsky, Pavel; Rojo, Juan; Stump, Daniel; Thorne, Robert S.; Yuan, C. -P.

    2012-01-01

    We present a detailed comparison of the most recent sets of NNLO PDFs from the ABM, CT, HERAPDF, MSTW and NNPDF collaborations. We compare parton distributions at low and high scales and parton luminosities relevant for LHC phenomenology. We study the PDF dependence of LHC benchmark inclusive cross

  14. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias

    2016-09-16

    In this paper, we propose a new aerial video dataset and benchmark for low-altitude UAV target tracking, as well as a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as to generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx) to further research in the area of object tracking from UAVs. © Springer International Publishing AG 2016.
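
    Tracking accuracy on such fully annotated sequences is conventionally summarized by per-frame overlap with ground truth. The sketch below computes the usual intersection-over-union (IoU) and a success rate at one threshold, on toy boxes; it is not the benchmark's actual evaluation code.

    import numpy as np

    def iou(a, b):
        # Boxes given as (x, y, w, h)
        ix = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
        inter = ix * iy
        union = a[2] * a[3] + b[2] * b[3] - inter
        return inter / union if union > 0 else 0.0

    pred = [(10, 10, 40, 40), (12, 11, 40, 40)]   # toy tracker output
    gt = [(11, 10, 40, 40), (30, 30, 40, 40)]     # toy ground truth
    ious = np.array([iou(p, g) for p, g in zip(pred, gt)])
    print("success rate @ IoU > 0.5:", float((ious > 0.5).mean()))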

  15. Prague texture segmentation data generator and benchmark

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal

    2006-01-01

    Roč. 2006, č. 64 (2006), s. 67-68 ISSN 0926-4981 R&D Projects: GA MŠk(CZ) 1M0572; GA AV ČR(CZ) 1ET400750407; GA AV ČR IAA2075302 Institutional research plan: CEZ:AV0Z10750506 Keywords : image segmentation * texture * benchmark * web Subject RIV: BD - Theory of Information

  16. Operational benchmarking of Japanese and Danish hospitals

    DEFF Research Database (Denmark)

    Traberg, Andreas; Itoh, Kenji; Jacobsen, Peter

    2010-01-01

    This benchmarking model is designed as an integration of three organizational dimensions suited for the healthcare sector. The model incorporates posterior operational indicators, and evaluates performance upon their aggregation. The model is tested on seven cases from Japan and Denmark. Japanese...

  17. Benchmarks in Tacit Knowledge Skills Instruction

    DEFF Research Database (Denmark)

    Tackney, Charles T.; Strömgren, Ole; Sato, Toyoko

    2006-01-01

    experience more empowering of essential tacit knowledge skills than that found in educational institutions in other national settings. We specify the program forms and procedures for consensus-based governance and group work (as benchmarks) that demonstrably instruct undergraduates in the tacit skill...... dimensions of knowledge thought to be essential for success following graduation....

  18. Determination of Benchmarks Stability within Ahmadu Bello ...

    African Journals Online (AJOL)

    Heights of six geodetic benchmarks over a total distance of 8.6km at the Ahmadu Bello University (ABU), Zaria, Nigeria were recomputed and analysed using the least squares adjustment technique. The network computations were tied to two fixed primary reference pillars situated outside the campus. The two-tail Chi-square ...

  19. Benchmarking and performance management in health care

    OpenAIRE

    Buttigieg, Sandra; EHMA Annual Conference: Public Health Care: Who Pays, Who Provides?

    2012-01-01

    Current economic conditions challenge health care providers globally. Healthcare organizations need to deliver optimal financial, operational, and clinical performance to sustain quality of service delivery. Benchmarking is one of the most potent and under-utilized management tools available and an analytic tool to understand organizational performance. Additionally, it is required for financial survival and organizational excellence.

  20. How Many Letters Should Preschoolers in Public Programs Know? The Diagnostic Efficiency of Various Preschool Letter-Naming Benchmarks for Predicting First-Grade Literacy Achievement

    Science.gov (United States)

    Piasta, Shayne B.; Petscher, Yaacov; Justice, Laura M.

    2012-01-01

    Review of current federal and state standards indicates little consensus or empirical justification regarding appropriate goals, often referred to as benchmarks, for preschool letter-name learning. The present study investigated the diagnostic efficiency of various letter-naming benchmarks using a longitudinal database of 371 children who attended…

  1. Final PANTHER solution to the NEA-NSC 3-D PWR core transient benchmark. Uncontrolled withdrawal of control rods at zero power

    International Nuclear Information System (INIS)

    Kuijper, J.C.

    1996-10-01

    This report contains the final results of PANTHER calculations for the 'NEA-NSC 3-D PWR Core Transient Benchmark: Uncontrolled Withdrawal of Control Rods at Zero Power'. PANTHER was able to model the benchmark problems without modifications to the code. All the calculations were performed in 3-D. (orig.)

  2. Chapter 1: Standard Model processes

    OpenAIRE

    Becher, Thomas

    2017-01-01

    This chapter documents the production rates and typical distributions for a number of benchmark Standard Model processes, and discusses new dynamical phenomena arising at the highest energies available at this collider. We discuss the intrinsic physics interest in the measurement of these Standard Model processes, as well as their role as backgrounds for New Physics searches.

  3. Collected notes from the Benchmarks and Metrics Workshop

    Science.gov (United States)

    Drummond, Mark E.; Kaelbling, Leslie P.; Rosenschein, Stanley J.

    1991-01-01

    In recent years there has been a proliferation of proposals in the artificial intelligence (AI) literature for integrated agent architectures. Each architecture offers an approach to the general problem of constructing an integrated agent. Unfortunately, the ways in which one architecture might be considered better than another are not always clear. There has been a growing realization that many of the positive and negative aspects of an architecture become apparent only when experimental evaluation is performed and that to progress as a discipline, we must develop rigorous experimental methods. In addition to the intrinsic intellectual interest of experimentation, rigorous performance evaluation of systems is also a crucial practical concern to our research sponsors. DARPA, NASA, and AFOSR (among others) are actively searching for better ways of experimentally evaluating alternative approaches to building intelligent agents. One tool for experimental evaluation involves testing systems on benchmark tasks in order to assess their relative performance. As part of a joint DARPA and NASA funded project, NASA-Ames and Teleos Research are carrying out a research effort to establish a set of benchmark tasks and evaluation metrics by which the performance of agent architectures may be determined. As part of this project, we held a workshop on Benchmarks and Metrics at the NASA Ames Research Center on June 25, 1990. The objective of the workshop was to foster early discussion on this important topic. We did not achieve a consensus, nor did we expect to. Collected here is some of the information that was exchanged at the workshop. Given here is an outline of the workshop, a list of the participants, notes taken on the white-board during open discussions, position papers/notes from some participants, and copies of slides used in the presentations.

  4. Benchmarking transaction and analytical processing systems the creation of a mixed workload benchmark and its application

    CERN Document Server

    Bog, Anja

    2014-01-01

    This book introduces a new benchmark for hybrid database systems, gauging the effect of adding OLAP to an OLTP workload and analyzing the impact of commonly used optimizations in historically separate OLTP and OLAP domains in mixed-workload scenarios.

  5. Benchmark calculations of the solution-fuel criticality experiments by SRAC code system

    International Nuclear Information System (INIS)

    Senuma, Ichiro; Miyoshi, Yoshinori; Suzaki, Takenori; Kobayashi, Iwao

    1984-06-01

    Benchmark calculations were performed using the newly developed SRAC (Standard Reactor Analysis Code) system and a nuclear data library based upon JENDL-2. The 34 benchmarks include a variety of compositions, concentrations and configurations of Pu homogeneous and U/Pu homogeneous systems (mainly nitrate), and also include UO2/PuO2 rods in fissile solution: a simplified model of the dissolver process of the fuel reprocessing plant. The calculation results show good agreement with the Monte Carlo method. This code-evaluation work has been done as part of the detailed design of CSEF (Critical Safety Experimental Facility), which is now in progress. (author)

  6. Review of microscopic integral cross section data in fundamental reactor dosimetry benchmark neutron fields

    International Nuclear Information System (INIS)

    Fabry, A.; McElroy, W.N.; Kellogg, L.S.; Lippincott, E.P.; Grundl, J.A.; Gilliam, D.M.; Hansen, G.E.

    1976-01-01

    This paper is intended to review and critically discuss microscopic integral cross section measurement and calculation data for fundamental reactor dosimetry benchmark neutron fields. Specifically the review covers the following fundamental benchmarks: the spontaneous californium-252 fission neutron spectrum standard field; the thermal-neutron induced uranium-235 fission neutron spectrum standard field; the (secondary) intermediate-energy standard neutron field at the center of the Mol-ΣΣ, NISUS, and ITN-ΣΣ facilities; the reference neutron field at the center of the Coupled Fast Reactor Measurement Facility; the reference neutron field at the center of the 10% enriched uranium metal, cylindrical, fast critical; the (primary) Intermediate-Energy Standard Neutron Field

  7. Review of microscopic integral cross section data in fundamental reactor dosimetry benchmark neutron fields

    International Nuclear Information System (INIS)

    Fabry, A.; McElroy, W.N.; Kellogg, L.S.; Lippincott, E.P.; Grundl, J.A.; Gilliam, D.M.; Hansen, G.E.

    1976-10-01

    The paper is intended to review and critically discuss microscopic integral cross section measurement and calculation data for fundamental reactor dosimetry benchmark neutron fields. Specifically the review covers the following fundamental benchmarks: (1) the spontaneous californium-252 fission neutron spectrum standard field; (2) the thermal-neutron induced uranium-235 fission neutron spectrum standard field; (3) the (secondary) intermediate-energy standard neutron field at the center of the Mol-ΣΣ, NISUS, and ITN--ΣΣ facilities; (4) the reference neutron field at the center of the Coupled Fast Reactor Measurement Facility (CFRMF); (5) the reference neutron field at the center of the 10 percent enriched uranium metal, cylindrical, fast critical; and (6) the (primary) Intermediate-Energy Standard Neutron Field

  8. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    Full Text Available The article analyses the evolution of benchmarking and the possibilities of its application in the telecommunication sphere. It studies the essence of benchmarking by generalising the approaches of different scientists to the definition of this notion. In order to improve the activity of telecommunication operators, the article identifies the benchmarking technology and the main factors that determine the success of an operator in the modern market economy, as well as the mechanism and component stages of carrying out benchmarking by a telecommunication operator. It analyses the telecommunication market, identifying the dynamics of its development and the tendencies of change in the composition of telecommunication operators and providers. Having generalised the existing experience of benchmarking application, the article identifies the main types of benchmarking of telecommunication operators by the following features: by the level of conduct (branch, inter-branch and international benchmarking); by relation to participation in the conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  9. ZZ BWRSB-FORSMARKS, Stability Benchmark Data from BWR FORSMARKS 1 and 2

    International Nuclear Information System (INIS)

    Verdu, G.; Palomo, M.J.; Escriva, A.; Ginestar, D.; Lansaker, Per

    2002-01-01

    to study the variability of the DR and oscillation frequency with the measurement time duration. There are two time series to analyse. Each one has about 14000 points, and will be divided into blocks of approximately 4000 and 2000 points. The results for the short time series will be compared with the original long series results. Case 3: APRM data for this case contains more than one natural frequency of the core. The data also contains peaks of other frequencies due to the actuation of the pressure controller. One case has two frequencies close to each other. Cases with more than one natural frequency make the analysis much more difficult. This case contains five measurements contaminated with influences from the plant control systems. In this case, the time series are badly behaved, and consequently the standard stability parameters are not clearly defined. It could then be interesting to analyse a set of the dominant poles of the transfer function obtained from the time series. Case 4: This case contains a mixture between a global oscillation mode and a regional (half core) oscillation. The case consists of APRM and LPRM (Local PRM) signals coming from one test. Case 5: This case is focused on the analysis of two APRM signals obtained during a small plant transient, which resulted in badly behaved signals. In this case, it is important to analyse the first dominant poles of the transfer function obtained from the time series. Note that this is a non-stationary case and the autoregressive methods have limited validity. Case 6: This test case shows local (channel) oscillations. The data contains APRM and LPRM signals from two tests that were performed close to each other, both in time and in the operating conditions. 2 - Restrictions on the complexity of the problem: The use of this data is limited to the NSC Stability Benchmark, and any other use or publication of this information should be previously approved by Forsmarks Kraftgrupp AB
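
    A hedged sketch of the autoregressive route to the decay ratio (DR) and oscillation frequency referred to throughout these cases, run on a synthetic damped oscillation; the AR order, sample step, and signal are invented, and real benchmark analyses are considerably more careful.

    import numpy as np

    dt, n, p = 0.08, 4000, 12          # sample step [s], record length, AR order
    t = np.arange(n) * dt
    x = np.exp(-0.05 * t) * np.sin(2 * np.pi * 0.5 * t) + 0.01 * np.random.randn(n)

    # Least-squares AR(p) fit: x[k] = a1*x[k-1] + ... + ap*x[k-p]
    X = np.column_stack([x[p - i - 1:n - i - 1] for i in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)

    roots = np.roots(np.r_[1.0, -a])                     # poles of the AR model
    z = max((r for r in roots if r.imag > 0), key=abs)   # dominant oscillatory pole
    s = np.log(z) / dt                                   # continuous-time pole
    dr = np.exp(2 * np.pi * s.real / abs(s.imag))        # DR over one oscillation period
    print(f"DR = {dr:.2f}, frequency = {abs(s.imag) / (2 * np.pi):.2f} Hz")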

  10. A proposal for benchmark tests for underactuated or compliant hands

    Directory of Open Access Journals (Sweden)

    G. A. Kragten

    2010-12-01

    Full Text Available There is a lack of agreement in the literature as to what exactly quantifies the performance of underactuated hands. This paper proposes two benchmark tests to measure the ability of underactuated hands to grasp different objects and the ability to hold the objects when force disturbances are applied. The first test determines the smallest and largest cylindrical objects which can be successfully grasped in an enveloping grasp or in a pinch grasp. The second test provides the maximal allowable force which can be applied to a grasped object without losing it. A setup was constructed consisting of standard components. Exemplary tests were applied to the Delft Hand 2. The proposed benchmark tests are suitable for quantifying the performance of pick-and-place operations with underactuated hands. The results of the tests can be applied to evaluate, compare, and improve the performance of robotic hands.

    This paper was presented at the IFToMM/ASME International Workshop on Underactuated Grasping (UG2010), 19 August 2010, Montréal, Canada.

  11. Integration of oncology and palliative care: setting a benchmark.

    Science.gov (United States)

    Vayne-Bossert, P; Richard, E; Good, P; Sullivan, K; Hardy, J R

    2017-10-01

    Integration of oncology and palliative care (PC) should be the standard model of care for patients with advanced cancer. An expert panel developed criteria that constitute integration. This study determined whether the PC service within this Health Service, which is considered to be fully "integrated", could be benchmarked against these criteria. A survey was undertaken to determine the perceived level of integration of oncology and palliative care by all health care professionals (HCPs) within our cancer centre. An objective determination of integration was obtained from chart reviews of deceased patients. Integration was defined as >70% of all respondents answered "agree" or "strongly agree" to each indicator and >70% of patient charts supported each criteria. Thirty-four HCPs participated in the survey (response rate 69%). Over 90% were aware of the outpatient PC clinic, interdisciplinary and consultation team, PC senior leadership, and the acceptance of concurrent anticancer therapy. None of the other criteria met the 70% agreement mark but many respondents lacked the necessary knowledge to respond. The chart review included 67 patients, 92% of whom were seen by the PC team prior to death. The median time from referral to death was 103 days (range 0-1347). The level of agreement across all criteria was below our predefined definition of integration. The integration criteria relating to service delivery are medically focused and do not lend themselves to interdisciplinary review. The objective criteria can be audited and serve both as a benchmark and a basis for improvement activities.

  12. Benchmarking Terrestrial Ecosystem Models in the South Central US

    Science.gov (United States)

    Kc, M.; Winton, K.; Langston, M. A.; Luo, Y.

    2016-12-01

    Ecosystem services and products are the foundation of sustainability for the regional and global economy, since we depend directly or indirectly on ecosystem services such as food, livestock, water, air, and wildlife. It has been increasingly recognized that, for sustainability, conservation problems need to be addressed in the context of entire ecosystems. This approach is even more vital in the 21st century, with a rapidly increasing human population and rapid changes in the global environment. This study was conducted to assess the state of the science of ecosystem models in the South-Central region of the US. The ecosystem models were benchmarked using the ILAMB diagnostic package, developed by the International Land Model Benchmarking (ILAMB) project, on four main categories: Ecosystem and Carbon Cycle, Hydrology Cycle, Radiation and Energy Cycle, and Climate Forcings. A cumulative assessment was generated from seven weighted skill assessment metrics for the ecosystem models. This synthesis of the current state of the science of ecosystem modeling in the South-Central region of the US will be highly useful for coupling these models with climate, agronomic, hydrologic, economic or management models to better represent ecosystem dynamics as affected by climate change and human activities, and hence to gain more reliable predictions of future ecosystem functions and services in the region. Better understanding of such processes will increase our ability to predict ecosystem responses and feedbacks to environmental and human-induced change in the region, so that decision makers can make informed management decisions about the ecosystem.
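
    The aggregation step described here reduces to a weighted mean of per-category skill scores. The sketch below uses the four categories from the text with invented scores and weights; ILAMB's actual scoring system is more elaborate.

    scores = {"ecosystem_carbon": 0.71, "hydrology": 0.64,
              "radiation_energy": 0.80, "climate_forcings": 0.77}
    weights = {"ecosystem_carbon": 2.0, "hydrology": 1.0,
               "radiation_energy": 1.0, "climate_forcings": 1.0}

    overall = sum(scores[k] * weights[k] for k in scores) / sum(weights.values())
    print(f"overall benchmark score = {overall:.2f}")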

  13. Benchmark study of TRIPOLI-4 through experiment and MCNP codes

    Energy Technology Data Exchange (ETDEWEB)

    Michel, M. [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France); Coulon, R. [Canberra France, F-78182 Saint Quentin en Yvelines (France); Normand, S. [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France); Huot, N.; Petit, O. [CEA, DEN DANS, SERMA, F-91191 Gif-sur-Yvette (France)

    2011-07-01

    Reliability of simulation results is essential in nuclear physics. Although MCNP5 and MCNPX are the most widely used 3D Monte Carlo radiation transport codes, alternative Monte Carlo simulation tools exist to simulate the interactions of neutral and charged particles with matter. Benchmarks are therefore required in order to validate these simulation codes. For instance, TRIPOLI-4.7, developed at the French Alternative Energies and Atomic Energy Commission for neutron and photon transport, now also provides the user with a full-featured electron-photon electromagnetic shower. Whereas the reliability of TRIPOLI-4.7 for neutron and photon transport has already been validated, the new development regarding electron-photon interactions with matter needs additional validation benchmarks. We will thus demonstrate how accurately TRIPOLI-4's 'deposited spectrum' tally can simulate gamma spectrometry problems, compared to MCNP's 'F8' tally. The experimental setup is based on an HPGe detector measuring the decay spectrum of a {sup 152}Eu source. These results are then compared with those given by the MCNPX 2.6d and TRIPOLI-4 codes. This paper deals with both the experimental aspect and the simulation. We will demonstrate that TRIPOLI-4 is a potential alternative to both MCNPX and MCNP5 for gamma-electron interaction simulation. (authors)

  14. JNC results of BN-600 benchmark calculation (phase 4)

    International Nuclear Information System (INIS)

    Ishikawa, Makoto

    2003-01-01

    The present work gives the results of JNC, Japan, for Phase 4 of the BN-600 core benchmark problem (Hex-Z fully MOX-fuelled core model) organized by the IAEA. The benchmark specification is based on the RCM report of the IAEA CRP on 'Updated Codes and Methods to Reduce the Calculational Uncertainties of LMFR Reactivity Effects, Action 3.12' (calculations for the BN-600 fully MOX-fuelled core for subsequent transient analyses). The JENDL-3.2 nuclear data library was used for calculating 70-group ABBN-type group constants. Two cell models were applied for fuel assembly and control rod calculations: a homogeneous and a heterogeneous (cylindrical supercell) model. The basic diffusion calculation was a three-dimensional Hex-Z, 18-group model (CITATION code). Transport calculations were 18-group, three-dimensional (NSHEX code), based on the Sn-transport nodal method developed at JNC. The thermal power generated per fission was based on Sher's data corrected on the basis of the ENDF/B-IV data library. Calculation results are presented in tables for intercomparison.

  15. The Global Benchmarking as a Method of Countering the Intellectual Migration in Ukraine

    Directory of Open Access Journals (Sweden)

    Striy Lуbov A.

    2017-05-01

    Full Text Available The publication is aimed at studying global benchmarking as a method of countering intellectual migration in Ukraine. The article explores the process of intellectual migration in Ukraine; analyzes the current status of the country in light of the crisis and the problems that have arisen; provides statistical data on the migration process and determines a method of countering it; considers types of benchmarking; analyzes the benchmarking method as a way of achieving the objective; and determines the benefits to be derived from this method, as well as «bottlenecks» in the State regulation of migratory flows, not only to call attention to them but also to prompt corrective action.

  16. Benchmark of Space Charge Simulations and Comparison with Experimental Results for High Intensity, Low Energy Accelerators

    CERN Document Server

    Cousineau, Sarah M

    2005-01-01

    Space charge effects are a major contributor to beam halo and emittance growth leading to beam loss in high intensity, low energy accelerators. As future accelerators strive towards unprecedented levels of beam intensity and beam loss control, a more comprehensive understanding of space charge effects is required. A wealth of simulation tools have been developed for modeling beams in linacs and rings, and with the growing availability of high-speed computing systems, computationally expensive problems that were inconceivable a decade ago are now being handled with relative ease. This has opened the field for realistic simulations of space charge effects, including detailed benchmarks with experimental data. A great deal of effort is being focused in this direction, and several recent benchmark studies have produced remarkably successful results. This paper reviews the achievements in space charge benchmarking in the last few years, and discusses the challenges that remain.

  17. Synthetic graph generation for data-intensive HPC benchmarking: Scalability, analysis and real-world application

    Energy Technology Data Exchange (ETDEWEB)

    Powers, Sarah S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lothian, Joshua [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2014-12-01

    The benchmarking effort within the Extreme Scale Systems Center at Oak Ridge National Laboratory seeks to provide High Performance Computing benchmarks and test suites of interest to the DoD sponsor. The work described in this report is a part of the effort focusing on graph generation. A previously developed benchmark, SystemBurn, allows the emulation of a broad spectrum of application behavior profiles within a single framework. To complement this effort, similar capabilities are desired for graph-centric problems. This report describes an in-depth analysis of the generated synthetic graphs' properties at a variety of scales using different generator implementations and examines their applicability to replicating real-world datasets.
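
    As a hint of what synthetic graph generation involves, the sketch below draws edges with the recursive quadrant-picking rule of R-MAT-style generators and inspects the resulting degrees. The quadrant probabilities and scale are arbitrary, and the generator implementations analysed in the report are considerably more elaborate.

    import random
    from collections import Counter

    def rmat_edge(scale, p=(0.57, 0.19, 0.19, 0.05)):
        # Pick one of four quadrants per bit of the (src, dst) vertex ids
        a, b, c, _d = p
        src = dst = 0
        for _ in range(scale):
            r = random.random()
            bs, bd = (0, 0) if r < a else (0, 1) if r < a + b else \
                     (1, 0) if r < a + b + c else (1, 1)
            src, dst = (src << 1) | bs, (dst << 1) | bd
        return src, dst

    scale, n_edges = 10, 8 * 2**10     # 2**scale vertices, 8x as many edges
    edges = [rmat_edge(scale) for _ in range(n_edges)]
    deg = Counter(v for e in edges for v in e)
    print("vertices touched:", len(deg), "max degree:", max(deg.values()))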

  18. Revaluering benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    Over the past decade, benchmarking has increasingly gained foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly that what may seem valuable is actually deterring researchers and practitioners from studying...... this perspective develops more thorough knowledge about benchmarking and challenges the current dominating rationales. Hereby, it is argued that benchmarking is not a neutral practice. On the contrary, it is highly influenced by organizational ambitions and strategies, with the potential to transform......

  19. Variation In Accountable Care Organization Spending And Sensitivity To Risk Adjustment: Implications For Benchmarking.

    Science.gov (United States)

    Rose, Sherri; Zaslavsky, Alan M; McWilliams, J Michael

    2016-03-01

    Spending targets (or benchmarks) for accountable care organizations (ACOs) participating in the Medicare Shared Savings Program must be set carefully to encourage program participation while achieving fiscal goals and minimizing unintended consequences, such as penalizing ACOs for serving sicker patients. Recently proposed regulatory changes include measures to make benchmarks more similar for ACOs in the same area with different historical spending levels. We found that ACOs vary widely in how their spending levels compare with those of other local providers after standard case-mix adjustments. Additionally adjusting for survey measures of patient health meaningfully reduced the variation in differences between ACO spending and local average fee-for-service spending, but substantial variation remained, which suggests that differences in care efficiency between ACOs and local non-ACO providers vary widely. Accordingly, measures to equilibrate benchmarks between high- and low-spending ACOs--such as setting benchmarks to risk-adjusted average fee-for-service spending in an area--should be implemented gradually to maintain participation by ACOs with high spending. Use of survey information also could help mitigate perverse incentives for risk selection and upcoding and limit unintended consequences of new benchmarking methodologies for ACOs serving sicker patients. Project HOPE—The People-to-People Health Foundation, Inc.

  20. Principles for Developing Benchmark Criteria for Staff Training in Responsible Gambling.

    Science.gov (United States)

    Oehler, Stefan; Banzer, Raphaela; Gruenerbl, Agnes; Malischnig, Doris; Griffiths, Mark D; Haring, Christian

    2017-03-01

    One approach to minimizing the negative consequences of excessive gambling is staff training to reduce the rate of development of new cases of harm or disorder among customers. The primary goal of the present study was to assess suitable benchmark criteria for the training of gambling employees at casinos and lottery retailers. The study utilised the Delphi Method, a survey with one qualitative and two quantitative phases. A total of 21 invited international experts in the responsible gambling field participated in all three phases. A total of 75 performance indicators were outlined and assigned to six categories: (1) criteria of content, (2) modelling, (3) qualification of trainer, (4) framework conditions, (5) sustainability and (6) statistical indicators. Nine of the 75 indicators were rated as very important by 90 % or more of the experts. Unanimous support for importance was given to indicators such as (1) comprehensibility and (2) concrete action-guidance for dealing with problem gamblers. Additionally, the study examined the implementation of benchmarking, when it should be conducted, and who should be responsible. Results indicated that benchmarking should be conducted regularly, every 1-2 years, and that one institution should be clearly defined and primarily responsible for benchmarking. The results of the present study provide the basis for developing benchmark criteria for staff training in responsible gambling.

  1. Benchmarking in the globalised world and its impact on South ...

    African Journals Online (AJOL)

    In order to understand the potential impact of international benchmarking on South African institutions, it is important to explore the future role of benchmarking on the international level. In this regard, examples of transnational benchmarking activities will be considered. As a result of the involvement of South African ...

  2. Benchmarking a signpost to excellence in quality and productivity

    CERN Document Server

    Karlof, Bengt

    1993-01-01

    According to the authors, benchmarking exerts a powerful leverage effect on an organization, and they consider some of the factors which justify their claim. The book describes how to implement benchmarking and exactly what to benchmark, and explains benchlearning, which integrates education, leadership development and organizational dynamics with the actual work being done, and shows how to make it work more efficiently in terms of quality and productivity.

  3. MTCB: A Multi-Tenant Customizable database Benchmark

    NARCIS (Netherlands)

    van der Zijden, WIm; Hiemstra, Djoerd; van Keulen, Maurice

    2017-01-01

    We argue that there is a need for Multi-Tenant Customizable OLTP systems. Such systems need a Multi-Tenant Customizable Database (MTC-DB) as a backing. To stimulate the development of such databases, we propose the benchmark MTCB. Benchmarks for OLTP exist and multi-tenant benchmarks exist, but no

  4. BENCHMARKING – BETWEEN TRADITIONAL & MODERN BUSINESS ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Mihaela Ungureanu

    2011-09-01

    Full Text Available The concept of benchmarking requires a continuous process of performance improvement in different organizations in order to gain superiority over those perceived as market leaders. This superiority can always be questioned, its relativity originating in the rapid evolution of the economic environment. The approach supports innovation in relation to traditional methods and is based on the will of those managers who want to determine limits and seek excellence. The end of the twentieth century was the period of broad adoption of benchmarking in various areas and of its transformation from a simple quantitative analysis tool into a source of information on the performance and quality of goods and services.

  5. Benchmark and Continuous Improvement of Performance

    Directory of Open Access Journals (Sweden)

    Alina Alecse Stanciu

    2017-12-01

    Full Text Available The present economic environment challenges us to perform, to think and re-think our personal strategies in accordance with our entities' strategies, whether we are simply employees or entrepreneurs. It is an environment characterised by Volatility, Uncertainty, Complexity and Ambiguity - a VUCA world in which entities must fight for the position gained in the market, disrupt new markets and new economies, and develop their client portfolio, with performance as the final goal. The pressure of the driving forces known as the 2030 Megatrends - Globalization 2.0, the Environmental Crisis and the Scarcity of Resources, Individualism and Value Pluralism, and Demographic Change - intensifies this challenge. This paper examines whether using benchmarking is an opportunity to increase the competitiveness of Romanian SMEs, and the results show that benchmarking is a powerful instrument, combining reduced negative impact on the environment with a positive impact on the economy and society.

  6. Direct data access protocols benchmarking on DPM

    CERN Document Server

    Furano, Fabrizio; Keeble, Oliver; Mancinelli, Valentina

    2015-01-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in the last years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring infor...

  7. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.
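
    For orientation, STREAM's triad kernel measures sustained memory bandwidth for a(i) = b(i) + q*c(i): bytes moved divided by elapsed time. A rough numpy rendition of that arithmetic follows; it is no substitute for the compiled benchmark.

    import time
    import numpy as np

    n = 10_000_000
    b, c, q = np.ones(n), np.ones(n), 3.0

    t0 = time.perf_counter()
    a = b + q * c                      # the triad kernel
    dt = time.perf_counter() - t0

    bytes_moved = 3 * n * 8            # read b, read c, write a (float64)
    print(f"triad bandwidth ~ {bytes_moved / dt / 1e9:.1f} GB/s")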

  8. IOP Physics benchmarks of the VELO upgrade

    CERN Document Server

    AUTHOR|(CDS)2068636

    2017-01-01

    The LHCb Experiment at the LHC is successfully performing precision measurements primarily in the area of flavour physics. The collaboration is preparing an upgrade that will start taking data in 2021 with a trigger-less readout at five times the current luminosity. The vertex locator has been crucial in the success of the experiment and will continue to be so for the upgrade. It will be replaced by a hybrid pixel detector and this paper discusses the performance benchmarks of the upgraded detector. Despite the challenging experimental environment, the vertex locator will maintain or improve upon its benchmark figures compared to the current detector. Finally the long term plans for LHCb, beyond those of the upgrade currently in preparation, are discussed.

  9. Toxicological benchmarks for wildlife. Environmental Restoration Program

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

    This report presents toxicological benchmarks for assessment of effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.

  10. Benchmark On Sensitivity Calculation (Phase III)

    Energy Technology Data Exchange (ETDEWEB)

    Ivanova, Tatiana [IRSN; Laville, Cedric [IRSN; Dyrda, James [Atomic Weapons Establishment; Mennerdahl, Dennis [E. Mennerdahl Systems; Golovko, Yury [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia; Raskach, Kirill [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia; Tsiboulia, Anatoly [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia; Lee, Gil Soo [Korea Institute of Nuclear Safety (KINS); Woo, Sweng-Woong [Korea Institute of Nuclear Safety (KINS); Bidaud, Adrien [Labratoire de Physique Subatomique et de Cosmolo-gie (LPSC); Patel, Amrit [NRC; Bledsoe, Keith C [ORNL; Rearden, Bradley T [ORNL; Gulliford, J. [OECD Nuclear Energy Agency

    2012-01-01

    The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.
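
    The quantity under test is the relative sensitivity coefficient S = (dk/k)/(dσ/σ). As a toy cross-check, the sketch below evaluates it by central differences for a one-group infinite-medium model k_inf = νΣ_f/Σ_a, where the exact sensitivity to the absorption cross section is -1; the participants' tools compute such coefficients by perturbation theory on full transport models.

    nu_sigma_f, sigma_a = 0.0065, 0.0060   # macroscopic data (arbitrary values)

    def k_inf(nsf, sa):
        return nsf / sa

    eps = 1e-4
    k0 = k_inf(nu_sigma_f, sigma_a)
    kp = k_inf(nu_sigma_f, sigma_a * (1 + eps))
    km = k_inf(nu_sigma_f, sigma_a * (1 - eps))
    S = ((kp - km) / (2 * eps)) / k0       # relative sensitivity to Sigma_a
    print(f"S(k, Sigma_a) = {S:.3f}   (exactly -1 for this model)")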

  11. SINBAD: Shielding integral benchmark archive and database

    International Nuclear Information System (INIS)

    Hunter, H.T.; Ingersoll, D.T.; Roussin, R.W.

    1996-01-01

    SINBAD is a new electronic database developed to store a variety of radiation shielding benchmark data so that users can easily retrieve and incorporate the data into their calculations. SINBAD is an excellent data source for users who require the quality assurance necessary in developing cross-section libraries or radiation transport codes. The future needs of the scientific community are best served by the electronic database format of SINBAD and its user-friendly interface, combined with its data accuracy and integrity

  12. Benchmarking of Remote Sensing Segmentation Methods

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal; Scarpa, G.; Gaetano, R.

    2015-01-01

    Roč. 8, č. 5 (2015), s. 2240-2248 ISSN 1939-1404 R&D Projects: GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords : benchmark * remote sensing segmentation * unsupervised segmentation * supervised segmentation Subject RIV: BD - Theory of Information Impact factor: 2.145, year: 2015 http://library.utia.cas.cz/separaty/2015/RO/haindl-0445995.pdf

  13. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks'...... debt. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping state during the computation. We ran the system with two servers doing the secure computation using a database with information on about 2500 users. Answers arrived in about 25 seconds.
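
    The benchmarking model itself is a linear program. As a rough, in-the-clear illustration of benchmarking-as-LP, the sketch below computes a plain input-oriented DEA-style efficiency score per unit; this may differ from the paper's exact formulation, and the data are invented. The cited system would evaluate such a model under secure multiparty computation instead.

    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[4.0, 6.0, 5.0, 9.0]])   # one input, four units (toy data)
    Y = np.array([[8.0, 9.0, 5.0, 9.0]])   # one output
    m, n = X.shape
    s = Y.shape[0]

    def efficiency(o):
        # min theta  s.t.  X@lam <= theta*x_o,  Y@lam >= y_o,  lam >= 0
        c = np.r_[1.0, np.zeros(n)]
        A_in = np.hstack([-X[:, [o]], X])
        A_out = np.hstack([np.zeros((s, 1)), -Y])
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[:, o]],
                      bounds=[(0, None)] * (n + 1))
        return res.fun

    print([round(efficiency(o), 3) for o in range(n)])   # 1.0 marks efficient units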

  14. A PWR Thorium Pin Cell Burnup Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.

    2000-05-01

    As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium-fueled benchmark comparison was made in this study between state-of-the-art codes, MOCUP (MCNP4B + ORIGEN2) and CASMO-4, for burnup calculations. The MOCUP runs were done individually at MIT and INEEL, using the same model but with some differences in techniques and cross section libraries. Eigenvalues and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2% and the average absolute difference less than 1%. The isotope concentration comparisons are better than those reported for a set of MOX fuel benchmarks and comparable to those for a set of uranium fuel benchmarks in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code-to-code differences are analyzed and discussed.

  15. Equilibrium Partitioning Sediment Benchmarks (ESBs) for the ...

    Science.gov (United States)

    This document describes procedures to determine the concentrations of nonionic organic chemicals in sediment interstitial waters. In previous ESB documents, the general equilibrium partitioning (EqP) approach was chosen for the derivation of sediment benchmarks because it accounts for the varying bioavailability of chemicals in different sediments and allows for the incorporation of the appropriate biological effects concentration. This provides for the derivation of benchmarks that are causally linked to the specific chemical, applicable across sediments, and appropriately protective of benthic organisms.  This equilibrium partitioning sediment benchmark (ESB) document was prepared by scientists from the Atlantic Ecology Division, Mid-Continent Ecology Division, and Western Ecology Division, the Office of Water, and private consultants. The document describes procedures to determine the interstitial water concentrations of nonionic organic chemicals in contaminated sediments. Based on these concentrations, guidance is provided on the derivation of toxic units to assess whether the sediments are likely to cause adverse effects to benthic organisms. The equilibrium partitioning (EqP) approach was chosen because it is based on the concentrations of chemical(s) that are known to be harmful and bioavailable in the environment.  This document, and five others published over the last nine years, will be useful for the Program Offices, including Superfund, a
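
    The EqP arithmetic at the core of the document is compact: the interstitial-water concentration is estimated as C_iw = C_sed/(f_oc*K_oc), and a toxic unit is C_iw divided by the water-only effect concentration. A sketch with invented numbers:

    c_sed = 12.0     # chemical in bulk sediment [mg/kg dry wt] (hypothetical)
    f_oc = 0.02      # fraction organic carbon in the sediment (hypothetical)
    k_oc = 3.0e4     # organic carbon-water partition coefficient [L/kg] (hypothetical)
    effect = 0.025   # water-only effect concentration [mg/L] (hypothetical)

    c_iw = c_sed / (f_oc * k_oc)   # interstitial-water concentration [mg/L]
    tu = c_iw / effect             # toxic units; TU > 1 suggests likely adverse effects
    print(f"C_iw = {c_iw:.3f} mg/L, TU = {tu:.2f}")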

  16. Status on benchmark testing of CENDL-3

    CERN Document Server

    Liu Ping

    2002-01-01

    CENDL-3, the newest version of the China Evaluated Nuclear Data Library, has been finished and recently distributed for benchmark analysis. The processing was carried out using the NJOY nuclear data processing code system. The calculations and analysis of benchmarks were done with the Monte Carlo code MCNP and the reactor lattice code WIMSD5A. The calculated results were compared with the experimental results and with results based on ENDF/B-VI. In most thermal and fast uranium criticality benchmarks, the calculated k{sub eff} values with CENDL-3 were in good agreement with experimental results. In the plutonium fast cores, the k{sub eff} values were improved significantly with CENDL-3. This is due to the reevaluation of the fission spectrum and elastic angular distributions of {sup 239}Pu and {sup 240}Pu. CENDL-3 underestimated the k{sub eff} values compared with other evaluated data libraries for most spherical or cylindrical assemblies of plutonium or uranium with beryllium.

  17. The Benchmarking of Integrated Business Structures

    Directory of Open Access Journals (Sweden)

    Nifatova Olena M.

    2017-12-01

    Full Text Available The aim of the article is to study the role of benchmarking in the process of integration of business structures in the aspect of knowledge sharing. The results of studying the essential content of the concept “integrated business structure” and its semantic analysis made it possible to form our own understanding of this category, with an emphasis on the need to consider it in the plane of three projections — legal, economic and organizational ones. The economic projection of the essential content of integration associations of business units is supported by the organizational projection, which is expressed through such essential aspects as the existence of a single center that makes key decisions; understanding integration as knowledge sharing; and using benchmarking as an exchange of experience on key business processes. Understanding the process of integration of business units in the aspect of knowledge sharing involves obtaining certain informational benefits. Using benchmarking as an exchange of experience on key business processes in integrated business structures will help improve the basic production processes and increase the efficiency of activity of both the individual business unit and the IBS as a whole.

  18. PANDA experiment and International Standard Problem for passive cooling systems for afterheat removal; PANDA-Versuch und Internationales Standardproblem zu passiven Kuehlsystemen fuer die Nachwaermeabfuhr

    Energy Technology Data Exchange (ETDEWEB)

    Yadigaroglu, G.; Aksan, N.S. [Paul Scherrer Inst. (PSI), Villigen (Switzerland). Lab. fuer Thermohydraulik

    1999-09-03

    In the context of the OECD/NEA, the Paul Scherrer Institut (PSI) is conducting an International Standard Problem which is to provide information on the performance and handling of computer program systems used in connection with passive afterheat removal systems. The PANDA test facility at PSI was designed specifically for investigating such systems. A six-phase PANDA experiment provides the participating organisations with a basis for pre- and post-test calculations of one or more phases, each covering a limited number of system-typical operating states and phenomena. The experiment was specified and carried out in the year under report.

  19. Benchmark calculations on fluid coupled co-axial cylinders typical to LMFBR structures

    International Nuclear Information System (INIS)

    Dostal, M.; Descleve, P.; Gantenbein, F.; Lazzeri, L.

    1983-01-01

    This paper describes a joint effort promoted and funded by the Commission of the European Communities, under the umbrella of the Fast Reactor Co-ordinating Committee and Working Group No. 2 on Codes and Standards, with the purpose of testing several programs currently used for dynamic analysis of fluid-coupled structures. The scope of the benchmark calculations is limited to beam-type modes of vibration, small displacements of the structures and small pressure variations such as those encountered in seismic or flow-induced vibration problems. Five computer codes were used: ANSYS, AQUAMODE, NOVAX, MIAS/SAP4 and ZERO, where each program employs a different structural-fluid formulation. The calculations were performed for four different geometrical configurations of concentric cylinders, where the effects of gap size, water level and support conditions were considered. The analytical work was accompanied by experiments carried out on a purpose-built rig. The test rig consisted of two concentric cylinders independently supported on flexible cantilevers. Geometrical simplicity and attention in the rig design to eliminating structural coupling between the cylinders led to unambiguous test results. Only the beam natural frequencies, in phase and out of phase, were measured. The comparison of the different analytical methods and experimental results is presented and discussed. The degree of agreement varied between very good and unacceptable. (orig./GL)

  20. RADSAT Benchmarks for Prompt Gamma Neutron Activation Analysis Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Burns, Kimberly A.; Gesh, Christopher J.

    2011-07-01

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of the elemental composition of unknown samples. High-resolution gamma-ray spectrometers are used in these applications to measure the spectrum of the emitted photon flux, which consists of both continuum and characteristic gamma rays with discrete energies. Monte Carlo transport is the most commonly used simulation tool for this type of problem, but computational times can be prohibitively long. This work explores the use of multi-group deterministic methods for the simulation of coupled neutron-photon problems. The main purpose of this work is to benchmark several problems modeled with RADSAT and MCNP against experimental data. Additionally, the cross section libraries for RADSAT are updated to include ENDF/B-VII cross sections. Preliminary findings show promising results when compared to MCNP and experimental data, but also areas where additional inquiry and testing are needed. The potential benefits and shortcomings of the multi-group-based approach are discussed in terms of accuracy and computational efficiency.