WorldWideScience

Sample records for benchmark analyses results

  1. Benchmarking result diversification in social image retrieval

    DEFF Research Database (Denmark)

    Ionescu, Bogdan; Popescu, Adrian; Müller, Henning

    2014-01-01

    This article addresses the issue of retrieval result diversification in the context of social image retrieval and discusses the results achieved during the MediaEval 2013 benchmarking. 38 runs and their results are described and analyzed in this text. A comparison of the use of expert vs. crowdsourcing annotations shows that crowdsourcing results are slightly different and have higher inter-observer differences, but results are comparable at lower cost. Multimodal approaches give the best results in terms of cluster recall. Manual approaches can lead to high precision but often lower diversity. With this detailed result analysis we give future insights on this matter.
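
    The diversity measure referred to above, cluster recall at a cutoff N (CR@N), is commonly computed as the fraction of ground-truth clusters represented among the top N retrieved images. A minimal sketch of that computation, with illustrative names (not the MediaEval evaluation tooling itself):

        def cluster_recall_at_n(ranked_ids, cluster_of, n=20):
            # ranked_ids: image ids in ranked retrieval order.
            # cluster_of: dict mapping image id -> ground-truth cluster label.
            all_clusters = set(cluster_of.values())
            seen = {cluster_of[i] for i in ranked_ids[:n] if i in cluster_of}
            return len(seen) / len(all_clusters)

        # Example: 3 ground-truth clusters, the top 4 results cover 2 of them.
        clusters = {"a": 0, "b": 0, "c": 1, "d": 2, "e": 1}
        print(cluster_recall_at_n(["a", "b", "c", "e"], clusters, n=4))  # 0.666...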

  2. Benchmarking of hospital information systems: a comparative analysis of benchmarking clusters in German-speaking countries

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system's and information management's costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme covers both the general conditions and the examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their forms of cooperation. The benchmarking clusters also deal with different benchmarking subjects. Assessing the costs and quality of application systems, physical data-processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance, and the quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for the benchmarking of hospital information systems.

  3. Evaluation of PWR and BWR pin cell benchmark results

    Energy Technology Data Exchange (ETDEWEB)

    Pijlgroms, B.J.; Gruppelaar, H.; Janssen, A.J. (Netherlands Energy Research Foundation (ECN), Petten (Netherlands)); Hoogenboom, J.E.; Leege, P.F.A. de (Interuniversitair Reactor Inst., Delft (Netherlands)); Voet, J. van der (Gemeenschappelijke Kernenergiecentrale Nederland NV, Dodewaard (Netherlands)); Verhagen, F.C.M. (Keuring van Electrotechnische Materialen NV, Arnhem (Netherlands))

    1991-12-01

    Benchmark results of the Dutch PINK working group on the PWR and BWR pin cell calculational benchmark as defined by EPRI are presented and evaluated. The observed discrepancies are problem dependent: part of the results is satisfactory, while other results require further analysis. A brief overview is given of the different code packages used in this analysis. (author). 14 refs., 9 figs., 30 tabs.

  4. Surveying and benchmarking techniques to analyse DNA gel fingerprint images.

    Science.gov (United States)

    Heras, Jónathan; Domínguez, César; Mata, Eloy; Pascual, Vico

    2016-11-01

    DNA fingerprinting is a genetic typing technique that allows the analysis of the genomic relatedness between samples and the comparison of DNA patterns. The analysis of DNA gel fingerprint images usually consists of five consecutive steps: image pre-processing, lane segmentation, band detection, normalization and fingerprint comparison. In this article, we first survey the main methods that have been applied in the literature in each of these stages. Second, we focus on lane-segmentation and band-detection algorithms, as they are the steps that usually require user intervention, and identify the seven core algorithms used for both tasks. Subsequently, we present a benchmark that includes a data set of images, the gold standards associated with those images and the tools to measure the performance of lane-segmentation and band-detection algorithms. Finally, we implement the core algorithms used for both lane segmentation and band detection, and evaluate their performance using our benchmark. From that study, we conclude that the average-profile algorithm is the best starting point for lane segmentation and band detection.
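
    As a rough illustration of the average-profile idea named in the conclusion, the sketch below locates lane centers by averaging intensity along image columns and picking peaks. It assumes bright lanes on a dark background in a grayscale image; the function name and parameters are illustrative, not taken from the paper's implementation:

        import numpy as np
        from scipy.signal import find_peaks

        def lane_centers_average_profile(gel, min_lane_distance=20):
            # gel: 2-D array (rows x columns); higher values = brighter pixels.
            profile = gel.mean(axis=0)  # average intensity profile over rows
            profile = (profile - profile.min()) / (np.ptp(profile) + 1e-12)
            peaks, _ = find_peaks(profile, distance=min_lane_distance,
                                  prominence=0.1)
            return peaks  # column indices of candidate lane centers

    Band detection can reuse the same routine on the row-wise profile of a single segmented lane.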

  5. Analyzing the BBOB results by means of benchmarking concepts.

    Science.gov (United States)

    Mersmann, O; Preuss, M; Trautmann, H; Bischl, B; Weihs, C

    2015-01-01

    We present methods to answer two basic questions that arise when benchmarking optimization algorithms. The first is: which algorithm is the "best" one? The second is: which algorithm should I use for my real-world problem? Both are connected, and neither is easy to answer. We present a theoretical framework for designing and analyzing the raw data of such benchmark experiments. This represents a first step in answering the aforementioned questions. The 2009 and 2010 BBOB benchmark results are analyzed by means of this framework, and we derive insight regarding the answers to the two questions. Furthermore, we discuss how to properly aggregate rankings from algorithm evaluations on individual problems into a consensus, its theoretical background, and which common pitfalls should be avoided. Finally, we address the grouping of test problems into sets with similar optimizer rankings and investigate whether these are reflected by already proposed test-problem characteristics, finding that this is not always the case.
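
    One standard way to aggregate per-problem rankings into a consensus is the Borda count; the paper treats consensus rankings more formally, so the sketch below is only an illustration of the general idea:

        from collections import defaultdict

        def borda_consensus(rankings):
            # rankings: list of rankings (best first) of the same algorithms.
            scores = defaultdict(int)
            for ranking in rankings:
                n = len(ranking)
                for position, algo in enumerate(ranking):
                    scores[algo] += n - position  # best gets the most points
            return sorted(scores, key=scores.get, reverse=True)

        # Rankings of three algorithms on three benchmark problems:
        print(borda_consensus([["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]))
        # -> ['A', 'B', 'C']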

  6. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    Science.gov (United States)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  7. Results of the benchmark for blade structural models, part A

    DEFF Research Database (Denmark)

    Lekou, D.J.; Chortis, D.; Belen Fariñas, A.;

    2013-01-01

    A benchmark on structural design methods for blades was performed within the InnWind.Eu project under WP2 “Lightweight Rotor”, Task 2.2 “Lightweight structural design”. The benchmark is based on the reference wind turbine and the reference blade provided by DTU [1]. "Structural concept developers/modelers" of WP2 were provided with the necessary input for a comparison numerical simulation run, upon definition of the reference blade. The present document describes the results of the comparison simulation runs that were performed by the partners involved in Task 2.2 of the InnWind.Eu project.

  8. NAS Parallel Benchmark Results 11-96. 1.0

    Science.gov (United States)

    Bailey, David H.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The NAS Parallel Benchmarks have been developed at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a "pencil and paper" fashion. In other words, the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. These results represent the best results that have been reported to us by the vendors for the specific systems listed. In this report, we present new NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, SGI Origin200, and SGI Origin2000. We also report High Performance Fortran (HPF) based NPB results for IBM SP2 Wide Nodes, HP/Convex Exemplar SPP2000, and SGI/CRAY T3D. These results have been submitted by Applied Parallel Research (APR) and Portland Group Inc. (PGI). We also present sustained performance per dollar for Class B LU, SP and BT benchmarks.

  9. Large Core Code Evaluation Working Group Benchmark Problem Four: neutronics and burnup analysis of a large heterogeneous fast reactor. Part 1. Analysis of benchmark results. [LMFBR]

    Energy Technology Data Exchange (ETDEWEB)

    Cowan, C.L.; Protsik, R.; Lewellen, J.W. (eds.)

    1984-01-01

    The Large Core Code Evaluation Working Group Benchmark Problem Four was specified to provide a stringent test of the current methods used in the nuclear design and analysis process. The benchmark specifications provided a base for performing detailed burnup calculations over the first two irradiation cycles for a large heterogeneous fast reactor. Particular emphasis was placed on the techniques for modeling the three-dimensional benchmark geometry, and sensitivity studies were carried out to determine the performance-parameter sensitivities to changes in the neutronics and burnup specifications. The results of the Benchmark Four calculations indicated that a linked RZ-XY (Hex) two-dimensional representation of the benchmark model geometry can be used to predict mass balance data, power distributions, regionwise fuel exposure data and burnup reactivities with good accuracy when compared with the results of direct three-dimensional computations. Most of the small differences in the results of the benchmark analyses by the different participants were attributed to ambiguities in carrying out the regionwise flux renormalization calculations throughout the burnup step.

  10. The results of the pantograph-catenary interaction benchmark

    Science.gov (United States)

    Bruni, Stefano; Ambrosio, Jorge; Carnicero, Alberto; Cho, Yong Hyeon; Finner, Lars; Ikeda, Mitsuru; Kwon, Sam Young; Massat, Jean-Pierre; Stichel, Sebastian; Tur, Manuel; Zhang, Weihua

    2015-03-01

    This paper describes the results of a voluntary benchmark initiative concerning the simulation of pantograph-catenary interaction, which was proposed and coordinated by Politecnico di Milano with the participation of 10 research institutions from 9 different countries across Europe and Asia. The aims of the benchmark are to assess the dispersion of results on the same simulation study cases, to demonstrate the accuracy of numerical methodologies and simulation models, and to identify the modelling approaches best suited to studying pantograph-catenary interaction. One static and three dynamic simulation cases were defined for a non-existing but realistic high-speed pantograph-catenary couple. These cases were run using 10 of the major simulation codes presently in use for the study of pantograph-catenary interaction, and the results are presented and critically discussed here. All input data required to run the study cases are also provided, allowing the use of this benchmark as a term of comparison for other simulation codes.

  11. [Results of the evaluation of German benchmarking networks funded by the Ministry of Health].

    Science.gov (United States)

    de Cruppé, Werner; Blumenstock, Gunnar; Fischer, Imma; Selbmann, Hans-Konrad; Geraedts, Max

    2011-01-01

    Nine out of ten demonstration projects on clinical benchmarking funded by the German Ministry of Health were evaluated. Project reports and interviews were uniformly analysed using a list of criteria and a scheme to categorize the realized benchmarking approach. At the end of the funding period four benchmarking networks had implemented all benchmarking steps, and six were continued after funding had expired. The improvement of outcome quality cannot yet be assessed. Factors promoting the introduction of benchmarking networks with regard to organisational and process aspects of benchmarking implementation were derived.

  12. Experiment vs simulation RT WFNDEC 2014 benchmark: CIVA results

    Science.gov (United States)

    Tisseur, D.; Costin, M.; Rattoni, B.; Vienne, C.; Vabre, A.; Cattiaux, G.; Sollier, T.

    2015-03-01

    The French Alternative Energies and Atomic Energy Commission (CEA) has for many years developed the CIVA software dedicated to the simulation of NDE techniques such as Radiographic Testing (RT). RT modelling is achieved in CIVA using a combination of a deterministic approach based on ray tracing for transmission beam simulation and a Monte Carlo model for the scattered beam computation. Furthermore, CIVA includes various detector models, in particular common X-ray films and photostimulable phosphor plates. This communication presents the results obtained with the configurations proposed in the WFNDEC 2014 RT modelling benchmark with the RT models implemented in the CIVA software.

  13. Actinides transmutation - a comparison of results for PWR benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Claro, Luiz H. [Instituto de Estudos Avancados (IEAv/CTA), Sao Jose dos Campos, SP (Brazil)], e-mail: luizhenu@ieav.cta.br

    2009-07-01

    The physical aspects involved in the Partitioning and Transmutation (P and T) of minor actinides (MA) and fission products (FP) generated by PWR reactors are of great interest in the nuclear industry. In addition, the reduction in the storage of radioactive wastes is related to the acceptability of nuclear electric power. Among the several concepts for partitioning and transmutation suggested in the literature, one involves PWR reactors burning fuel containing plutonium and minor actinides reprocessed from the UO{sub 2} used in previous stages. This work presents the results of calculations for a P and T benchmark carried out with the WIMSD5B program, using its new cross-section library generated from ENDF-B-VII, and a comparison with the results published in the literature from other calculations. For the comparison, the benchmark transmutation concept based on a typical PWR cell was used, and the analyzed results were the k{infinity} and the atomic densities of the isotopes Np-239, Pu-241, Pu-242 and Am-242m as functions of burnup, considering a discharge burnup of 50 GWd/tHM. (author)

  14. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  15. IAEA coordinated research project (CRP) on 'Analytical and experimental benchmark analyses of accelerator driven systems'

    Energy Technology Data Exchange (ETDEWEB)

    Abanades, Alberto [Universidad Politecnica de Madrid (Spain); Aliberti, Gerardo; Gohar, Yousry; Talamo, Alberto [ANL, Argonne (United States); Bornos, Victor; Kiyavitskaya, Anna [Joint Institute of Power Eng. and Nucl. Research ' Sosny' , Minsk (Belarus); Carta, Mario [ENEA, Casaccia (Italy); Janczyszyn, Jerzy [AGH-University of Science and Technology, Krakow (Poland); Maiorino, Jose [IPEN, Sao Paulo (Brazil); Pyeon, Cheolho [Kyoto University (Japan); Stanculescu, Alexander [IAEA, Vienna (Austria); Titarenko, Yury [ITEP, Moscow (Russian Federation); Westmeier, Wolfram [Wolfram Westmeier GmbH, Ebsdorfergrund (Germany)

    2008-07-01

    In December 2005, the International Atomic Energy Agency (IAEA) started a Coordinated Research Project (CRP) on 'Analytical and Experimental Benchmark Analyses of Accelerator Driven Systems'. The overall objective of the CRP, performed within the framework of the Technical Working Group on Fast Reactors (TWGFR) of the IAEA's Nuclear Energy Department, is to increase the capability of interested Member States in developing and applying advanced reactor technologies in the area of long-lived radioactive waste utilization and transmutation. The specific objective of the CRP is to improve the present understanding of the coupling of an external neutron source (e.g. a spallation source) with a multiplicative sub-critical core. The participants are performing computational and experimental benchmark analyses using integrated calculation schemes and simulation methods. The CRP aims at integrating some of the planned experimental demonstration projects of the coupling between a sub-critical core and an external neutron source (e.g. YALINA Booster in Belarus, and Kyoto University's Critical Assembly (KUCA)). The objective of these experimental programs is to validate computational methods, obtain high-energy nuclear data, characterize the performance of sub-critical assemblies driven by external sources, and develop and improve techniques for sub-criticality monitoring. The paper summarizes preliminary results obtained to date for some of the CRP benchmarks. (authors)

  16. [Benchmarking projects examining patient care in Germany: methods of analysis, survey results, and best practice].

    Science.gov (United States)

    Blumenstock, Gunnar; Fischer, Imma; de Cruppé, Werner; Geraedts, Max; Selbmann, Hans-Konrad

    2011-01-01

    A survey among 232 German health care organisations addressed benchmarking projects in patient care. Fifty-three projects were reported and analysed using a benchmarking development scheme and a list of criteria. None of the projects satisfied all the criteria. Rather, examples of best practice for individual aspects were identified.

  17. Benchmarking Evaluation Results for Prototype Extravehicular Activity Gloves

    Science.gov (United States)

    Aitchison, Lindsay; McFarland, Shane

    2012-01-01

    The Space Suit Assembly (SSA) Development Team at NASA Johnson Space Center has invested heavily in the advancement of rear-entry planetary exploration suit design but largely deferred development of extravehicular activity (EVA) glove designs, and accepted the risk of using the current flight gloves, Phase VI, for unique mission scenarios outside the Space Shuttle and International Space Station (ISS) Program realm of experience. However, as design reference missions mature, the risks of using heritage hardware have highlighted the need for developing robust new glove technologies. To address the technology gap, the NASA Game-Changing Technology group provided start-up funding for the High Performance EVA Glove (HPEG) Project in the spring of 2012. The overarching goal of the HPEG Project is to develop a robust glove design that increases human performance during EVA and creates a pathway for future implementation of emergent technologies, with specific aims of increasing pressurized mobility to 60% of barehanded capability, increasing durability by 100%, and decreasing the potential of gloves to cause injury during use. The HPEG Project focused initial efforts on identifying potential new technologies and benchmarking the performance of current state-of-the-art gloves to identify trends in design and fit, leading to the establishment of standards and metrics against which emerging technologies can be assessed at both the component and assembly levels. The first of the benchmarking tests evaluated the quantitative mobility performance and subjective fit of four prototype gloves developed by Flagsuit LLC, Final Frontier Designs, ILC Dover, and David Clark Company as compared to the Phase VI. All of the companies were asked to design and fabricate gloves to the same set of NASA-provided hand measurements (which corresponded to a single size of Phase VI glove) and focus their efforts on improving mobility in the metacarpal phalangeal and carpometacarpal joints. Four test...

  18. Pharmacy Survey on Patient Safety Culture: Benchmarking Results.

    Science.gov (United States)

    Herner, Sheryl J; Rawlings, Julia E; Swartzendruber, Kelly; Delate, Thomas

    2017-03-01

    This study's objective was to assess the patient safety culture in a large, integrated health delivery system's pharmacy department to allow for benchmarking with other health systems. This was a cross-sectional survey conducted in a pharmacy department consisting of staff members who provide dispensing, clinical, and support services within an integrated health delivery system. The U.S. Agency for Healthcare Research and Quality's 11-composite, validated Pharmacy Survey on Patient Safety Culture questionnaire was transcribed into an online format. All departmental staff members were invited to participate in this anonymous survey. Cronbach α and overall results, as well as contrasts between dispensing and clinical services staff and between dispensing pharmacists and technicians/clerks, are presented as percentage positive scores (PPSs). Differences in contrasts were assessed with χ² tests of association. Completed questionnaires were received from 598 (69.9%) of 855 employees. Cronbach α ranged from 0.55 to 0.90. Overall, the highest and lowest composite PPSs were for patient counseling (94.5%) and staffing and work pressure (44.7%), respectively. Compared with dispensing service, the clinical service participants had statistically higher PPSs for all composites except patient counseling, communication about mistakes, and staffing and work pressure (all P > 0.05). The technicians/clerks had a statistically higher PPS compared with the pharmacists for communication about mistakes (P = 0.007). All other composites were equivalent between groups. Patient counseling consistently had the highest PPS among the composites measured, but opportunities existed for improvement in all aspects measured. Future research should identify and assess interventions targeted to improving the patient safety culture in pharmacy.
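
    A percentage positive score is simply the share of positive responses for a composite, and a two-group contrast of the kind described can be tested with a chi-square test of association. A minimal sketch with invented counts (not the study's data):

        from scipy.stats import chi2_contingency

        def percent_positive(positive, total):
            # Percentage positive score (PPS) for one survey composite.
            return 100.0 * positive / total

        # Hypothetical [positive, non-positive] response counts per group:
        dispensing = [300, 150]
        clinical = [120, 28]
        print(percent_positive(dispensing[0], sum(dispensing)))  # ~66.7
        print(percent_positive(clinical[0], sum(clinical)))      # ~81.1
        chi2, p, dof, expected = chi2_contingency([dispensing, clinical])
        print(p)  # significance of the dispensing vs. clinical contrast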

  1. A physician's due: measuring physician billing performance, benchmarking results.

    Science.gov (United States)

    Woodcock, Elizabeth W; Browne, Robert C; Jenkins, Jennifer L

    2008-07-01

    A 2008 study focused on four key performance indicators (KPIs) and staffing levels to benchmark the FY07 performance of physician group billing operations. A comparison of the change in the KPIs from FY03 to FY07 for a number of these billing operations disclosed across-the-board improvements. Billing operations did not show significant changes in staffing levels during this time, pointing to the existence of obstacles that prevent staff reductions in this area.

  2. Benchmark of SCALE (SAS2H) isotopic predictions of depletion analyses for San Onofre PWR MOX fuel

    Energy Technology Data Exchange (ETDEWEB)

    Hermann, O.W.

    2000-02-01

    The isotopic composition of mixed-oxide (MOX) fuel, fabricated with both uranium and plutonium, after discharge from reactors is of significant interest to the Fissile Materials Disposition Program. The validation of the SCALE (SAS2H) depletion code for use in the prediction of isotopic compositions of MOX fuel, similar to previous validation studies on uranium-only fueled reactors, has corresponding significance. The EEI-Westinghouse Plutonium Recycle Demonstration Program examined the use of MOX fuel in the San Onofre PWR, Unit 1, during cycles 2 and 3. Isotopic analyses of the MOX spent fuel were conducted on 13 actinides and {sup 148}Nd by either mass or alpha spectrometry. Six fuel pellet samples were taken from four different fuel pins of an irradiated MOX assembly. The measured actinide inventories from those samples have been used to benchmark SAS2H for MOX fuel applications. The average percentage differences in the code results compared with the measurements were {minus}0.9% for {sup 235}U and 5.2% for {sup 239}Pu. The differences for most of the isotopes were significantly larger than in the cases for uranium-only fueled reactors. In general, comparisons of code results with alpha spectrometer data showed extreme differences, although the differences in the calculations compared with mass spectrometer analyses were not substantially larger than those for uranium-only fueled reactors. This benchmark study should be useful in estimating uncertainties of inventory, criticality and dose calculations of MOX spent fuel.

  3. Fast burner reactor benchmark results from the NEA working party on physics of plutonium recycle

    Energy Technology Data Exchange (ETDEWEB)

    Hill, R.N.; Wade, D.C. [Argonne National Lab., IL (United States); Palmiotti, G. [CEA - Cadarache, Saint-Paul-Les-Durance (France)

    1995-12-01

    As part of a program proposed by the OECD/NEA Working Party on Physics of Plutonium Recycling (WPPR) to evaluate different scenarios for the use of plutonium, fast reactor physics benchmarks were developed; fuel cycle scenarios using either PUREX/TRUEX (oxide fuel) or pyrometallurgical (metal fuel) separation technologies were specified. These benchmarks were designed to evaluate the nuclear performance and radiotoxicity impact of a transuranic-burning fast reactor system. International benchmark results are summarized in this paper, and key conclusions are highlighted.

  4. SUSTAINABLE SUCCESS IN HIGHER EDUCATION BY SHARING THE BEST PRACTICES AS A RESULT OF BENCHMARKING PROCESS

    Directory of Open Access Journals (Sweden)

    Anca Gabriela Ilie

    2011-11-01

    The paper proposes to review the main benchmarking criteria, based on the quality indicators used by higher education institutions, and to present new reference indicators resulting from inter-university cooperation. Once these indicators are defined, a national database could be created, and the national performance level of the educational system could be established through benchmarking methods. Going further and generalizing the process, we can compare the national educational system with the European one using the benchmarking approach. The final purpose is that of establishing a group of universities that come together to explore opportunities for benchmarking and sharing best practices in areas of common interest, in order to create a "quality culture" for the Romanian higher education system.

  5. Evaluation of PWR and BWR pin cell benchmark results

    Energy Technology Data Exchange (ETDEWEB)

    Pijlgroms, B.J.; Gruppelaar, H.; Janssen, A.J. (Unit Nuclear Energy, Netherlands Energy Research Foundation ECN, Petten (Netherlands)); Hoogenboom, J.E.; De Leege, P.F.A. (International Reactor Institute IRI, University of Leiden, Leiden (Netherlands)); Van de Voet, J.; Verhagen, F.C.M. (KEMA NV, Arnhem (Netherlands))

    1992-01-01

    In order to carry out reliable reactor core calculations for a boiling water reactor (BWR) or a pressurized water reactor (PWR), reactivity calculations first have to be carried out, for which several calculation programs are available. The purpose of the title project is to exchange experiences in order to improve the knowledge of these reactivity calculations. In a large number of institutes, reactivity calculations of PWR and BWR pin cells were executed by means of available computer codes, and the results are compared. It is concluded that the variations in the calculated results are problem dependent. Part of the results is satisfactory; however, further research is necessary.

  6. Summary of FY15 results of benchmark modeling activities

    Energy Technology Data Exchange (ETDEWEB)

    Arguello, J. Guadalupe [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    Sandia is a contributing partner to the third phase of a U.S.-German "Joint Project" entitled "Comparison of current constitutive models and simulation procedures on the basis of model calculations of the thermo-mechanical behavior and healing of rock salt." The first goal of the project is to check the ability of numerical modeling tools to correctly describe the relevant deformation phenomena in rock salt under various influences. Achieving this goal will lead to increased confidence in the results of numerical simulations related to the secure storage of radioactive wastes in rock salt, thereby enhancing the acceptance of the results. These results may ultimately be used to make various assertions regarding both the stability analysis of an underground repository in salt, during the operating phase, and the long-term integrity of the geological barrier against the release of harmful substances into the biosphere, in the post-operating phase.

  7. BENCHMARKING UPGRADED HOTSPOT DOSE CALCULATIONS AGAINST MACCS2 RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Brotherton, Kevin

    2009-04-30

    The radiological consequence of interest for a documented safety analysis (DSA) is the centerline Total Effective Dose Equivalent (TEDE) incurred by the Maximally Exposed Offsite Individual (MOI) evaluated at the 95th percentile consequence level. An upgraded version of HotSpot (Version 2.07) has been developed with the capabilities to read site meteorological data and perform the necessary statistical calculations to determine the 95th percentile consequence result. These capabilities should allow HotSpot to join MACCS2 (Version 1.13.1) and GENII (Version 1.485) as radiological consequence toolbox codes in the Department of Energy (DOE) Safety Software Central Registry. Using the same meteorological data file, scenarios involving a one curie release of {sup 239}Pu were modeled in both HotSpot and MACCS2. Several sets of release conditions were modeled, and the results compared. In each case, input parameter specifications for each code were chosen to match one another as much as the codes would allow. The results from the two codes are in excellent agreement. Slight differences observed in results are explained by algorithm differences.
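
    The 95th-percentile evaluation described amounts to computing a dose for each meteorological record and taking the 95th percentile of the resulting distribution; a toy sketch (the file name and array contents are invented):

        import numpy as np

        # Hypothetical centerline TEDE results (rem), one per weather record.
        doses = np.loadtxt("tede_per_weather_trial.txt")
        dose_95 = np.percentile(doses, 95)  # 95th percentile consequence
        print(f"95th percentile TEDE: {dose_95:.3e} rem")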

  8. Results of the GABLS3 diurnal-cycle benchmark for wind energy applications

    Science.gov (United States)

    Sanz Rodrigo, J.; Allaerts, D.; Avila, M.; Barcons, J.; Cavar, D.; Chávez Arroyo, RA; Churchfield, M.; Kosovic, B.; Lundquist, JK; Meyers, J.; Muñoz Esparza, D.; Palma, JMLM; Tomaszewski, JM; Troldborg, N.; van der Laan, MP; Veiga Rodrigues, C.

    2017-05-01

    We present results of the GABLS3 model intercomparison benchmark revisited for wind energy applications. The case consists of a diurnal cycle, measured at the 200-m tall Cabauw tower in the Netherlands, including a nocturnal low-level jet. The benchmark includes a sensitivity analysis of WRF simulations using two input meteorological databases and five planetary boundary-layer schemes. A reference set of mesoscale tendencies is used to drive microscale simulations using RANS k-ɛ and LES turbulence models. The validation is based on rotor-based quantities of interest. Cycle-integrated mean absolute errors are used to quantify model performance. The results of the benchmark are used to discuss input uncertainties from mesoscale modelling, different meso-micro coupling strategies (online vs offline) and consistency between RANS and LES codes when dealing with boundary-layer mean flow quantities. Overall, all the microscale simulations produce a consistent coupling with mesoscale forcings.
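
    The cycle-integrated mean absolute error used here for model scoring reduces, for hourly output, to an average of absolute simulated-minus-observed differences over the diurnal cycle; a minimal sketch with illustrative names:

        import numpy as np

        def cycle_mae(simulated, observed):
            # Mean absolute error aggregated over one diurnal cycle.
            return float(np.mean(np.abs(np.asarray(simulated) -
                                        np.asarray(observed))))

        # e.g. cycle_mae(wrf_wind_speed_24h, cabauw_wind_speed_24h)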

  9. Benchmark of Space Charge Simulations and Comparison with Experimental Results for High Intensity, Low Energy Accelerators

    CERN Document Server

    Cousineau, Sarah M

    2005-01-01

    Space charge effects are a major contributor to beam halo and emittance growth leading to beam loss in high intensity, low energy accelerators. As future accelerators strive towards unprecedented levels of beam intensity and beam loss control, a more comprehensive understanding of space charge effects is required. A wealth of simulation tools have been developed for modeling beams in linacs and rings, and with the growing availability of high-speed computing systems, computationally expensive problems that were inconceivable a decade ago are now being handled with relative ease. This has opened the field for realistic simulations of space charge effects, including detailed benchmarks with experimental data. A great deal of effort is being focused in this direction, and several recent benchmark studies have produced remarkably successful results. This paper reviews the achievements in space charge benchmarking in the last few years, and discusses the challenges that remain.

  10. Radiochemical analyses of surface water from U.S. Geological Survey hydrologic bench-mark stations

    Science.gov (United States)

    Janzer, V.J.; Saindon, L.G.

    1972-01-01

    The U.S. Geological Survey's program for collecting and analyzing surface-water samples for radiochemical constituents at hydrologic bench-mark stations is described. Analytical methods used during the study are described briefly, and data obtained from 55 of the network stations in the United States during the period from 1967 to 1971 are given in tabular form. Concentration values are reported for dissolved uranium, radium, gross alpha and gross beta radioactivity. Values are also given for suspended gross alpha radioactivity in terms of natural uranium. Suspended gross beta radioactivity is expressed both as the equilibrium mixture of strontium-90/yttrium-90 and as cesium-137. Other physical parameters reported which describe the samples include the concentrations of dissolved and suspended solids, the water temperature and stream discharge at the time of the sample collection.

  11. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom; Javier Ortensi; Sonat Sen; Hans Hammer

    2013-09-01

    The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III...

  12. [Results of a benchmarking exercise for primary care teams in Barcelona, Spain].

    Science.gov (United States)

    Plaza Tesías, A; Zara Yahni, C; Guarga Rojas, A; Farrés Quesada, J

    2005-02-28

    To identify primary care teams (PCT) with the best overall performance and compare these with other PCT using benchmarking methods. Descriptive, cross-sectional study of a set of indicators for the year 2002. City of Barcelona (northeastern Spain). Thirty-seven PCT with more than 2 years' experience, and 771,811 inhabitants in the catchment area. Indicators were chosen from among those proposed by an advisory group, depending on the feasibility of obtaining information. A total of 17 indicators in 4 dimensions were studied: accessibility, clinical effectiveness, case management capacity, and cost-efficiency. Each PCT was scored for each indicator based on the percentile group in the distribution of scores, and for each dimension based on the mean score for all indicators in a given dimension. The overall score for PCT performance was calculated as the weighted sum of the scores for each dimension. As descriptive variables we analyzed time operating under the revised administrative system, patient visits per population served, the population's economic capacity, and the age of the population. RESULTS. Nine PCT were identified as the benchmark group. Teams in this group had been operating under the revised administrative system for significantly longer than other PCT. In comparison to other PCT, the benchmark group obtained higher scores on all four dimensions, better results on 14 separate indicators, the same results for 1 indicator, and worse results for 2 indicators. CONCLUSIONS. Benchmarking made it possible to identify the PCT with the best performance and to identify areas in need of improvement. This approach is a potentially useful tool for self-evaluation and for stimulating a dynamic of improvement in primary care providers.

  13. OECD/NEA Burnup Credit Calculational Criticality Benchmark Phase I-B Results

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, M.D.

    1993-01-01

    Burnup credit is an ongoing technical concern for many countries that operate commercial nuclear power reactors. In a multinational cooperative effort to resolve burnup credit issues, a Burnup Credit Working Group has been formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development. This working group has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide, and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in the ability to estimate the spent fuel concentrations of most actinides. All methods agree to within 11% about the average for all fission products studied. Furthermore, most deviations are less than 10%, and many are less than 5%. The exceptions are {sup 149}Sm, {sup 151}Sm, and {sup 155}Gd.

  14. OECD/NEA burnup credit calculational criticality benchmark Phase I-B results

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, M.D.; Parks, C.V. [Oak Ridge National Lab., TN (United States); Brady, M.C. [Sandia National Labs., Las Vegas, NV (United States)

    1996-06-01

    In most countries, criticality analysis of LWR fuel stored in racks and casks has assumed that the fuel is fresh with the maximum allowable initial enrichment. This assumption has led to the design of widely spaced and/or highly poisoned storage and transport arrays. If credit is assumed for fuel burnup, initial enrichment limitations can be raised in existing systems, and more compact and economical arrays can be designed. Such reliance on the reduced reactivity of spent fuel for criticality control is referred to as burnup credit. The Burnup Credit Working Group, formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development, has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in the ability to estimate the spent fuel concentrations of most actinides. All methods agree within 11% about the average for all fission products studied. Most deviations are less than 10%, and many are less than 5%. The exceptions are Sm 149, Sm 151, and Gd 155.
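
    The agreement percentages quoted here and in the preceding record are deviations of each participant's calculated nuclide concentration from the average over all submissions; a schematic computation with invented inputs:

        import numpy as np

        def percent_deviation_from_mean(values):
            # Per-participant deviation (%) from the all-participant average.
            values = np.asarray(values, dtype=float)
            mean = values.mean()
            return 100.0 * (values - mean) / mean

        # Hypothetical Pu-239 concentrations (arbitrary units) from five codes:
        print(percent_deviation_from_mean([5.61, 5.70, 5.55, 5.64, 5.58]))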

  15. NACA 0012 benchmark model experimental flutter results with unsteady pressure distributions

    Science.gov (United States)

    Rivera, Jose A., Jr.; Dansberry, Bryan E.; Bennett, Robert M.; Durham, Michael H.; Silva, Walter A.

    1992-01-01

    The Structural Dynamics Division at NASA Langley Research Center has started a wind tunnel activity referred to as the Benchmark Models Program. The primary objective of the program is to acquire measured dynamic instability and corresponding pressure data that will be useful for developing and evaluating aeroelastic type CFD codes currently in use or under development. The program is a multi-year activity that will involve testing of several different models to investigate various aeroelastic phenomena. This paper describes results obtained from a second wind tunnel test of the first model in the Benchmark Models Program. This first model consisted of a rigid semispan wing having a rectangular planform and a NACA 0012 airfoil shape which was mounted on a flexible two degree-of-freedom mount system. Experimental flutter boundaries and corresponding unsteady pressure distribution data acquired over two model chords located at the 60 and 95 percent span stations are presented.

  16. The reactive transport benchmark proposed by GdR MoMaS: presentation and first results

    Energy Technology Data Exchange (ETDEWEB)

    Carrayrou, J. [Institut de Mecanique des Fluides et des Solides, UMR ULP-CNRS 7507, 67 - Strasbourg (France); Lagneau, V. [Ecole des Mines de Paris, Centre de Geosciences, 77 - Fontainebleau (France)

    2007-07-01

    We present here the current context of reactive transport modelling and the major numerical challenges. The GdR MoMaS proposes a benchmark on reactive transport. We present this benchmark and some results obtained for it with two reactive transport codes, HYTEC and SPECY. (authors)

  17. Results of the 2013 UT modeling benchmark obtained with models implemented in CIVA

    Energy Technology Data Exchange (ETDEWEB)

    Toullelan, Gwénaël; Raillon, Raphaële; Chatillon, Sylvain [CEA, LIST, 91191Gif-sur-Yvette (France); Lonne, Sébastien [EXTENDE, Le Bergson, 15 Avenue Emile Baudot, 91300 MASSY (France)

    2014-02-18

    The 2013 Ultrasonic Testing (UT) modeling benchmark concerns direct echoes from side drilled holes (SDH), flat bottom holes (FBH) and corner echoes from backwall-breaking artificial notches inspected with a matrix phased array probe. This communication presents the results obtained with the models implemented in the CIVA software: the pencil model is used to compute the field radiated by the probe, the Kirchhoff approximation is applied to predict the response of FBH and notches, and the SOV (Separation Of Variables) model is used for the SDH responses. The comparisons between simulated and experimental results are presented and discussed.

  18. Jet Substructure at the Tevatron and LHC: New results, new tools, new benchmarks

    CERN Document Server

    Altheimer, A; Asquith, L; Brooijmans, G; Butterworth, J; Campanelli, M; Chapleau, B; Cholakian, A E; Chou, J P; Dasgupta, M; Davison, A; Dolen, J; Ellis, S D; Essig, R; Fan, J J; Field, R; Fregoso, A; Gallicchio, J; Gershtein, Y; Gomes, A; Haas, A; Halkiadakis, E; Halyo, V; Hoeche, S; Hook, A; Hornig, A; Huang, P; Izaguirre, E; Jankowiak, M; Kribs, G; Krohn, D; Larkoski, A J; Lath, A; Lee, C; Lee, S J; Loch, P; Maksimovic, P; Martinez, M; Miller, D W; Plehn, T; Prokofiev, K; Rahmat, R; Rappoccio, S; Safonov, A; Salam, G P; Schumann, S; Schwartz, M D; Schwartzman, A; Seymour, M; Shao, J; Sinervo, P; Son, M; Soper, D E; Spannowsky, M; Stewart, I W; Strassler, M; Strauss, E; Takeuchi, M; Thaler, J; Thomas, S; Tweedie, B; Vasquez Sierra, R; Vermilion, C K; Villaplana, M; Vos, M; Wacker, J; Walker, D; Walsh, J R; Wang, L-T; Wilbur, S; Yavin, I; Zhu, W

    2012-01-01

    In this report we review recent theoretical progress and the latest experimental results in jet substructure from the Tevatron and the LHC. We review the status of and outlook for calculation and simulation tools for studying jet substructure. Following up on the report of the Boost 2010 workshop, we present a new set of benchmark comparisons of substructure techniques, focusing on the set of variables and grooming methods that are collectively known as "top taggers". To facilitate further exploration, we have attempted to collect, harmonise, and publish software implementations of these techniques.

  19. The benchmark aeroelastic models program: Description and highlights of initial results

    Science.gov (United States)

    Bennett, Robert M.; Eckstrom, Clinton V.; Rivera, Jose A., Jr.; Dansberry, Bryan E.; Farmer, Moses G.; Durham, Michael H.

    1992-01-01

    An experimental effort called the Benchmark Models Program was implemented in aeroelasticity. The primary purpose of this program is to provide the necessary data to evaluate computational fluid dynamic codes for aeroelastic analysis. It also focuses on increasing the understanding of the physics of unsteady flows and providing data for empirical design. An overview is given of this program and some results obtained in the initial tests are highlighted. The tests that were completed include measurement of unsteady pressures during flutter of a rigid wing with an NACA 0012 airfoil section and dynamic response measurements of a flexible rectangular wing with a thick circular arc airfoil undergoing shock boundary layer oscillations.

  20. Transient void, pressure drop and critical power BFBT benchmark analysis and results with VIPRE-W / MEFISTO-T

    Energy Technology Data Exchange (ETDEWEB)

    Le Corre, J.M.; Adamsson, C.; Alvarez, P., E-mail: lecorrjm@westinghouse.com, E-mail: carl.adamsson@psi.ch, E-mail: alvarep@westinghouse.com [Westinghouse Electric Sweden AB (Sweden)

    2011-07-01

    A benchmark analysis of the transient BFBT data [1], measured in an 8x8 fuel assembly design under typical BWR transient conditions, was performed using the VIPRE-W/MEFISTO-T code package. This is a continuation of the BFBT steady-state benchmark activities documented in [2] and [3]. All available transient void and pressure drop experimental data were considered and the measurements were compared with the predictions of the VIPRE-W sub-channel analysis code using various modeling approaches, including the EPRI drift flux void correlation. Detailed analyses of the code results were performed and it was demonstrated that the VIPRE-W transient predictions are generally reliable over the tested conditions. Available transient dryout data were also considered and the measurements were compared with the predictions of the VIPRE-W/MEFISTO-T film flow calculations. The code calculates the transient multi-film flowrate distributions in the BFBT bundle, including the effect of spacer grids on drop deposition enhancement, and the dryout criterion corresponds to the total liquid film disappearance. After calibration of the grid enhancement effect with a very small subset of the steady-state critical power database, the code could predict the time and location of transient dryout with very good accuracy. (author)

  1. Variational tensor network renormalization in imaginary time: Benchmark results in the Hubbard model at finite temperature

    Science.gov (United States)

    Czarnik, Piotr; Rams, Marek M.; Dziarmaga, Jacek

    2016-12-01

    A Gibbs operator e^{-βH} for a two-dimensional (2D) lattice system with a Hamiltonian H can be represented by a 3D tensor network, with the third dimension being the imaginary time (inverse temperature) β. Coarse graining the network along β results in a 2D projected entangled-pair operator (PEPO) with a finite bond dimension. The coarse graining is performed by a tree tensor network of isometries. They are optimized variationally to maximize the accuracy of the PEPO as a representation of the 2D thermal state e^{-βH}. The algorithm is applied to the two-dimensional Hubbard model on an infinite square lattice. Benchmark results at finite temperature are obtained that are consistent with the best cluster dynamical mean-field theory and power-series expansion in the regime of parameters where they yield mutually consistent results.
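
    Schematically, in standard Suzuki-Trotter notation, the layered form of the Gibbs operator that the coarse graining starts from is

        e^{-\beta H} = ( e^{-\delta\beta H} )^{N},  with  \delta\beta = \beta / N,

    after which a tree of isometries merges pairs of adjacent imaginary-time layers, each isometry being chosen variationally to maximize the accuracy of the resulting finite-bond-dimension PEPO as an approximation of e^{-\beta H}. (The precise figure of merit optimized by the authors is more involved than this sketch.)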

  2. Results of the Brugge benchmark study for flooding optimization and history matching

    NARCIS (Netherlands)

    Peters, E.; Arts, R.J.; Brouwer, G.K.; Geel, C.R.; Cullick, S.; Lorentzen, R.J.; Chen, Y.; Dunlop, K.N.B.; Vossepoel, F.C.; Xu, R.; Sarma, P.; Alhutali, A.H.; Reynolds, A.C.

    2010-01-01

    In preparation for the SPE Applied Technology Workshop (ATW) held in Brugge in June 2008, a unique benchmark project was organized to test the combined use of waterflooding-optimization and history-matching methods in a closed-loop workflow. The benchmark was organized in the form of an interactive...

  3. Results of the 2014 UT modeling benchmark obtained with models implemented in CIVA: Solution of the FMC-TFM ultrasonic benchmark problem using CIVA

    Science.gov (United States)

    Chatillon, Sylvain; Robert, Sébastien; Brédif, Philippe; Calmon, Pierre; Daniel, Guillaume; Cartier, François

    2015-03-01

    The last decade has seen the emergence of new ultrasonic array techniques going beyond the simple application of suitable delays (phased array techniques) for focusing purposes. Amongst these techniques, the particular method combining the so-called FMC (Full Matrix Capture) acquisition scheme with the synthetic focusing algorithm denoted TFM (Total Focusing Method) has become popular in the NDE community. The 2014 WFNDEC ultrasonic benchmark aims at providing FMC experimental data for evaluating the ability of models to predict images obtained by TFM algorithms (or equivalent ones). In this paper we describe the benchmark and report comparisons obtained with the CIVA simulation software. The simulations and measurements are carried out on two steel blocks, one in carbon steel and another in stainless steel. The reference probe is a 64-element linear array, with 0.5 mm element width and a gap of 0.1 mm, working at 5 MHz. The benchmark problem consists in predicting images of vertical and tilted notches located on plane or inclined backwalls. The notches have different heights and different ligaments. The images can be obtained considering different paths (direct echoes or corner echoes). For each notch, the full matrix capture (FMC) data have been recorded at a single position, with the probe positioned such that the angle between the probe axis and the notch direction corresponds to 45°. The results are calibrated on the response of a 2 mm side drilled hole. For each case, TFM images have been reconstructed for both experimental and simulated signals. The models used are those implemented in CIVA, based on the Kirchhoff approximation. Comparisons are reported and discussed.
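
    The TFM reconstruction referred to here is, at its core, a delay-and-sum of the FMC signals over all transmit-receive pairs. A compact sketch for the direct (single-medium, contact) path follows; variable names are illustrative, and a practical implementation would typically use the analytic (Hilbert-transformed) signals and interpolation rather than nearest-sample lookup:

        import numpy as np

        def tfm_image(fmc, elem_x, grid_x, grid_z, c, fs):
            # fmc: (n_elements, n_elements, n_samples) array of time traces.
            # elem_x: element x-positions (m); c: wave speed (m/s);
            # fs: sampling rate (Hz).
            n_el, _, n_samp = fmc.shape
            image = np.zeros((len(grid_z), len(grid_x)))
            for iz, z in enumerate(grid_z):
                for ix, x in enumerate(grid_x):
                    # One-way times of flight from every element to pixel (x, z).
                    tof = np.sqrt((elem_x - x) ** 2 + z ** 2) / c
                    for tx in range(n_el):
                        idx = np.rint((tof[tx] + tof) * fs).astype(int)
                        ok = idx < n_samp
                        image[iz, ix] += fmc[tx, ok, idx[ok]].sum()
            return np.abs(image)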

  4. VENUS-2 MOX Core Benchmark: Results of ORNL Calculations Using HELIOS-1.4

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, RJ

    2001-02-02

    The Task Force on Reactor-Based Plutonium Disposition, now an Expert Group, was set up through the Organization for Economic Cooperation and Development/Nuclear Energy Agency to facilitate technical assessments of burning weapons-grade plutonium mixed-oxide (MOX) fuel in U.S. pressurized-water reactors and Russian VVER nuclear reactors. More than ten countries participated to advance the work of the Task Force in a major initiative, which was a blind benchmark study to compare code benchmark calculations against experimental data for the VENUS-2 MOX core at SCK-CEN in Mol, Belgium. At the Oak Ridge National Laboratory, the HELIOS-1.4 code was used to perform a comprehensive study of pin-cell and core calculations for the VENUS-2 benchmark.

  5. Comparative assessment of scoring functions on an updated benchmark: 2. Evaluation methods and general results.

    Science.gov (United States)

    Li, Yan; Han, Li; Liu, Zhihai; Wang, Renxiao

    2014-06-23

    Our comparative assessment of scoring functions (CASF) benchmark is created to provide an objective evaluation of current scoring functions. The key idea of CASF is to compare the general performance of scoring functions on a diverse set of protein-ligand complexes. In order to avoid testing scoring functions in the context of molecular docking, the scoring process is separated from the docking (or sampling) process by using ensembles of ligand binding poses that are generated beforehand. Here, we describe the technical methods and evaluation results of the latest CASF-2013 study. The PDBbind core set (version 2013) was employed as the primary test set in this study, which consists of 195 protein-ligand complexes with high-quality three-dimensional structures and reliable binding constants. A panel of 20 scoring functions, most of which are implemented in mainstream commercial software, were evaluated in terms of "scoring power" (binding affinity prediction), "ranking power" (relative ranking prediction), "docking power" (binding pose prediction), and "screening power" (discrimination of true binders from random molecules). Our results reveal that the performance of these scoring functions is generally more promising in the docking/screening power tests than in the scoring/ranking power tests. Top-ranked scoring functions in the scoring power test, such as X-Score(HM), ChemScore@SYBYL, ChemPLP@GOLD, and PLP@DS, are also top-ranked in the ranking power test. Top-ranked scoring functions in the docking power test, such as ChemPLP@GOLD, ChemScore@GOLD, GlideScore-SP, LigScore@DS, and PLP@DS, are also top-ranked in the screening power test. Our results obtained on the entire test set and its subsets suggest that the real challenge in protein-ligand binding affinity prediction lies in polar interactions and the associated desolvation effect. Nonadditive features observed among high-affinity protein-ligand complexes also need attention.
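
    The "scoring power" test mentioned above is, in essence, the linear correlation between a function's scores and the experimental binding data over the 195 complexes; a minimal sketch (the arrays are placeholders, not CASF data):

        import numpy as np

        def scoring_power(predicted_scores, measured_log_ka):
            # Pearson correlation between predicted scores and log Ka values.
            x = np.asarray(predicted_scores, dtype=float)
            y = np.asarray(measured_log_ka, dtype=float)
            return np.corrcoef(x, y)[0, 1]

        # Placeholder values for a handful of protein-ligand complexes:
        print(scoring_power([6.1, 7.4, 5.2, 8.0], [5.9, 7.8, 5.0, 8.3]))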

  6. VENUS-2 MOX Core Benchmark: Results of ORNL Calculations Using HELIOS-1.4 - Revised Report

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, RJ

    2001-06-01

    The Task Force on Reactor-Based Plutonium Disposition (TFRPD) was formed by the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) to study reactor physics, fuel performance, and fuel cycle issues related to the disposition of weapons-grade (WG) plutonium as mixed-oxide (MOX) reactor fuel. To advance the goals of the TFRPD, 10 countries and 12 institutions participated in a major TFRPD activity: a blind benchmark study to compare code calculations to experimental data for the VENUS-2 MOX core at SCK-CEN in Mol, Belgium. At Oak Ridge National Laboratory, the HELIOS-1.4 code system was used to perform the comprehensive study of pin-cell and MOX core calculations for the VENUS-2 MOX core benchmark study.

  7. Benchmarking of the mono-energetic transport coefficients-results from the International Collaboration on Neoclassical Transport in Stellarators (ICNTS)

    Energy Technology Data Exchange (ETDEWEB)

    Beidler, C. D. [Max-Planck-Institute for Plasmaphysik, EURATOM-Association, Greifswald, Germany; Allmaier, K. [Institut fur Theoretische Physik, Association EURATOM, Graz, Austria; Isaev, Maxim Yu [Kurchatov Institute, Moscow, Russia; Kasilov, K. [Institute of Plasma Physics, NSC-KhIPT, Kharkov, Ukraine; Kernbichler, W. [Institut fur Theoretische Physik, Association EURATOM, Graz, Austria; Leitold, G. [Institut fur Theoretische Physik, Association EURATOM, Graz, Austria; Maassberg, H. [Max-Planck-Institute for Plasmaphysik, EURATOM-Association, Greifswald, Germany; Mikkelsen, D. R. [Princeton Plasma Physics Laboratory (PPPL); Murakami, Masanori [ORNL; Schmidt, M. [Max-Planck-Institute for Plasmaphysik, EURATOM-Association, Greifswald, Germany; Spong, Donald A [ORNL; Tribaldos, V. [Universidad Carlos III, Madrid, Spain; Wakasa, A. [Kyoto University, Kyoto, Japan

    2011-01-01

    Numerical results for the three mono-energetic transport coefficients required for a complete neoclassical description of stellarator plasmas have been benchmarked within an international collaboration. These transport coefficients are flux-surface-averaged moments of solutions to the linearized drift kinetic equation which have been determined using field-line-integration techniques, Monte Carlo simulations, a variational method employing Fourier-Legendre test functions and a finite-difference scheme. The benchmarking has been successfully carried out for past, present and future devices which represent different optimization strategies within the extensive configuration space available to stellarators. A qualitative comparison of the results with theoretical expectations for simple model fields is provided. The behaviour of the results for the mono-energetic radial and parallel transport coefficients can be largely understood from such theoretical considerations but the mono-energetic bootstrap current coefficient exhibits characteristics which have not been predicted.

  8. The calculational VVER burnup Credit Benchmark No.3 results with the ENDF/B-VI rev.5 (1999)

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez Gual, Maritza [Centro de Tecnologia Nuclear, La Habana (Cuba). E-mail: mrgual@ctn.isctn.edu.cu

    2000-07-01

    The purpose of this paper is to present the results of the CB3 phase of the VVER calculational benchmark obtained with the recently evaluated nuclear data library ENDF/B-VI Rev. 5 (1999). These results are compared with those obtained by the other participants in the calculations (Czech Republic, Finland, Hungary, Slovakia, Spain and the United Kingdom). The CB3 phase of the VVER calculational benchmark is similar to Phase II-A of the OECD/NEA/INSC BUC Working Group benchmark for PWR. The cases without burnup profile (BP) were performed with the WIMS/D-4 code. The rest of the cases were carried out with the DOTIII discrete ordinates code. The neutron library used was ENDF/B-VI Rev. 5 (1999). WIMS/D-4 (69 groups) is used to collapse cross sections from ENDF/B-VI Rev. 5 (1999) to a 36-group working library for the 2-D calculations. This work also comprises the results of CB1 (likewise obtained with ENDF/B-VI Rev. 5 (1999)) and of CB3 for the cases with a burnup of 30 MWd/TU and cooling times of 1 and 5 years, and for the case with a burnup of 40 MWd/TU and a cooling time of 1 year. (author)

  9. PSI Methodologies for Nuclear Data Uncertainty Propagation with CASMO-5M and MCNPX: Results for OECD/NEA UAM Benchmark Phase I

    Directory of Open Access Journals (Sweden)

    W. Wieselquist

    2013-01-01

    Full Text Available Capabilities for uncertainty quantification (UQ) with respect to nuclear data have been developed at PSI in recent years and applied to the UAM benchmark. The guiding principle for the PSI UQ development has been to implement nonintrusive “black box” UQ techniques in state-of-the-art, production-quality codes already used for routine analyses. Two complementary UQ techniques have been developed thus far: (i) direct perturbation (DP) and (ii) stochastic sampling (SS). The DP technique is, first and foremost, a robust and versatile sensitivity coefficient calculation, applicable to all types of input and output. Using standard uncertainty propagation, the sensitivity coefficients are folded with variance/covariance matrices (VCMs), leading to a local first-order UQ method. The complementary SS technique samples uncertain inputs according to their joint probability distributions and provides a global, all-order UQ method. This paper describes both DP and SS as implemented in the lattice physics code CASMO-5MX (a special PSI-modified version of CASMO-5M), and a preliminary SS technique implemented in MCNPX, routinely used in criticality safety and fluence analyses. Results are presented for the UAM benchmark exercises I-1 (cell) and I-2 (assembly).
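
    The first-order DP approach described above amounts to estimating relative sensitivity coefficients by finite differences and folding them with a relative variance/covariance matrix (the "sandwich rule"). A minimal sketch treating the code as a black box; the function and variable names are hypothetical, not PSI's implementation.

      import numpy as np

      def dp_sensitivities(model, x0, rel_step=0.01):
          # Central-difference relative sensitivities S_i = (x_i / y) * dy/dx_i
          # of a scalar output y = model(x) with respect to each input.
          y0 = model(x0)
          S = np.zeros(len(x0))
          for i in range(len(x0)):
              xp, xm = x0.copy(), x0.copy()
              xp[i] *= 1.0 + rel_step
              xm[i] *= 1.0 - rel_step
              S[i] = (model(xp) - model(xm)) / (2.0 * rel_step * y0)
          return S, y0

      def sandwich_rel_uncertainty(S, rel_vcm):
          # First-order propagation: relative variance of y is S^T C S.
          return float(np.sqrt(S @ rel_vcm @ S))

      # toy check: for y = x0^2 * x1^0.5 the relative sensitivities are 2 and 0.5
      model = lambda x: x[0] ** 2 * x[1] ** 0.5
      S, _ = dp_sensitivities(model, np.array([1.5, 2.0]))
      rel_vcm = np.diag([0.02, 0.05]) ** 2       # 2% and 5% std-dev, uncorrelated
      print(S, sandwich_rel_uncertainty(S, rel_vcm))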

  10. A Consumer's Guide to Benchmark Dose Models: Results of U.S. EPA Testing of 14 Dichotomous, 8 Continuous, and 6 Developmental Models (Presentation)

    Science.gov (United States)

    Benchmark dose risk assessment software (BMDS) was designed by EPA to generate dose-response curves and facilitate the analysis, interpretation and synthesis of toxicological data. Partial results of QA/QC testing of the EPA benchmark dose software (BMDS) are presented. BMDS pr...

  11. Benchmarking a DSP processor

    OpenAIRE

    Lennartsson, Per; Nordlander, Lars

    2002-01-01

    This Master thesis describes the benchmarking of a DSP processor. Benchmarking means measuring the performance in some way. In this report, we have focused on the number of instruction cycles needed to execute certain algorithms. The algorithms we have used in the benchmark are all very common in signal processing today. The results we have reached in this thesis have been compared to benchmarks for other processors, performed by Berkeley Design Technology, Inc. The algorithms were programm...

  12. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    , and more are underway. As a result, there is an increasing need for an independent benchmark for spatio-temporal indexes. This paper characterizes the spatio-temporal indexing problem and proposes a benchmark for the performance evaluation and comparison of spatio-temporal indexes. Notably, the benchmark...

  13. Report of results of benchmarking survey of central heating operations at NASA centers and various corporations

    Science.gov (United States)

    Hoffman, Thomas R.

    1995-01-01

    In recent years, Total Quality Management has swept across the country. Many companies and the Government have started looking at every aspect of how business is done and how money is spent. The idea, or goal, is to provide a service that is better, faster and cheaper. The first step in this process is to document or measure the process or operation as it stands now. For Lewis Research Center, this report is the first step in the analysis of heating plant operations. This report establishes the original benchmark that can be referred to in the future. The report also provides a comparison to other organizations' heating plants to help in the brainstorming of new ideas. The next step is to propose and implement changes that would meet the goals mentioned above. After the changes have been implemented, the measuring process starts over again. This provides for a continuous improvement process.

  14. RELAP5-3D Results for Phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom

    2012-06-01

    The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. As part of the verification and validation activities, steady state results have been obtained for Exercise 2 of Phase I of the newly defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3 using a two-group cross-section library and the RELAP5-3D model developed for Exercise 2.

  15. Dipole model test with one superconducting coil; results analysed

    CERN Document Server

    Durante, M; Ferracin, P; Fessia, P; Gauthier, R; Giloux, C; Guinchard, M; Kircher, F; Manil, P; Milanese, A; Millot, J-F; Muñoz Garcia, J-E; Oberli, L; Perez, J-C; Pietrowicz, S; Rifflet, J-M; de Rijk, G; Rondeaux, F; Todesco, E; Viret, P; Ziemianski, D

    2013-01-01

    This report is the deliverable report 7.3.1, “Dipole model test with one superconducting coil; results analysed”. The report has four parts: “Design report for the dipole magnet”, “Dipole magnet structure tested in LN2”, “Nb3Sn strand procured for one dipole magnet” and “One test double pancake copper coil made”. The four report parts show that, although the magnet construction will only be completed by the end of 2014, all elements are present for a successful completion. Due to the importance of the project for the future of the participants, and given the significant investments made by the participants, there is a full commitment to finish the project.

  16. Dipole model test with one superconducting coil: results analysed

    CERN Document Server

    Bajas, H; Benda, V; Berriaud, C; Bajko, M; Bottura, L; Caspi, S; Charrondiere, M; Clément, S; Datskov, V; Devaux, M; Durante, M; Fazilleau, P; Ferracin, P; Fessia, P; Gauthier, R; Giloux, C; Guinchard, M; Kircher, F; Manil, P; Milanese, A; Millot, J-F; Muñoz Garcia, J-E; Oberli, L; Perez, J-C; Pietrowicz, S; Rifflet, J-M; de Rijk, G; Rondeaux, F; Todesco, E; Viret, P; Ziemianski, D

    2013-01-01

    This report is the deliverable report 7.3.1, “Dipole model test with one superconducting coil; results analysed”. The report has four parts: “Design report for the dipole magnet”, “Dipole magnet structure tested in LN2”, “Nb3Sn strand procured for one dipole magnet” and “One test double pancake copper coil made”. The four report parts show that, although the magnet construction will only be completed by the end of 2014, all elements are present for a successful completion. Due to the importance of the project for the future of the participants, and given the significant investments made by the participants, there is a full commitment to finish the project.

  17. The InterFrost benchmark of Thermo-Hydraulic codes for cold regions hydrology - first inter-comparison results

    Science.gov (United States)

    Grenier, Christophe; Roux, Nicolas; Anbergen, Hauke; Collier, Nathaniel; Costard, Francois; Ferrry, Michel; Frampton, Andrew; Frederick, Jennifer; Holmen, Johan; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Orgogozo, Laurent; Rivière, Agnès; Rühaak, Wolfram; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik

    2015-04-01

    The impacts of climate change in boreal regions have received considerable attention recently due to the warming trends that have been experienced in recent decades and are expected to intensify in the future. Large portions of these regions, corresponding to permafrost areas, are covered by water bodies (lakes, rivers) that interact with the surrounding permafrost. For example, the thermal state of the surrounding soil influences the energy and water budget of the surface water bodies. Also, these water bodies generate taliks (unfrozen zones below) that disturb the thermal regime of the permafrost and may play a key role in the context of climate change. Recent field studies and modeling exercises indicate that a fully coupled 2D or 3D Thermo-Hydraulic (TH) approach is required to understand and model the past and future evolution of landscapes, rivers, lakes and associated groundwater systems in a changing climate. However, there is presently a paucity of 3D numerical studies of permafrost thaw and associated hydrological changes, and this lack can be partly attributed to the difficulty in verifying multi-dimensional results produced by numerical models. Numerical approaches can only be validated against analytical solutions for a purely thermal 1D equation with phase change (e.g. Neumann, Lunardini). When it comes to the coupled TH system (coupling two highly non-linear equations), the only possible approach is to compare the results from different codes on provided test cases and/or to validate against controlled experiments. Such inter-code comparisons can drive discussions on how to improve code performance. A benchmark exercise was initialized in 2014 with a kick-off meeting in Paris in November. Participants from the USA, Canada, Germany, Sweden and France convened, representing altogether 13 simulation codes. The benchmark exercises consist of several test cases inspired by existing literature (e.g. McKenzie et al., 2007) as well as new ones. They
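
    The 1D analytical reference mentioned above (the Neumann solution) is straightforward to reproduce: for the one-phase Stefan (melting) problem the front position is X(t) = 2*lam*sqrt(kappa*t), with lam obtained from a transcendental equation in the Stefan number. A minimal sketch with illustrative parameter values, not data from the benchmark:

      import numpy as np
      from scipy.optimize import brentq
      from scipy.special import erf

      def neumann_lambda(stefan):
          # Solve lam * exp(lam^2) * erf(lam) = Ste / sqrt(pi) for the
          # one-phase Stefan problem; the front is X(t) = 2*lam*sqrt(kappa*t).
          f = lambda lam: lam * np.exp(lam**2) * erf(lam) - stefan / np.sqrt(np.pi)
          return brentq(f, 1e-9, 5.0)

      lam = neumann_lambda(0.1)     # Ste = c*(T_wall - T_melt)/L, illustrative value
      kappa = 1.0e-6                # thermal diffusivity in m^2/s, illustrative
      t = 86400.0                   # one day
      print("front position after one day: %.4f m" % (2.0 * lam * np.sqrt(kappa * t)))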

  18. RESULTS OF ANALYSIS OF BENCHMARKING METHODS OF INNOVATION SYSTEMS ASSESSMENT IN ACCORDANCE WITH AIMS OF SUSTAINABLE DEVELOPMENT OF SOCIETY

    Directory of Open Access Journals (Sweden)

    A. Vylegzhanina

    2016-01-01

    Full Text Available In this work, we introduce the results of a comparative analysis of international innovation system rating indexes for their compliance with the purposes of sustainable development. The purpose of this research is to define requirements for benchmarking methods that assess national or regional innovation systems, and to compare them based on the assumption that an innovation system should be aligned with the sustainable development concept. An analysis of the goal sets and concepts underlying the observed international composite innovation indexes, and a comparison of their metrics and calculation techniques, allowed us to reveal the opportunities and limitations of using these methods within the framework of the sustainable development concept. We formulated targets of innovation development on the basis of the innovation priorities of sustainable socio-economic development. Using a comparative analysis of the indexes against these targets, we identified two methods of assessing innovation systems that are maximally connected with the goals of sustainable development. Nevertheless, no benchmarking method available today meets the need of assessing innovation systems in compliance with the sustainable development concept to a sufficient extent. We suggest practical directions for developing methods that assess innovation systems in compliance with the goals of sustainable development of society.

  19. Solutions of the Two-Dimensional Hubbard Model: Benchmarks and Results from a Wide Range of Numerical Algorithms

    Science.gov (United States)

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia-Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan-Wen; Millis, Andrew J.; Prokof'ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo-Xiao; Zhu, Zhenyue; Gull, Emanuel; Simons Collaboration on the Many-Electron Problem

    2015-10-01

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.
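
    The scale of this effort is easiest to appreciate against the smallest instance of the problem, which is exactly solvable. A minimal sketch: exact diagonalization of the two-site Hubbard model at half filling in the (N=2, Sz=0) sector, checked against the closed-form ground-state energy. The sign convention placing +t on all hopping matrix elements is one common basis choice, stated here as an assumption.

      import numpy as np

      # Basis {|ud,0>, |0,ud>, |u,d>, |d,u>}: two doubly occupied states
      # (energy U) coupled by hopping t to the two singly occupied states.
      t, U = 1.0, 4.0
      H = np.array([[U, 0, t, t],
                    [0, U, t, t],
                    [t, t, 0, 0],
                    [t, t, 0, 0]], dtype=float)
      E0 = np.linalg.eigvalsh(H)[0]
      exact = 0.5 * (U - np.sqrt(U**2 + 16.0 * t**2))
      print(E0, exact)    # both give (U - sqrt(U^2 + 16 t^2)) / 2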

  20. Solutions of the Two Dimensional Hubbard Model: Benchmarks and Results from a Wide Range of Numerical Algorithms

    Science.gov (United States)

    Leblanc, James

    In this talk we present numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice. In order to provide an assessment of our ability to compute accurate results in the thermodynamic limit, we employ numerous methods including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock. We illustrate cases where agreement between different methods is obtained in order to establish benchmark results that should be useful in the validation of future results.

  1. The results of cytogenetic analyses in prenatal diagnosis

    Directory of Open Access Journals (Sweden)

    Jovanović-Privrodski Jadranka

    2007-01-01

    Full Text Available Introduction. G-banding and other classical cytogenetic methods are still in use, together with molecular cytogenetic techniques such as FISH (Fluorescence In Situ Hybridization) and SKY (Spectral Karyotyping). Material and methods. This retrospective study evaluated clinical data on individuals seeking genetic counseling over a 15-year period (1992-2007) at the Medical Genetic Center, Child and Youth Health Care Institute of Vojvodina in Novi Sad. The study included 37,191 genetic counselings and 20,607 prenatal analyses (amniocentesis and cordocentesis). Results. Over the 15-year period (1992-2007), 17,937 amniotic fluid samples were analyzed and 274 abnormal karyotypes were found; out of 2,670 fetal blood samples, there were 78 abnormal karyotypes. Over the same period, prenatal diagnosis using amniocentesis and/or cordocentesis identified 352 fetuses with chromosomal aberrations. Discussion. On average, over the past 15-year period, 8% of pregnancies were controlled with invasive prenatal procedures. The percentage has changed; in fact, it is increasing from year to year. In 1992, only 0.82% (N=139/17,000) of pregnant women in Vojvodina underwent invasive prenatal procedures, and in 2006 the rate increased to 15.65% (N=2,660/17,000). Conclusion. It is necessary to improve and promote the possibilities of genetic counseling and invasive prenatal diagnosis in order to prevent the occurrence of chromosomal aberrations and other genetic diseases.

  2. RANS analyses on erosion behavior of density stratification consisted of helium–air mixture gas by a low momentum vertical buoyant jet in the PANDA test facility, the third international benchmark exercise (IBE-3)

    Energy Technology Data Exchange (ETDEWEB)

    Abe, Satoshi, E-mail: abe.satoshi@jaea.go.jp; Ishigaki, Masahiro; Sibamoto, Yasuteru; Yonomoto, Taisuke

    2015-08-15

    Highlights: • The third international benchmark exercise (IBE-3) focused on the erosion of a density stratification by a vertical buoyant jet in the reactor containment vessel. • Two turbulence model modifications were applied in order to accurately simulate turbulent helium transport in the density stratification. • The analysis results in the case with the turbulence model modifications are in good agreement with the experimental data. • There is a major difference in turbulent helium mass transport between the cases with and without the turbulence model modifications. - Abstract: Density stratification in the reactor containment vessel is an important phenomenon for hydrogen safety. The Japan Atomic Energy Agency (JAEA) has started the ROSA-SA project on containment thermal hydraulics. As part of this activity, we participated in the third international CFD benchmark exercise (IBE-3), focused on the erosion of a density stratification by a vertical buoyant jet in a containment vessel. This paper describes our approach to the IBE-3, focusing on the turbulence transport phenomena involved in eroding the density stratification and introducing modified turbulence models to improve the CFD analyses. For this analysis, we modified the CFD code OpenFOAM using two turbulence model modifications: the Kato-Launder modification, to estimate turbulent kinetic energy production around a stagnation point, and the Katsuki model, to account for turbulence damping in the density stratification. As a result, the modified code predicted the experimental data well. The importance of turbulence transport modeling is also discussed using the calculation results.
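
    For reference, the Kato-Launder modification replaces the strain-squared production term of two-equation turbulence models, P_k = nu_t*S^2, by P_k = nu_t*S*Omega, which vanishes in the nearly irrotational flow at a stagnation point. A minimal sketch of the production term only, as an illustration of the idea rather than the OpenFOAM implementation used in the paper:

      import numpy as np

      def production_kato_launder(nu_t, grad_u):
          # P_k = nu_t * S * Omega, with S and Omega the invariants of the
          # strain-rate and vorticity tensors built from the velocity gradient.
          Sij = 0.5 * (grad_u + grad_u.T)
          Oij = 0.5 * (grad_u - grad_u.T)
          S = np.sqrt(2.0 * np.sum(Sij * Sij))
          Omega = np.sqrt(2.0 * np.sum(Oij * Oij))
          return nu_t * S * Omega

      # near a stagnation point the flow is almost pure strain (Omega ~ 0),
      # so P_k -> 0 instead of the spuriously large standard value nu_t * S^2
      grad_u = np.array([[0.5, 0.0, 0.0],
                         [0.0, -0.5, 0.0],
                         [0.0, 0.0, 0.0]])
      print(production_kato_launder(1.0e-3, grad_u))   # 0.0 for pure strain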

  3. The InterFrost benchmark of Thermo-Hydraulic codes for cold regions hydrology - first inter-comparison phase results

    Science.gov (United States)

    Grenier, Christophe; Rühaak, Wolfram

    2016-04-01

    Climate change impacts in permafrost regions have received considerable attention recently due to the pronounced warming trends experienced in recent decades, which are projected to continue into the future. Large portions of these permafrost regions are characterized by surface water bodies (lakes, rivers) that interact with the surrounding permafrost, often generating taliks (unfrozen zones) within the permafrost that allow for hydrologic interactions between the surface water bodies and underlying aquifers, and thus influence the hydrologic response of a landscape to climate change. Recent field studies and modeling exercises indicate that a fully coupled 2D or 3D Thermo-Hydraulic (TH) approach is required to understand and model the past and future evolution of such units (Kurylyk et al. 2014). However, there is presently a paucity of 3D numerical studies of permafrost thaw and associated hydrological changes, which can be partly attributed to the difficulty in verifying multi-dimensional results produced by numerical models. A benchmark exercise was initialized at the end of 2014. Participants convened from the USA, Canada and Europe, representing altogether 13 simulation codes. The benchmark exercises consist of several test cases inspired by existing literature (e.g. McKenzie et al., 2007) as well as new ones (Kurylyk et al. 2014; Grenier et al. in prep.; Rühaak et al. 2015). They range from simpler, purely thermal 1D cases to more complex, coupled 2D TH cases (benchmarks TH1, TH2, and TH3). Some experimental cases conducted in a cold room complement the validation approach. A web site hosted by LSCE (Laboratoire des Sciences du Climat et de l'Environnement) serves as an interaction platform for the participants and hosts the test case databases at the following address: https://wiki.lsce.ipsl.fr/interfrost. The results of the first stage of the benchmark exercise will be presented. We will mainly focus on the inter-comparison of participant results for the coupled cases TH2 & TH3. Both cases

  4. Revised benchmarking of contact-less fingerprint scanners for forensic fingerprint detection: challenges and results for chromatic white light scanners (CWL)

    Science.gov (United States)

    Kiltz, Stefan; Leich, Marcus; Dittmann, Jana; Vielhauer, Claus; Ulrich, Michael

    2011-02-01

    Mobile contact-less fingerprint scanners can be very important tools for the forensic investigation of crime scenes. To be admissible in court, the data and the collection process must adhere to rules w.r.t. the technology and procedures of acquisition, the processing, and the conclusions drawn from that evidence. Currently, no generally accepted benchmarking methodology exists to support some of the rules regarding localisation, acquisition and pre-processing using contact-less fingerprint scanners. Benchmarking is seen as essential for rating these devices according to their usefulness for investigating crime scenes. Our main contribution is a revised version of our extensible framework for the methodological benchmarking of contact-less fingerprint scanners, using a collection of extensible categories and items. The suggested main categories describing a contact-less fingerprint scanner are country-specific forensic legal requirements, technical properties, application-related aspects, input sensory technology, pre-processing algorithm, and tested object and materials. Using these, it is possible to benchmark fingerprint scanners and describe the setup and the resulting data. Additionally, benchmarking profiles for different usage scenarios are defined. First results for all suggested benchmarking properties, which will be presented in detail in the final paper, were obtained using an industrial device (FRT MicroProf200) by conducting 18 tests on 10 different materials.

  5. Benchmark physics experiment of metallic-fueled LMFBR at FCA. 2; Experiments of FCA assembly XVI-1 and their analyses

    Energy Technology Data Exchange (ETDEWEB)

    Iijima, Susumu; Oigawa, Hiroyuki; Ohno, Akio; Sakurai, Takeshi; Nemoto, Tatsuo; Osugi, Toshitaka; Satoh, Kunio; Hayasaka, Katsuhisa [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Bando, Masaru

    1993-10-01

    The availability of data and methods for the design of a metallic-fueled LMFBR is examined using the experimental results of FCA assembly XVI-1. The experiments included criticality and reactivity coefficients such as Doppler, sodium void, fuel shifting and fuel expansion. Reaction rate ratios, sample worths and control rod worths were also measured. The analysis was performed using three-dimensional diffusion calculations and JENDL-2 cross sections. Predictions of the assembly XVI-1 reactor physics parameters agree reasonably well with the measured values, but for some reactivity coefficients, such as Doppler, large-zone sodium void and fuel shifting, further improvement of the calculation method is needed. (author).

  6. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    Full Text Available The term benchmarking is encountered in the implementation of total quality management (TQM, termed holistic quality management in Indonesian), because benchmarking is a tool for finding ideas and learning from other libraries. Benchmarking is a process of systematic and continuous measurement: measuring and comparing an organization's business processes against those of other organizations in order to obtain information that can help the organization improve its performance.

  7. Financial Benchmarking

    OpenAIRE

    2012-01-01

    This bachelor's thesis focuses on the financial benchmarking of TULIPA PRAHA s.r.o. The aim of this work is to evaluate the financial situation of the company, identify its strengths and weaknesses, and find out how efficient the company's performance is in comparison with top companies in the same field, using the INFA benchmarking diagnostic system of financial indicators. The theoretical part includes the characteristics of financial analysis, on which financial benchmarking is based, a...

  8. Comparing the Floating Point Systems, Inc. AP-190L to representative scientific computers: some benchmark results

    Energy Technology Data Exchange (ETDEWEB)

    Brengle, T.A.; Maron, N.

    1980-03-27

    Results are presented of comparative timing tests made by running a typical FORTRAN physics simulation code on the following machines: DEC PDP-10 with KI processor; DEC PDP-10, KI processor, and FPS AP-190L; CDC 7600; and CRAY-1. Factors such as DMA overhead, code size for the AP-190L, and the relative utilization of floating point functional units for the different machines are discussed. 1 table.

  9. Solutions of the Two-Dimensional Hubbard Model: Benchmarks and Results from a Wide Range of Numerical Algorithms

    Directory of Open Access Journals (Sweden)

    2015-12-01

    Full Text Available Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  10. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...

  11. Lattice Wess-Zumino model with Ginsparg-Wilson fermions: One-loop results and GPU benchmarks

    Science.gov (United States)

    Chen, Chen; Dzienkowski, Eric; Giedt, Joel

    2010-10-01

    We numerically evaluate the one-loop counterterms for the four-dimensional Wess-Zumino model formulated on the lattice using Ginsparg-Wilson fermions of the overlap (Neuberger) variety, together with an auxiliary fermion (plus superpartners), such that a lattice version of U(1)R symmetry is exactly preserved in the limit of vanishing bare mass. We confirm previous findings by other authors that at one loop there is no renormalization of the superpotential in the lattice theory, but that there is a mismatch in the wave-function renormalization of the auxiliary field. We study the range of the Dirac operator that results when the auxiliary fermion is integrated out, and show that localization does occur, but that it is less pronounced than the exponential localization of the overlap operator. We also present preliminary simulation results for this model, and outline a strategy for nonperturbative improvement of the lattice supercurrent through measurements of supersymmetry Ward identities. Related to this, some benchmarks for our graphics processing unit code are provided. Our simulation results find a nearly vanishing vacuum expectation value for the auxiliary field, consistent with approximate supersymmetry at weak coupling.

  12. PSI Methodologies for Nuclear Data Uncertainty Propagation with CASMO-5M and MCNPX: Results for OECD/NEA UAM Benchmark Phase I

    OpenAIRE

    Wieselquist, W.; Zhu, T.; Vasiliev, A; Ferroukhi, H.

    2013-01-01

    Capabilities for uncertainty quantification (UQ) with respect to nuclear data have been developed at PSI in recent years and applied to the UAM benchmark. The guiding principle for the PSI UQ development has been to implement nonintrusive “black box” UQ techniques in state-of-the-art, production-quality codes already used for routine analyses. Two complementary UQ techniques have been developed thus far: (i) direct perturbation (DP) and (ii) stochastic sampling (SS). The DP technique is, ...

  13. Mean Abnormal Result Rate: Proof of Concept of a New Metric for Benchmarking Selectivity in Laboratory Test Ordering.

    Science.gov (United States)

    Naugler, Christopher T; Guo, Maggie

    2016-04-01

    There is a need to develop and validate new metrics to assess the appropriateness of laboratory test requests. The mean abnormal result rate (MARR) is a proposed measure of ordering selectivity, the premise being that higher mean abnormal rates represent more selective test ordering. As a validation of this metric, we compared the abnormal rate of lab tests with the number of tests ordered on the same requisition. We hypothesized that requisitions with larger numbers of requested tests represent less selective test ordering and would therefore have a lower overall abnormal rate. We examined 3,864,083 tests ordered on 451,895 requisitions and found that the MARR decreased from about 25% if one test was ordered to about 7% if nine or more tests were ordered, consistent with less selectivity when more tests were ordered. We then examined the MARR for community-based testing for 1,340 family physicians and found both a wide variation in MARR and an inverse relationship between the total tests ordered per year per physician and the physician-specific MARR. The proposed metric represents a new utilization metric for benchmarking the relative selectivity of test orders among physicians.
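
    The metric itself reduces to a simple aggregation over a table of test results. A minimal sketch, assuming a results table with one row per test and a boolean abnormal flag; the column names are hypothetical, not from the paper.

      import pandas as pd

      results = pd.DataFrame({
          "physician_id":   [1, 1, 1, 2, 2, 2, 2],
          "requisition_id": [10, 10, 11, 20, 20, 20, 21],
          "abnormal":       [True, False, False, False, False, True, False],
      })

      # MARR per physician: fraction of that physician's tests with an abnormal result
      print(results.groupby("physician_id")["abnormal"].mean())

      # abnormal rate versus the number of tests on a requisition (the study
      # reports this falling from about 25% for 1 test to about 7% for 9+)
      per_req = results.groupby("requisition_id")["abnormal"].agg(["mean", "size"])
      print(per_req.groupby("size")["mean"].mean())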

  14. Assessment of Static Delamination Propagation Capabilities in Commercial Finite Element Codes Using Benchmark Analysis

    Science.gov (United States)

    Orifici, Adrian C.; Krueger, Ronald

    2010-01-01

    With capabilities for simulating delamination growth in composite materials becoming available, the need for benchmarking and assessing these capabilities is critical. In this study, benchmark analyses were performed to assess the delamination propagation simulation capabilities of the VCCT implementations in Marc and MD Nastran. Benchmark delamination growth results for Double Cantilever Beam, Single Leg Bending and End Notched Flexure specimens were generated using a numerical approach. This numerical approach was developed previously, and involves comparing results from a series of analyses at different delamination lengths to a single analysis with automatic crack propagation. Specimens were analyzed with three-dimensional and two-dimensional models, and compared with previous analyses using Abaqus. The results demonstrated that the VCCT implementation in Marc and MD Nastran was capable of accurately replicating the benchmark delamination growth results, and that the use of numerical benchmarks offers advantages over benchmarking using experimental and analytical results.
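
    For orientation, the quantity driving such growth analyses is the energy release rate recovered at the crack front. The sketch below shows the textbook one-step VCCT estimate of the mode I component for a 2-D model; it is a generic formula with illustrative numbers, not the Marc or MD Nastran implementation.

      def vcct_mode_one(f_tip, delta_w, width, da):
          # G_I = F * dw / (2 * b * da): crack-tip nodal force F, relative
          # opening dw one element behind the tip, specimen width b, and
          # crack-front element length da.
          return f_tip * delta_w / (2.0 * width * da)

      # illustrative DCB-like values (N, m): yields G_I in J/m^2
      print(vcct_mode_one(f_tip=50.0, delta_w=0.02e-3, width=25.0e-3, da=0.5e-3))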

  15. Artificial insemination in captive Whooping Cranes: Results from genetic analyses

    Science.gov (United States)

    Jones, K.L.; Nicolich, Jane M.

    2001-01-01

    Artificial insemination has been used frequently in the captive whooping crane (Grus americana) population. In the 1980s, it was necessary at times to inseminate females with semen from several males during the breeding season or with semen from multiple males simultaneously due to unknown sperm viability of the breeding males. The goals of this study were to apply microsatellite DNA profiles to resolve uncertain paternities and to use these results to evaluate the current paternity assignment assumptions used by captive managers. Microsatellite DNA profiles were successful in resolving 20 of 23 paternity questions. When resolved paternities were coupled with data on insemination timing, substantial information was revealed on fertilization timing in captive whooping cranes. Delayed fertilization from inseminations 6+ days pre-oviposition suggests capability of sperm storage.

  16. TASAR Certification and Operational Approval Requirements - Analyses and Results

    Science.gov (United States)

    Koczo, Stefan, Jr.

    2015-01-01

    This report documents the results of research and development work performed by Rockwell Collins in addressing the Task 1 objectives under NASA Contract NNL12AA11C. Under this contract Rockwell Collins provided analytical support to the NASA Langley Research Center (LaRC) in NASA's development of a Traffic Aware Strategic Aircrew Requests (TASAR) flight deck Electronic Flight Bag (EFB) application for technology transition into operational use. The two primary objectives of this contract were for Rockwell Collins and the University of Iowa OPL to 1) perform an implementation assessment of TASAR toward early certification and operational approval of TASAR as an EFB application (Task 1 of this contract), and 2) design, develop and conduct two Human-in-the-Loop (HITL) simulation experiments that evaluate TASAR and the associated Traffic Aware Planner (TAP) software application to determine the situational awareness and workload impacts of TASAR in the flight deck, while also assessing the level of comprehension, usefulness, and usability of the features of TAP (Task 2 of this contract). This report represents the Task 1 summary report. The Task 2 summary report is provided in [0].

  17. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
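
    Of the benchmark types recommended here, manufactured solutions are the most mechanical to construct: choose an exact solution, derive the forcing term it implies, and confirm that the code converges to it at the expected rate. A minimal sketch for a 1D Poisson solver, as an illustration of the procedure rather than an example from the report:

      import numpy as np

      # Manufactured solution u(x) = sin(pi x) for -u'' = f on (0, 1) with
      # u(0) = u(1) = 0 implies f(x) = pi^2 sin(pi x); a 2nd-order scheme
      # should show the discretization error shrinking as h^2.
      def mms_error(n):
          x = np.linspace(0.0, 1.0, n + 1)
          h = 1.0 / n
          f = np.pi**2 * np.sin(np.pi * x[1:-1])
          A = (np.diag(2.0 * np.ones(n - 1))
               - np.diag(np.ones(n - 2), 1)
               - np.diag(np.ones(n - 2), -1)) / h**2
          u = np.linalg.solve(A, f)
          return np.max(np.abs(u - np.sin(np.pi * x[1:-1])))

      for n in (16, 32, 64):
          print(n, mms_error(n))   # error drops ~4x per mesh doubling: 2nd order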

  18. Comparison and validation of HEU and LEU modeling results to HEU experimental benchmark data for the Massachusetts Institute of Technology MITR reactor.

    Energy Technology Data Exchange (ETDEWEB)

    Newton, T. H.; Wilson, E. H; Bergeron, A.; Horelik, N.; Stevens, J. (Nuclear Engineering Division); (MIT Nuclear Reactor Lab.)

    2011-03-02

    The Massachusetts Institute of Technology Reactor (MITR-II) is a research reactor in Cambridge, Massachusetts designed primarily for experiments using neutron beam and in-core irradiation facilities. It delivers a neutron flux comparable to current LWR power reactors in a compact 6 MW core using Highly Enriched Uranium (HEU) fuel. In the framework of its non-proliferation policies, the international community presently aims to minimize the amount of nuclear material available that could be used for nuclear weapons. In this geopolitical context, most research and test reactors both domestic and international have started a program of conversion to the use of Low Enriched Uranium (LEU) fuel. A new type of LEU fuel based on an alloy of uranium and molybdenum (UMo) is expected to allow the conversion of U.S. domestic high performance reactors like the MITR-II reactor. Towards this goal, comparisons of MCNP5 Monte Carlo neutronic modeling results for HEU and LEU cores have been performed. Validation of the model has been based upon comparison to HEU experimental benchmark data for the MITR-II. The objective of this work was to demonstrate a model which could represent the experimental HEU data, and therefore could provide a basis to demonstrate LEU core performance. This report presents an overview of MITR-II model geometry and material definitions which have been verified, and updated as required during the course of validation to represent the specifications of the MITR-II reactor. Results of calculations are presented for comparisons to historical HEU start-up data from 1975-1976, and to other experimental benchmark data available for the MITR-II Reactor through 2009. This report also presents results of steady state neutronic analysis of an all-fresh LEU fueled core. Where possible, HEU and LEU calculations were performed for conditions equivalent to HEU experiments, which serves as a starting point for safety analyses for conversion of MITR-II from the use of HEU

  19. Benchmarking Investments in Advancement: Results of the Inaugural CASE Advancement Investment Metrics Study (AIMS). CASE White Paper

    Science.gov (United States)

    Kroll, Juidith A.

    2012-01-01

    The inaugural Advancement Investment Metrics Study, or AIMS, benchmarked investments and staffing in each of the advancement disciplines (advancement services, alumni relations, communications and marketing, fundraising and advancement management) as well as the return on the investment in fundraising specifically. This white paper reports on the…

  20. Preliminary Results for the OECD/NEA Time Dependent Benchmark using Rattlesnake, Rattlesnake-IQS and TDKENO

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, Mark D. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mausolff, Zander [Univ. of Florida, Gainesville, FL (United States); Weems, Zach [Univ. of Florida, Gainesville, FL (United States); Popp, Dustin [Univ. of Florida, Gainesville, FL (United States); Smith, Kristin [Univ. of Florida, Gainesville, FL (United States); Shriver, Forrest [Univ. of Florida, Gainesville, FL (United States); Goluoglu, Sedat [Univ. of Florida, Gainesville, FL (United States); Prince, Zachary [Texas A & M Univ., College Station, TX (United States); Ragusa, Jean [Texas A & M Univ., College Station, TX (United States)

    2016-08-01

    One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data has shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable in a preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuels performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validating the transient solution of Rattlesnake against other time-dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considered both two-dimensional (2D) and 3D configurations for a total of 26 different transients. All are negative reactivity insertions, typically returning to the critical state after some time.
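
    The transients in question are of the kind captured, in their simplest form, by the point kinetics equations. A minimal sketch of a step negative reactivity insertion with one delayed-neutron precursor group; the parameter values are illustrative, not the benchmark specification.

      import numpy as np
      from scipy.integrate import solve_ivp

      beta, lam, Lambda = 0.0065, 0.08, 1.0e-5   # delayed fraction, precursor decay
                                                 # constant, prompt generation time
      rho = -0.5 * beta                          # step negative reactivity at t = 0

      def rhs(t, y):
          n, c = y
          return [((rho - beta) / Lambda) * n + lam * c,
                  (beta / Lambda) * n - lam * c]

      y0 = [1.0, beta / (lam * Lambda)]          # critical steady state for t < 0
      sol = solve_ivp(rhs, (0.0, 10.0), y0, method="Radau")  # stiff system
      # prompt drop to ~beta/(beta - rho) of nominal power, followed by a slow
      # decay governed by the precursors
      print(sol.y[0, -1])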

  1. Comparative analysis of results between CASMO, MCNP and Serpent for a suite of Benchmark problems on BWR reactors; Analisis comparativo de resultados entre CASMO, MCNP y SERPENT para una suite de problemas Benchmark en reactores BWR

    Energy Technology Data Exchange (ETDEWEB)

    Xolocostli M, J. V.; Vargas E, S.; Gomez T, A. M. [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Reyes F, M. del C.; Del Valle G, E., E-mail: vicente.xolocostli@inin.gob.mx [IPN, Escuela Superior de Fisica y Matematicas, UP - Adolfo Lopez Mateos, Edif. 9, 07738 Mexico D. F. (Mexico)

    2014-10-15

    In this paper, a comparison of the CASMO-4, MCNP6 and Serpent codes is made by analyzing a suite of benchmark problems for BWR-type reactors. The benchmark problem consists of two different geometries: a fuel pin cell and a BWR-type fuel assembly. To facilitate the study of reactor physics, the nuclear characteristics of the fuel pin are provided in detail, such as the burnup dependence, the reactivity of selected nuclides, etc. With respect to the fuel assembly, the results presented concern the infinite multiplication factor for different burnup steps and different void conditions. The analysis of this set of benchmark problems provides comprehensive test problems for the next generation of BWR fuels with extended burnup. It is important to note that the purpose of this comparison is to validate the modeling methodologies used for different operating conditions, as would be the case for other BWR assemblies. The results will lie within a range of uncertainty that does not depend on the code used. The Escuela Superior de Fisica y Matematicas of the Instituto Politecnico Nacional (IPN, Mexico) has accumulated some experience in using Serpent, due to the potential of this code compared with other commercial codes such as CASMO and MCNP. The results obtained for the infinite multiplication factor are encouraging and motivate continuing these studies with the generation of the cross sections of a core; in a next step a corresponding nuclear data library will be constructed for use by the codes developed as part of the development project of the Mexican nuclear reactor analysis platform AZTLAN. (Author)

  2. Evaluation of PWR and BWR assembly benchmark calculations. Status report of EPRI computational benchmark results, performed in the framework of the Netherlands` PINK programme (Joint project of ECN, IRI, KEMA and GKN)

    Energy Technology Data Exchange (ETDEWEB)

    Gruppelaar, H. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Klippel, H.T. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Kloosterman, J.L. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Hoogenboom, J.E. [Technische Univ. Delft (Netherlands). Interfacultair Reactor Instituut; Leege, P.F.A. de [Technische Univ. Delft (Netherlands). Interfacultair Reactor Instituut; Verhagen, F.C.M. [Keuring van Electrotechnische Materialen NV, Arnhem (Netherlands); Bruggink, J.C. [Gemeenschappelijke Kernenergiecentrale Nederland N.V., Dodewaard (Netherlands)

    1993-11-01

    Benchmark results of the Dutch PINK working group on calculational benchmarks for single pin cells and multipin assemblies as defined by EPRI are presented and evaluated. First, a short update is given of the methods used by the various institutes involved, as well as of the status with respect to previously performed pin-cell calculations. Problems detected in previous pin-cell calculations are inspected more closely. A detailed discussion of the results of the multipin assembly calculations is given. The assembly consists of 9 pins in a multicell square lattice in which the central pin is filled differently, i.e. a Gd pin for the BWR assembly and a control rod/guide tube for the PWR assembly. The results for pin cells showed rather good overall agreement between the four participants, although BWR pins with a high void fraction turned out to be difficult to calculate. With respect to burnup calculations, good overall agreement for the reactivity swing was obtained, provided that a fine time grid is used. (orig.)

  3. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional distance functions. The frontier is given by an explicit quantile, e.g. “the best 90 %”. Using the explanatory model of the inefficiency, the user can adjust the frontiers by submitting state variables that influence the inefficiency. An efficiency study of Danish dairy farms is implemented in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency.

  4. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
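
    The tier-1 screen described here reduces to comparing measured media concentrations against the benchmark values and retaining exceedances. A minimal sketch with hypothetical chemicals and numbers, not values from the report:

      benchmarks = {"cadmium": 0.005, "zinc": 0.12, "pcb_1254": 0.0002}  # mg/L, hypothetical
      measured   = {"cadmium": 0.002, "zinc": 0.30, "pcb_1254": 0.0001}  # mg/L, hypothetical

      # retain as contaminants of potential concern (COPCs) any chemical whose
      # measured concentration exceeds its NOAEL-based benchmark
      copcs = [chem for chem, conc in measured.items()
               if conc > benchmarks.get(chem, float("inf"))]
      print("retain for further assessment:", copcs)   # ['zinc']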

  5. Managing for Results in America's Great City Schools 2014: Results from Fiscal Year 2012-13. A Report of the Performance Measurement and Benchmarking Project

    Science.gov (United States)

    Council of the Great City Schools, 2014

    2014-01-01

    In 2002 the "Council of the Great City Schools" and its members set out to develop performance measures that could be used to improve business operations in urban public school districts. The Council launched the "Performance Measurement and Benchmarking Project" to achieve these objectives. The purposes of the project was to:…

  6. Description and results of a two-dimensional lattice physics code benchmark for the Canadian Pressure Tube Supercritical Water-cooled Reactor (PT-SCWR)

    Energy Technology Data Exchange (ETDEWEB)

    Hummel, D.W.; Langton, S.E.; Ball, M.R.; Novog, D.R.; Buijs, A., E-mail: hummeld@mcmaster.ca [McMaster Univ., Hamilton, Ontario (Canada)

    2013-07-01

    Discrepancies have been observed among a number of recent reactor physics studies in support of the PT-SCWR pre-conceptual design, including differences in lattice-level predictions of infinite neutron multiplication factor, coolant void reactivity, and radial power profile. As a first step to resolving these discrepancies, a lattice-level benchmark problem was designed based on the 78-element plutonium-thorium PT-SCWR fuel design under a set of prescribed local conditions. This benchmark problem was modeled with a suite of both deterministic and Monte Carlo neutron transport codes. The results of these models are presented here as the basis of a code-to-code comparison. (author)
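
    Two of the quantities compared in this benchmark, lattice reactivity and coolant void reactivity (CVR), are related by a standard bookkeeping step: CVR is the difference between the reactivities computed from the voided-coolant and nominal-coolant multiplication factors. A minimal sketch with illustrative k values, not results from the paper:

      def reactivity_mk(k):
          # reactivity in milli-k: rho = (k - 1) / k * 1000
          return (k - 1.0) / k * 1000.0

      k_cooled, k_voided = 1.25000, 1.28000      # illustrative lattice k-inf values
      cvr = reactivity_mk(k_voided) - reactivity_mk(k_cooled)
      print(f"CVR = {cvr:+.2f} mk")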

  7. Results of the 2016 UT modeling benchmark proposed by the French Atomic Energy Commission (CEA) obtained with models implemented in CIVA software

    Science.gov (United States)

    Toullelan, Gwénaël; Chatillon, Sylvain; Raillon, Raphaële; Mahaut, Steve; Lonné, Sébastien; Bannouf, Souad

    2017-02-01

    For several years, the World Federation of NDE Centers, WFNDEC, has proposed benchmark studies in which simulated results (in ultrasonic, X-ray or eddy current NDT configurations) obtained with various models are compared to experiments. This year, the UT benchmark proposed by the CEA concerns inspection configurations with multi-skip echoes, i.e. the incident beam undergoes several skips on the surface and bottom of the specimen before interacting with the defect. This technique is commonly used to inspect thin specimens and/or in cases of limited access. It relies on the use of the T45° mode in order to avoid mode conversion and to facilitate the interpretation of the echoes. The inspections were carried out with two probes of different apertures working at 5 MHz.

  8. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic...... controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based...... and professional performance but only if prior professional performance was low. Supplemental analyses support the robustness of our results. Findings indicate conditions under which bureaucratic benchmarking information may affect professional performance and advance research on professional control and social...

  9. Lattice Wess-Zumino model with Ginsparg-Wilson fermions: One-loop results and GPU benchmarks

    CERN Document Server

    Chen, Chen; Giedt, Joel

    2010-01-01

    We numerically evaluate the one-loop counterterms for the four-dimensional Wess-Zumino model formulated on the lattice using Ginsparg-Wilson fermions of the overlap (Neuberger) variety, such that a lattice version of U(1)_R symmetry is exactly preserved in the limit of vanishing bare mass. We confirm previous findings by other authors that at one loop there is no renormalization of the superpotential in the lattice theory. We discuss aspects of the simulation of this model that is planned for a follow-up work, and outline a strategy for nonperturbative improvement of the lattice supercurrent through measurements of SUSY Ward identities. Related to this, some benchmarks for our graphics processing unit code are provided. An initial simulation finds a nearly vanishing vacuum expectation value for the auxiliary field, consistent with approximate supersymmetry.

  10. Applications of Integral Benchmark Data

    Energy Technology Data Exchange (ETDEWEB)

    Giuseppe Palmiotti; Teruhiko Kugo; Fitz Trumble; Albert C. (Skip) Kahler; Dale Lancaster

    2014-10-09

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) provide evaluated integral benchmark data that may be used for validation of reactor physics / nuclear criticality safety analytical methods and data, nuclear data testing, advanced modeling and simulation, and safety analysis licensing activities. The handbooks produced by these programs are used in over 30 countries. Five example applications are presented in this paper: (1) Use of IRPhEP Data in Uncertainty Analyses and Cross Section Adjustment, (2) Uncertainty Evaluation Methods for Reactor Core Design at JAEA Using Reactor Physics Experimental Data, (3) Application of Benchmarking Data to a Broad Range of Criticality Safety Problems, (4) Cross Section Data Testing with ICSBEP Benchmarks, and (5) Use of the International Handbook of Evaluated Reactor Physics Benchmark Experiments to Support the Power Industry.

  11. WIPP Benchmark calculations with the large strain SPECTROM codes

    Energy Technology Data Exchange (ETDEWEB)

    Callahan, G.D.; DeVries, K.L. [RE/SPEC, Inc., Rapid City, SD (United States)

    1995-08-01

    This report provides calculational results from the updated Lagrangian structural finite-element programs SPECTROM-32 and SPECTROM-333 for the purpose of qualifying these codes to perform analyses of structural situations in the Waste Isolation Pilot Plant (WIPP). Results are presented for the Second WIPP Benchmark (Benchmark II) Problems and for a simplified heated room problem used in a parallel design calculation study. The Benchmark II problems consist of an isothermal room problem and a heated room problem. The stratigraphy involves 27 distinct geologic layers including ten clay seams, of which four are modeled as frictionless sliding interfaces. The analyses of the Benchmark II problems consider a 10-year simulation period. The evaluation of nine structural codes used in the Benchmark II problems shows that inclusion of finite-strain effects is not as significant as observed for the simplified heated room problem, and a variety of finite-strain and small-strain formulations produced similar results. The simplified heated room problem provides stratigraphic complexity equivalent to the Benchmark II problems but neglects sliding along the clay seams. The simplified heated room problem does, however, provide a calculational check case where the small-strain formulation produced room closures about 20 percent greater than those obtained using finite-strain formulations. A discussion is given of each of the solved problems, and the computational results are compared with available published results. In general, the results of the two SPECTROM large-strain codes compare favorably with results from other codes used to solve the problems.

  12. Efficient algorithms for mixed aleatory-epistemic uncertainty quantification with application to radiation-hardened electronics. Part I, algorithms and benchmark results.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Eldred, Michael Scott

    2009-09-01

    This report documents the results of an FY09 ASC V&V Methods level 2 milestone demonstrating new algorithmic capabilities for mixed aleatory-epistemic uncertainty quantification. Through the combination of stochastic expansions for computing aleatory statistics and interval optimization for computing epistemic bounds, mixed uncertainty analysis studies are shown to be more accurate and efficient than previously achievable. Part I of the report describes the algorithms and presents benchmark performance results. Part II applies these new algorithms to UQ analysis of radiation effects in electronic devices and circuits for the QASPR program.
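
    The algorithmic idea — an inner aleatory loop that computes a statistic, and an outer epistemic loop that bounds it over an interval — can be sketched generically. In this illustration the model function is invented, and a simple grid scan stands in for the report's stochastic expansions and interval optimization:

        import random

        def model(x_aleatory: float, theta_epistemic: float) -> float:
            """Hypothetical response standing in for the real simulation."""
            return theta_epistemic * x_aleatory ** 2

        def aleatory_mean(theta: float, n: int = 5000) -> float:
            """Inner loop: sample the aleatory input, estimate a statistic."""
            rng = random.Random(42)   # fixed seed: common random numbers
            return sum(model(rng.gauss(0.0, 1.0), theta) for _ in range(n)) / n

        # Outer loop: scan the epistemic interval [0.5, 1.5] for bounds.
        thetas = [0.5 + 0.05 * i for i in range(21)]
        means = [aleatory_mean(t) for t in thetas]
        print(f"mean response lies in [{min(means):.3f}, {max(means):.3f}]")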

  13. Benchmarking Pthreads performance

    Energy Technology Data Exchange (ETDEWEB)

    May, J M; de Supinski, B R

    1999-04-27

    The importance of the performance of threads libraries is growing as clusters of shared memory machines become more popular. POSIX threads, or Pthreads, is an industry threads library standard. We have implemented the first Pthreads benchmark suite. In addition to measuring basic thread functions, such as thread creation, we apply the LogP model to standard Pthreads communication mechanisms. We present the results of our tests for several hardware platforms. These results demonstrate that the performance of existing Pthreads implementations varies widely; parts of nearly all of these implementations could be further optimized. Since hardware differences do not fully explain these performance variations, optimizations could improve the implementations. 2. Incorporating Threads Benchmarks into SKaMPI. SKaMPI is an MPI benchmark suite that provides a general framework for performance analysis [7]. SKaMPI does not exhaustively test the MPI standard. Instead, it
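
    The suite itself targets the C Pthreads API. Purely to illustrate the measurement idea — timing a basic thread operation many times and averaging — here is a Python analogue using the standard threading module; the number it prints reflects Python's own overheads, not any Pthreads implementation:

        import threading
        import time

        def noop() -> None:
            pass

        N = 1000
        start = time.perf_counter()
        for _ in range(N):
            t = threading.Thread(target=noop)   # create ...
            t.start()                           # ... start ...
            t.join()                            # ... and reap one thread
        elapsed = time.perf_counter() - start
        print(f"mean create/start/join time: {elapsed / N * 1e6:.1f} us")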

  14. HPCS HPCchallenge Benchmark Suite

    Science.gov (United States)

    2007-11-02

    measured HPCchallenge Benchmark performance on various HPC architectures — from Cray X1s to Beowulf clusters — in the presentation and paper, using the updated results at http://icl.cs.utk.edu/hpcc/hpcc_results.cgi. Even a small percentage of random

  15. Reevaluation of JACS code system benchmark analyses of the heterogeneous system. Fuel rods in U+Pu nitric acid solution system

    Energy Technology Data Exchange (ETDEWEB)

    Takada, Tomoyuki; Miyoshi, Yoshinori; Katakura, Jun-ichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2003-03-01

    In order to evaluate the accuracy of critical calculations using the combination of the multi-group constant library MGCL and the 3-dimensional Monte Carlo code KENO-IV within the criticality safety evaluation code system JACS, benchmark calculations were carried out from 1980 to 1982. In some of the heterogeneous-system cases the calculated neutron multiplication factor was less than 0.95. In this report, the heterogeneous U+Pu nitric acid solution systems containing neutron poison presented in JAERI-M 9859 were recalculated to identify the cause. The present study has shown that the k_eff value less than 0.95 given in JAERI-M 9859 is caused by the fact that the water reflector below the cylindrical container was not taken into consideration in the KENO-IV calculation model. By taking the water reflector into account, the KENO-IV calculation gives a k_eff value greater than 0.95 and a good agreement with the experiment. (author)

  16. Brainstorming as a Tool for the Benchmarking For Achieving Results in the Service-Oriented Businesses (An Online Survey: Study Approach)

    Directory of Open Access Journals (Sweden)

    R. Surya Kiran, D. Shiva Sai Kumar, D. Sateesh Kumar, V. Dilip Kumar, Vikas Kumar Singh

    2013-08-01

    Full Text Available How to benchmark is the problem, and this paper produces an outline of a typical research methodology using the brainstorming technique in order to come to effective conclusions. With the commencement of the STEP (Socio-Cultural, Technical, Economical and Political) reforms in the previous years, business environments are in a state of dynamic change and the change process is still continuing. There has been a tremendous acceleration from the traditional and inward-looking regime to a progressive and outward-looking regime of the policy framework. With the L.P.G. (Liberalization, Privatization and Globalization) in almost all the sectors of STEM (Science, Technology, Engineering and Medicine), the roles of the different sectors are undergoing fundamental/conceptual changes, opening up new sets for analyzing the SWOT (Strength, Weakness, Opportunity and Threat) for the business sectors. The main aim of the Six Sigma concept is to make the results right the first time, every time. So benchmarking is to be done for the profitability and revenue growth of organizations. Brainstorming results could be well interpreted with the superposition matrix considering the ABC and the VED analysis, as the same has been tested in the design of inventory control.

  17. Kvantitativ benchmark - Produktionsvirksomheder

    DEFF Research Database (Denmark)

    Sørensen, Ole H.; Andersen, Vibeke

    Report with the results of the quantitative benchmark of the production companies in the VIPS project.

  18. Benchmarking in Student Affairs.

    Science.gov (United States)

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  19. Comparison of the PHISICS/RELAP5-3D Ring and Block Model Results for Phase I of the OECD MHTGR-350 Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom

    2014-04-01

    The INL PHISICS code system consists of three modules providing improved core simulation capability: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been finalized, and as part of the code verification and validation program the exercises defined for Phase I of the OECD/NEA MHTGR-350 MW Benchmark were completed. This paper provides an overview of the MHTGR Benchmark, and presents selected results of the three steady-state exercises (1-3) defined for Phase I. For Exercise 1, a stand-alone steady-state neutronics solution for an End of Equilibrium Cycle Modular High Temperature Reactor (MHTGR) was calculated with INSTANT, using the provided geometry, material descriptions, and detailed cross-section libraries. Exercise 2 required the modeling of a stand-alone thermal fluids solution. The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 combined the first two exercises in a coupled neutronics and thermal fluids solution, and the coupled code suite PHISICS/RELAP5-3D was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of the traditional RELAP5-3D “ring” model approach vs. a much more detailed model that includes kinetics feedback at the individual block level and thermal feedback on a triangular sub-mesh. The higher fidelity of the block model is illustrated with comparison results on the temperature, power density and flux distributions, and the typical under-predictions produced by the ring model approach are highlighted.

  20. Benchmarking: Achieving the best in class

    Energy Technology Data Exchange (ETDEWEB)

    Kaemmerer, L

    1996-05-01

    Oftentimes, people find the process of organizational benchmarking an onerous task, or, because they do not fully understand the nature of the process, end up with results that are less than stellar. This paper presents the challenges of benchmarking and reasons why benchmarking can benefit an organization in today's economy.

  1. Association between Adult Height and Risk of Colorectal, Lung, and Prostate Cancer: Results from Meta-analyses of Prospective Studies and Mendelian Randomization Analyses

    NARCIS (Netherlands)

    Khankari, Nikhil K.; Shu, Xiao Ou; Wen, Wanqing; Kraft, Peter; Lindström, Sara; Peters, Ulrike; Schildkraut, Joellen; Schumacher, Fredrick; Bofetta, Paolo; Risch, Angela; Bickeböller, Heike; Amos, Christopher I.; Easton, Douglas; Eeles, Rosalind A.; Gruber, Stephen B.; Haiman, Christopher A.; Hunter, David J.; Chanock, Stephen J.; Pierce, Brandon L.; Zheng, Wei; Blalock, Kendra; Campbell, Peter T.; Casey, Graham; Conti, David V.; Edlund, Christopher K.; Figueiredo, Jane; James Gauderman, W.; Gong, Jian; Green, Roger C.; Harju, John F.; Harrison, Tabitha A.; Jacobs, Eric J.; Jenkins, Mark A.; Jiao, Shuo; Li, Li; Lin, Yi; Manion, Frank J.; Moreno, Victor; Mukherjee, Bhramar; Raskin, Leon; Schumacher, Fredrick R.; Seminara, Daniela; Severi, Gianluca; Stenzel, Stephanie L.; Thomas, Duncan C.; Hopper, John L.; Southey, Melissa C.; Makalic, Enes; Schmidt, Daniel F.; Fletcher, Olivia; Peto, Julian; Gibson, Lorna; dos Santos Silva, Isabel; Ahsan, Habib; Whittemore, Alice; Waisfisz, Quinten; Meijers-Heijboer, Hanne; Adank, Muriel; van der Luijt, Rob B.; Uitterlinden, Andre G.; Hofman, Albert; Meindl, Alfons; Schmutzler, Rita K.; Müller-Myhsok, Bertram; Lichtner, Peter; Nevanlinna, Heli; Muranen, Taru A.; Aittomäki, Kristiina; Blomqvist, Carl; Chang-Claude, Jenny; Hein, Rebecca; Dahmen, Norbert; Beckman, Lars; Crisponi, Laura; Hall, Per; Czene, Kamila; Irwanto, Astrid; Liu, Jianjun; Easton, Douglas F.; Turnbull, Clare; Rahman, Nazneen; Eeles, Rosalind; Kote-Jarai, Zsofia; Muir, Kenneth; Giles, Graham; Neal, David; Donovan, Jenny L.; Hamdy, Freddie C.; Wiklund, Fredrik; Gronberg, Henrik; Haiman, Christopher; Schumacher, Fred; Travis, Ruth; Riboli, Elio; Hunter, David; Gapstur, Susan; Berndt, Sonja; Chanock, Stephen; Han, Younghun; Su, Li; Wei, Yongyue; Hung, Rayjean J.; Brhane, Yonathan; McLaughlin, John; Brennan, Paul; McKay, James D.; Rosenberger, Albert; Houlston, Richard S.; Caporaso, Neil; Teresa Landi, Maria; Heinrich, Joachim; Wu, Xifeng; Ye, Yuanqing; Christiani, David C.

    2016-01-01

    Background: Observational studies examining associations between adult height and risk of colorectal, prostate, and lung cancers have generated mixed results. We conducted meta-analyses using data from prospective cohort studies and further carried out Mendelian randomization analyses, using height-

  2. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Science.gov (United States)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally we show a test of hydrostatic equilibrium in a stellar environment which is dominated by radiative effects; in this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to reproduce both behaviour from established and widely used codes and results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
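
    The implicit integration rests on the Jacobian-free Newton-Krylov idea: the Krylov solver only ever needs Jacobian-vector products, which a finite difference of residuals supplies, so the Jacobian is never formed. A minimal sketch on a toy nonlinear system (not MUSIC's equations) using SciPy:

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def residual(u: np.ndarray) -> np.ndarray:
            """Toy nonlinear system F(u) = 0 standing in for a discretized PDE."""
            return np.array([u[0]**2 + u[1] - 3.0,
                             u[0] + u[1]**2 - 5.0])

        def jfnk_solve(u, tol=1e-10, eps=1e-7, max_newton=20):
            for _ in range(max_newton):
                F = residual(u)
                if np.linalg.norm(F) < tol:
                    break
                # Jacobian-free: approximate J v by differencing residuals.
                J = LinearOperator((u.size, u.size),
                                   matvec=lambda v: (residual(u + eps * v) - F) / eps)
                du, _ = gmres(J, -F, atol=1e-12)   # Krylov step (GMRES)
                u = u + du
            return u

        print(jfnk_solve(np.array([1.0, 1.0])))   # converges to (1, 2)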

  3. Managing for Results in America's Great City Schools. A Report of the Performance Measurement and Benchmarking Project

    Science.gov (United States)

    Council of the Great City Schools, 2012

    2012-01-01

    "Managing for Results in America's Great City Schools, 2012" is presented by the Council of the Great City Schools to its members and the public. The purpose of the project was and is to develop performance measures that can improve the business operations of urban public school districts nationwide. This year's report includes data from 61 of the…

  4. Benchmarking v ICT

    OpenAIRE

    Blecher, Jan

    2009-01-01

    The aim of this paper is to describe the benefits of benchmarking IT in a wider context and the scope of benchmarking in general. I specify benchmarking as a process and mention basic rules and guidelines. Further, I define IT benchmarking domains and describe the possibilities for their use. The best-known type of IT benchmark is the cost benchmark, which represents only a subset of benchmarking opportunities. In this paper, the cost benchmark is rather a notional first step toward benchmarking's contribution to the company. IT benchmark...

  5. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  6. HS06 Benchmark for an ARM Server

    CERN Document Server

    Kluth, Stefan

    2013-01-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  7. Benchmarking of energy time series

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, M.A.

    1990-04-01

    Benchmarking consists of the adjustment of time series data from one source in order to achieve agreement with similar data from a second source. The data from the latter source are referred to as the benchmark(s), and often differ in that they are observed at a lower frequency, represent a higher level of temporal aggregation, and/or are considered to be of greater accuracy. This report provides an extensive survey of benchmarking procedures which have appeared in the statistical literature, and reviews specific benchmarking procedures currently used by the Energy Information Administration (EIA). The literature survey includes a technical summary of the major benchmarking methods and their statistical properties. Factors influencing the choice and application of particular techniques are described and the impact of benchmark accuracy is discussed. EIA applications and procedures are reviewed and evaluated for residential natural gas deliveries series and coal production series. It is found that the current method of adjusting the natural gas series is consistent with the behavior of the series and the methods used in obtaining the initial data. As a result, no change is recommended. For the coal production series, a staged approach based on a first differencing technique is recommended over the current procedure. A comparison of the adjustments produced by the two methods is made for the 1987 Indiana coal production series. 32 refs., 5 figs., 1 tab.
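
    The basic operation — adjusting a higher-frequency series so that it agrees with a more accurate lower-frequency total — is easiest to illustrate with the simple pro-rata method; the figures below are invented, and the staged first-differencing approach recommended above preserves month-to-month movements better than this sketch does:

        # Pro-rata benchmarking: scale one year of monthly values so that
        # they sum to the annual benchmark total. Figures are illustrative.

        monthly = [80, 75, 70, 60, 55, 50, 52, 58, 63, 70, 78, 85]
        annual_benchmark = 840.0    # more accurate total from second source

        factor = annual_benchmark / sum(monthly)
        benchmarked = [factor * m for m in monthly]

        print(f"raw sum = {sum(monthly)}, adjustment factor = {factor:.4f}")
        print(f"benchmarked sum = {sum(benchmarked):.1f}")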

  8. Operating costs and energy demand of wastewater treatment plants in Austria: benchmarking results of the last 10 years.

    Science.gov (United States)

    Haslinger, J; Lindtner, S; Krampe, J

    2016-12-01

    This work presents operating costs and energy consumption of Austrian municipal wastewater treatment plants (WWTPs) (≥10,000 PE-design capacity) that have been classified into different size groups. Different processes as well as cost elements are investigated and processes with high relevance regarding operating costs and energy consumption are identified. Furthermore, the work shows the cost-relevance of six investigated cost elements. The analysis demonstrates the size-dependency of operating costs and energy consumption. For the examination of the energy consumption the investigated WWTPs were further classified into WWTPs with aerobic sludge stabilisation and WWTPs with mesophilic sludge digestion. The work proves that energy consumption depends mainly on the type of sludge stabilisation. The results of the investigation can help to determine reduction potential in operating costs and energy consumption of WWTPs and form a basis for more detailed analysis which helps to identify cost and energy saving potential.

  9. Benchmark Results and Theoretical Treatments for Valence-to-Core X-ray Emission Spectroscopy in Transition Metal Compounds

    Energy Technology Data Exchange (ETDEWEB)

    Mortensen, Devon R.; Seidler, Gerald T.; Kas, Joshua J.; Govind, Niranjan; Schwartz, Craig; Pemmaraju, Das; Prendergast, David

    2017-09-20

    We report measurement of the valence-to-core (VTC) region of the K-shell x-ray emission spectra from several Zn and Fe inorganic compounds, and their critical comparison with several existing theoretical treatments. We find generally good agreement between the respective theories and experiment, and in particular find an important admixture of dipole and quadrupole character for Zn materials that is much weaker in Fe-based systems. These results on materials whose simple crystal structures should not, a priori, pose deep challenges to theory will prove useful in guiding the further development of DFT and time-dependent DFT methods for VTC-XES predictions and their comparison to experiment.

  10. DSP Platform Benchmarking : DSP Platform Benchmarking

    OpenAIRE

    Xinyuan, Luo

    2009-01-01

    Benchmarking of DSP kernel algorithms was conducted in the thesis on a DSP processor for teaching in the course TESA26 in the department of Electrical Engineering. It includes benchmarking on cycle count and memory usage. The goal of the thesis is to evaluate the quality of a single MAC DSP instruction set and provide suggestions for further improvement in instruction set architecture accordingly. The scope of the thesis is limited to benchmark the processor only based on assembly coding. The...

  11. Benchmarking Universiteitsvastgoed: Managementinformatie bij vastgoedbeslissingen

    NARCIS (Netherlands)

    Den Heijer, A.C.; De Vries, J.C.

    2004-01-01

    This is the final report of the study "Benchmarking universiteitsvastgoed" (benchmarking university real estate). The report combines two sub-products: the theory report (published in December 2003) and the practice report (published in January 2004). Topics in the theory part include the analysis of other

  12. Radiography benchmark 2014

    Energy Technology Data Exchange (ETDEWEB)

    Jaenisch, G.-R., E-mail: Gerd-Ruediger.Jaenisch@bam.de; Deresch, A., E-mail: Gerd-Ruediger.Jaenisch@bam.de; Bellon, C., E-mail: Gerd-Ruediger.Jaenisch@bam.de [Federal Institute for Materials Research and Testing, Unter den Eichen 87, 12205 Berlin (Germany); Schumm, A.; Lucet-Sanchez, F.; Guerin, P. [EDF R and D, 1 avenue du Général de Gaulle, 92141 Clamart (France)

    2015-03-31

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, evaluating only the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.
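
    For the uncollided (primary) component of Part I, transmission through a single plate follows the Beer-Lambert law. A minimal sketch with an assumed linear attenuation coefficient for iron at a few hundred keV; the benchmark models additionally predict the scattered contribution, which this ignores:

        import math

        # Uncollided transmission through a plate: I/I0 = exp(-mu * t).
        # mu is an assumed illustrative value, not from the benchmark spec.
        mu = 0.06   # linear attenuation coefficient of iron, 1/mm (assumed)

        for t_mm in (5, 10, 20, 40):
            print(f"t = {t_mm:3d} mm  ->  I/I0 = {math.exp(-mu * t_mm):.3f}")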

  13. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    these issues, and describes how effects are closely connected to the perception of benchmarking, the intended users of the system and the application of the benchmarking results. The fundamental basis of this paper is taken from the development of benchmarking in the Danish construction sector. Two distinct...... perceptions of benchmarking will be presented: public benchmarking and best practice benchmarking. These two types of benchmarking are used to characterize and discuss the Danish benchmarking system and to examine which effects, possibilities and challenges follow in the wake of using this kind...... of benchmarking. In conclusion it is argued that clients and the Danish government are the intended users of the benchmarking system. The benchmarking results are primarily used by the government for monitoring and regulation of the construction sector and by clients for contractor selection. The dominating use...

  14. [Benchmarking in health care: conclusions and recommendations].

    Science.gov (United States)

    Geraedts, Max; Selbmann, Hans-Konrad

    2011-01-01

    The German Health Ministry funded 10 demonstration projects and accompanying research of benchmarking in health care. The accompanying research work aimed to infer generalisable findings and recommendations. We performed a meta-evaluation of the demonstration projects and analysed national and international approaches to benchmarking in health care. It was found that the typical benchmarking sequence is hardly ever realised. Most projects lack a detailed analysis of structures and processes of the best performers as a starting point for the process of learning from and adopting best practice. To tap the full potential of benchmarking in health care, participation in voluntary benchmarking projects should be promoted that have been demonstrated to follow all the typical steps of a benchmarking process.

  15. Preliminary Results of Ancillary Safety Analyses Supporting TREAT LEU Conversion Activities

    Energy Technology Data Exchange (ETDEWEB)

    Brunett, A. J. [Argonne National Lab. (ANL), Argonne, IL (United States); Fei, T. [Argonne National Lab. (ANL), Argonne, IL (United States); Strons, P. S. [Argonne National Lab. (ANL), Argonne, IL (United States); Papadias, D. D. [Argonne National Lab. (ANL), Argonne, IL (United States); Hoffman, E. A. [Argonne National Lab. (ANL), Argonne, IL (United States); Kontogeorgakos, D. C. [Argonne National Lab. (ANL), Argonne, IL (United States); Connaway, H. M. [Argonne National Lab. (ANL), Argonne, IL (United States); Wright, A. E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-10-01

    Report (FSAR) [3]. Depending on the availability of historical data derived from HEU TREAT operation, results calculated for the LEU core are compared to measurements obtained from HEU TREAT operation. While all analyses in this report are largely considered complete and have been reviewed for technical content, it is important to note that all topics will be revisited once the LEU design approaches its final stages of maturity. For most safety significant issues, it is expected that the analyses presented here will be bounding, but additional calculations will be performed as necessary to support safety analyses and safety documentation. It should also be noted that these analyses were completed as the LEU design evolved, and therefore utilized different LEU reference designs. Preliminary shielding, neutronic, and thermal hydraulic analyses have been completed and have generally demonstrated that the various LEU core designs will satisfy existing safety limits and standards also satisfied by the existing HEU core. These analyses include the assessment of the dose rate in the hodoscope room, near a loaded fuel transfer cask, above the fuel storage area, and near the HEPA filters. The potential change in the concentration of tramp uranium and change in neutron flux reaching instrumentation has also been assessed. Safety-significant thermal hydraulic items addressed in this report include thermally-induced mechanical distortion of the grid plate, and heating in the radial reflector.

  16. The Statistical Analyses of the White-Light Flares: Two Main Results About Flare Behaviours

    CERN Document Server

    Dal, H A

    2012-01-01

    We present two main results, based on the models and the statistical analyses of 1672 U-band flares. We also discuss the behaviours of the white-light flares. In addition, the parameters of the flares detected from two years of observations of CR Dra are presented. By comparing with the flare parameters obtained from other UV Ceti type stars, we examine the behaviour of optical flare processes along the spectral types. Moreover, using the large white-light flare data set, we aimed to analyse the flare time-scales with respect to some results obtained from X-ray observations. Using the SPSS V17.0 and the GraphPad Prism V5.02 software, the flares detected from CR Dra were modelled with the OPEA function and analysed with the t-test method to compare similar flare events in other stars. In addition, using some regression calculations in order to derive the best histograms, the time-scales of the white-light flares were analysed. Firstly, CR Dra flares have revealed that the white-light flares behave in a similar way as th...
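
    The OPEA (one-phase exponential association) model referred to above has the form y = y0 + (plateau − y0)(1 − e^(−k·x)). A minimal fitting sketch on synthetic data; the study itself used SPSS and GraphPad Prism rather than Python:

        import numpy as np
        from scipy.optimize import curve_fit

        def opea(x, y0, plateau, k):
            """One-phase exponential association (OPEA) model."""
            return y0 + (plateau - y0) * (1.0 - np.exp(-k * x))

        # Synthetic data standing in for flare parameters vs. flare time.
        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 10.0, 50)
        y = opea(x, 0.5, 3.0, 0.8) + rng.normal(0.0, 0.05, x.size)

        params, _ = curve_fit(opea, x, y, p0=[0.0, 1.0, 1.0])
        print("y0 = {:.2f}, plateau = {:.2f}, k = {:.2f}".format(*params))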

  17. Benchmarks for measurement of duplicate detection methods in nucleotide databases.

    Science.gov (United States)

    Chen, Qingyu; Zobel, Justin; Verspoor, Karin

    2017-01-08

    Duplication of information in databases is a major data quality challenge. The presence of duplicates, implying either redundancy or inconsistency, can have a range of impacts on the quality of analyses that use the data. To provide a sound basis for research on this issue in databases of nucleotide sequences, we have developed new, large-scale validated collections of duplicates, which can be used to test the effectiveness of duplicate detection methods. Previous collections were either designed primarily to test efficiency, or contained only a limited number of duplicates of limited kinds. To date, duplicate detection methods have been evaluated on separate, inconsistent benchmarks, leading to results that cannot be compared and, due to limitations of the benchmarks, of questionable generality. In this study, we present three nucleotide sequence database benchmarks, based on information drawn from a range of resources, including information derived from mapping to two data sections within the UniProt Knowledgebase (UniProtKB), UniProtKB/Swiss-Prot and UniProtKB/TrEMBL. Each benchmark has distinct characteristics. We quantify these characteristics and argue for their complementary value in evaluation. The benchmarks collectively contain a vast number of validated biological duplicates; the largest has nearly half a billion duplicate pairs (although this is probably only a tiny fraction of the total that is present). They are also the first benchmarks targeting the primary nucleotide databases. The records include the 21 most heavily studied organisms in molecular biology research. Our quantitative analysis shows that duplicates in the different benchmarks, and in different organisms, have different characteristics. It is thus unreliable to evaluate duplicate detection methods against any single benchmark. For example, the benchmark derived from UniProtKB/Swiss-Prot mappings identifies more diverse types of duplicates, showing the importance of expert curation, but
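
    The simplest duplicate class such benchmarks contain — records whose sequences are identical — can be found by hashing, as in this toy sketch with invented records; the validated benchmark duplicates also cover near-identical and differently annotated entries, which need similarity scoring rather than exact matching:

        from collections import defaultdict
        from hashlib import sha256

        # Toy nucleotide records: (accession, sequence). Data are made up.
        records = [
            ("AB000001", "ATGCGTACGTTAGC"),
            ("AB000002", "ATGCGTACGTTAGC"),   # exact duplicate of AB000001
            ("AB000003", "ATGCGTACGAAAGC"),
        ]

        groups = defaultdict(list)
        for acc, seq in records:
            groups[sha256(seq.encode()).hexdigest()].append(acc)

        pairs = [(a, b) for accs in groups.values()
                 for i, a in enumerate(accs) for b in accs[i + 1:]]
        print(pairs)   # [('AB000001', 'AB000002')]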

  18. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  19. Results of initial analyses of the salt (macro) batch 9 tank 21H qualification samples

    Energy Technology Data Exchange (ETDEWEB)

    Peters, T. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-10-01

    Savannah River National Laboratory (SRNL) analyzed samples from Tank 21H in support of qualification of Interim Salt Disposition Project (ISDP) Salt (Macro) Batch 9 for processing through the Actinide Removal Process (ARP) and the Modular Caustic-Side Solvent Extraction Unit (MCU). This document reports the initial results of the analyses of samples of Tank 21H. Analysis of the Tank 21H Salt (Macro) Batch 9 composite sample indicates that the material does not display any unusual characteristics or observations, such as floating solids, the presence of large amounts of solids, or unusual colors. Further results on the chemistry and other tests will be issued in the future.

  20. Results Of Initial Analyses Of The Salt (Macro) Batch 9 Tank 21H Qualification Samples

    Energy Technology Data Exchange (ETDEWEB)

    Peters, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-10-08

    Savannah River National Laboratory (SRNL) analyzed samples from Tank 21H in support of qualification of Interim Salt Disposition Project (ISDP) Salt (Macro) Batch 9 for processing through the Actinide Removal Process (ARP) and the Modular Caustic-Side Solvent Extraction Unit (MCU). This document reports the initial results of the analyses of samples of Tank 21H. Analysis of the Tank 21H Salt (Macro) Batch 9 composite sample indicates that the material does not display any unusual characteristics. Further results on the chemistry and other tests will be issued in the future.

  1. The validation benchmark analyses for CMS data

    CERN Document Server

    Holub, Lukas

    2016-01-01

    The main goal of this report is to summarize my work at CERN during this summer. My first task was to transfer code and dataset files from the CERN Open Data Portal to GitHub, which will be more convenient for users. The second part of my work was to copy the environment from the CERN Open Data Virtual Machine and reproduce it in the analysis environment SWAN. The last task was to rescale the X-axis of a histogram.

  2. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    Full Text Available The article analyses the evolution and possibilities of application of benchmarking in the telecommunication sphere. It studies the essence of benchmarking on the basis of a generalisation of different scientists' approaches to defining this notion. In order to improve the activity of telecommunication operators, the article identifies the benchmarking technology and the main factors that determine an operator's success in the modern market economy, as well as the mechanism of benchmarking and the component stages of carrying out benchmarking by a telecommunication operator. It analyses the telecommunication market and identifies the dynamics of its development and the tendencies of change in the composition of telecommunication operators and providers. Having generalised the existing experience of benchmarking application, the article identifies the main types of benchmarking of telecommunication operators by the following features: by the level of conduct (branch, inter-branch and international benchmarking); by relation to participation in the conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  3. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other.The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  4. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    OpenAIRE

    Zaharchenko Lolita A.; Kolesnyk Oksana A.

    2013-01-01

    The article analyses evolution of development and possibilities of application of benchmarking in the telecommunication sphere. It studies essence of benchmarking on the basis of generalisation of approaches of different scientists to definition of this notion. In order to improve activity of telecommunication operators, the article identifies the benchmarking technology and main factors, that determine success of the operator in the modern market economy, and the mechanism of benchmarking an...

  5. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties — i.e. visual composition — while smoothly moving through the environment and avoiding obstacles. A large number of different....... For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of scenarios...

  6. Benchmarking Query Execution Robustness

    Science.gov (United States)

    Wiener, Janet L.; Kuno, Harumi; Graefe, Goetz

    Benchmarks that focus on running queries on a well-tuned database system ignore a long-standing problem: adverse runtime conditions can cause database system performance to vary widely and unexpectedly. When the query execution engine does not exhibit resilience to these adverse conditions, addressing the resultant performance problems can contribute significantly to the total cost of ownership for a database system in over-provisioning, lost efficiency, and increased human administrative costs. For example, focused human effort may be needed to manually invoke workload management actions or fine-tune the optimization of specific queries.

  7. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    Full Text Available In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution's competitive position and to learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study indicates the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled the development of a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects, relating them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature and the author's experience from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  8. Diplodon shells from Northwest Patagonia as continental proxy archives: Oxygen isotopic results and sclerochronological analyses

    Science.gov (United States)

    Soldati, A. L.; Beierlein, L.; Jacob, D. E.

    2009-04-01

    Freshwater mussels of the genus Diplodon (Bivalvia, Hyriidae) are the most abundant bivalves (today and in the past) in freshwater bodies on both sides of the South-Andean Cordillera. There are about 25 different Diplodon taxa in Argentina and Chile that could be assigned almost completely to the species Diplodon chilensis (Gray, 1828) and two subspecies: D. ch. chilensis and D. ch. patagonicus; the latter is found in Argentina between Mendoza (32˚ 52' S; 68˚ 51' W) and Chubut (45˚ 51' S; 67˚ 28' W), including the lakes and rivers of the target area, the Nahuel Huapi National Park (Castellanos, 1960). Despite their wide geographic distribution, Diplodon species have only rarely been used as climate archives in the southern hemisphere. Kaandorp et al. (2005) demonstrated for Diplodon longulus (Conrad 1874) collected from the Peruvian Amazon that oxygen isotope patterns in the shells could be used to reconstruct the precipitation regime and the dry/wet seasonality of the monsoonal system in Amazonia. Although this study demonstrated the potential of Diplodon for climatological and ecological reconstructions in the southern hemisphere, as of yet no systematic study of Diplodon as a multi-proxy archive has been undertaken for the Patagonian region. In this work we present sclerochronological analyses supported by δ18Oshell in recent mussels of Diplodon chilensis patagonicus (D'Orbigny, 1835) collected at Laguna El Trébol (42°S, 71°W, Patagonia, Argentina), one of the best-studied water bodies in the region for paleoclimate analysis. Water temperature was measured every six hours for one year using a temperature sensor (Starmon mini®) placed at 5 m depth in the lake, close to a mussel bank. Additionally, δ18Owater was measured monthly for the same time range. δ18Oshell values obtained by micro-milling at high spatial resolution in the growth increments of three Diplodon shells were compared to these records, and to air temperature and

  9. Criteria of benchmark selection for efficient flexible multibody system formalisms

    Directory of Open Access Journals (Sweden)

    Valášek M.

    2007-10-01

    Full Text Available The paper deals with the selection process of benchmarks for testing and comparing efficient flexible multibody formalisms. The existing benchmarks are briefly summarized. The purposes of benchmark selection are investigated. The result of this analysis is the formulation of criteria of benchmark selection for flexible multibody formalisms. Based on these criteria the initial set of suitable benchmarks is described. In addition, the evaluation measures are revised and extended.

  10. Results of initial analyses of the salt (macro) batch 10 tank 21H qualification samples

    Energy Technology Data Exchange (ETDEWEB)

    Peters, T. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-01-01

    Savannah River National Laboratory (SRNL) analyzed samples from Tank 21H in support of qualification of Interim Salt Disposition Project (ISDP) Salt (Macro) Batch 10 for processing through the Actinide Removal Process (ARP) and the Modular Caustic-Side Solvent Extraction Unit (MCU). This document reports the initial results of the analyses of samples of Tank 21H. Analysis of the Tank 21H Salt (Macro) Batch 10 composite sample indicates that the material does not display any unusual characteristics or observations, such as floating solids, the presence of large amounts of solids, or unusual colors. Further sample results will be reported in a future document. This memo satisfies part of Deliverable 3 of the Technical Task Request (TTR).

  11. RESULTS OF ANALYSES OF MACROBATCH 3 DECONTAMINATED SALT SOLUTION (DSS) COALESCER AND PRE-FILTERS

    Energy Technology Data Exchange (ETDEWEB)

    Peters, T.; Fondeur, F.; Fink, S.

    2012-06-13

    SRNL analyzed the pre-filter and Decontaminated Salt Solution (DSS) coalescer from MCU by several analytical methods. The results of these analyses indicate that overall there is light to moderate solids fouling of both the coalescer and pre-filter elements. The majority of the solids contain aluminum, sodium, silicon, and titanium, in oxide and/or hydroxide forms that we have noted before. The titanium is presumably precipitated from leached, dissolved monosodium titanate (MST) or fines from MST at ARP, and the quantity we find is significantly greater than in the past. A parallel report discusses potential causes for the increased leaching rate of MST, showing that increases in the free hydroxide concentration of the feed solutions and of the chemical cleaning solutions lead to faster leaching of titanium.

  12. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  13. The Conic Benchmark Format

    DEFF Research Database (Denmark)

    Friberg, Henrik A.

    This document constitutes the technical reference manual of the Conic Benchmark Format with file extension .cbf or .CBF. It unifies linear, second-order cone (also known as conic quadratic) and semidefinite optimization with mixed-integer variables. The format has been designed with benchmark libraries in mind, and therefore focuses on compact and easily parsable representations. The problem structure is separated from the problem data, and the format moreover facilitates benchmarking of hot-start capability through sequences of changes.

  14. Geoarchaeology of Ancient Karnak's harbour (Upper Egypt) : preliminary results derived from sedimentological analyses

    Science.gov (United States)

    Ghilardi, M.

    2009-04-01

    This paper aims to detail the first results of a geomorphological study conducted in the western part of the Karnak Temple, Upper Egypt. The geoarchaeological approach privileged here helps to better understand the Nile River dynamics in the neighbourhood of the ancient harbour and of the jetty identified by archaeologists. Based on the study of six stratigraphical profiles, realized by the Egyptian Supreme Council of Antiquities, and sixteen manual auger boreholes (up to a maximum depth of 3.50 m) drilled in November 2008, the results clearly indicate the continuous presence of the Nile River westward of the first Pylon. The boreholes were drilled westward and eastward of the ancient fluvial harbour. Fluvial dynamics characterized by flood events, sandy accretions and large Nile silt depositions are presented and discussed here for later palaeoenvironmental reconstruction. The accurate levelling of the different profiles and boreholes, with the help of a topographic survey, allows us to obtain long sedimentological sequences and to correlate the different sedimentary units. Perspectives for research are introduced, including the possibility of performing sedimentological analyses of the grain-size distribution (by sieving) and a magnetic susceptibility study of the different sediments described. Finally, in order to obtain chronostratigraphic sequences, it is also proposed to perform radiocarbon dating on charcoal samples.

  15. The benchmark analysis of gastric, colorectal and rectal cancer pathways: toward establishing standardized clinical pathway in the cancer care.

    Science.gov (United States)

    Ryu, Munemasa; Hamano, Masaaki; Nakagawara, Akira; Shinoda, Masayuki; Shimizu, Hideaki; Miura, Takeshi; Yoshida, Isao; Nemoto, Atsushi; Yoshikawa, Aki

    2011-01-01

    Most clinical pathways for treating cancers in Japan are based on individual physicians' personal experiences rather than on an empirical analysis of clinical data, such as benchmark comparisons with other hospitals. Therefore, these pathways are far from standardized. By comparing detailed clinical data from five cancer centers, we have observed various differences among hospitals. By conducting benchmark analyses, providing detailed feedback to the participating hospitals and repeating the benchmark a year later, we strive to develop more standardized clinical pathways for the treatment of cancers. The Cancer Quality Initiative was launched in 2007 by five cancer centers. Using diagnosis procedure combination data, the member hospitals benchmarked their pre-operative and post-operative lengths of stay, the duration of antibiotic administration and the post-operative fasting duration for gastric, colon and rectal cancers. The benchmark was conducted by disclosing hospital identities and performed using 2007 and 2008 data. In the 2007 benchmark, substantial differences were shown among the five hospitals in the treatment of gastric, colon and rectal cancers. After providing the 2007 results to the participating hospitals and organizing several brainstorming discussions, significant improvements were observed in the 2008 data study. The benchmark analysis of clinical data is extremely useful in promoting more standardized care and thus in improving the quality of cancer treatment in Japan. By repeating the benchmark analyses, we can offer truly clinical-evidence-based, higher-quality standardized cancer treatment to our patients.
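
    The core of such a benchmark — comparing, for example, post-operative length of stay across hospitals from case-level data — reduces to grouped summary statistics. A minimal pandas sketch with invented figures, not DPC data:

        import pandas as pd

        # Case-level records: hospital and post-operative length of stay
        # in days. The figures are invented for illustration.
        cases = pd.DataFrame({
            "hospital": ["A", "A", "B", "B", "C", "C", "C"],
            "post_op_los": [12, 15, 9, 11, 14, 18, 16],
        })

        benchmark = (cases.groupby("hospital")["post_op_los"]
                     .agg(["count", "median", "mean"])
                     .sort_values("median"))
        print(benchmark)   # hospital B is the shortest-stay reference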

  16. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    Science.gov (United States)

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…
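
    The central "what if" lesson — that a fixed, small effect crosses the conventional significance threshold once the sample is large enough — takes only a few lines; here sketched in Python with SciPy rather than EXCEL or "R":

        from math import sqrt
        from scipy import stats

        d = 0.2   # fixed small standardized effect size (Cohen's d)
        for n in (10, 50, 100, 500, 1000):
            t = d * sqrt(n)                    # one-sample t statistic for d
            p = 2 * stats.t.sf(t, df=n - 1)    # two-sided p-value
            print(f"n = {n:5d}  ->  p = {p:.4f}")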

  17. Emergency School Aid Act (ESAA) Evaluation: Results of Supplemental Analyses Conducted in the Contract Extension Period.

    Science.gov (United States)

    Carriere, Ronald A.; And Others

    This report focuses on a set of supplemental analyses that were performed on portions of the Emergency School Aid Act (ESAA) evaluation data. The goal of these analyses was to explore additional relationships in the data that might help to inform program policy, to confirm and/or further explicate some of the findings reported earlier, and to put…

  18. Continuum beliefs in the stigma process regarding persons with schizophrenia and depression: results of path analyses

    Science.gov (United States)

    Mnich, Eva E.; Angermeyer, Matthias C.; von dem Knesebeck, Olaf

    2016-01-01

    Background: Individuals with mental illness often experience stigmatization and encounter stereotypes such as being dangerous or unpredictable. To further improve measures against psychiatric stigma, it is of importance to understand its components. In this study, we attend to the step of separation between "us" and "them" in the stigma process as conceptualized by Link and Phelan. In using the belief in continuity of mental illness symptoms as a proxy for separation, we explore its associations with stereotypes, emotional responses and desire for social distance in the stigma process. Methods: Analyses are based on a representative survey in Germany. Vignettes with symptoms suggestive of schizophrenia (n = 1,338) or depression (n = 1,316) were presented to the respondents, followed by questions on continuum belief, stereotypes, emotional reactions and desire for social distance. To examine the relationship between these items, path models were computed. Results: Respondents who endorsed the continuum belief tended to show greater prosocial reactions (schizophrenia: 0.07; p < …) and less desire for social distance (schizophrenia: −0.13; p < …). There were no statistically significant relations between stereotypes and continuum beliefs. Discussion: Assumptions regarding continuum beliefs in the stigma process were only partially confirmed. However, there were associations of continuum beliefs with less stigmatizing attitudes toward persons affected by either schizophrenia or depression. Including information on continuity of symptoms, and thus opposing perceived separation, could prove helpful in future anti-stigma campaigns. PMID:27703840

  19. Synthetic analyses of the LAVA experimental results on in-vessel corium retention through gap cooling

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Kyoung Ho; Cho, Young Ro; Koo, Kil Mo; Park, Rae Joon; Kim, Jong Hwan; Kim, Jong Tae; Ha, Kwang Sun; Kim, Sang Baik; Kim, Hee Dong

    2001-03-01

    LAVA(Lower-plenum Arrested Vessel Attack) has been performed to gather proof of gap formation between the debris and lower head vessel and to evaluate the effect of the gap formation on in-vessel cooling. Through the total of 12 tests, the analyses on the melt relocation process, gap formation and the thermal and mechanical behaviors of the vessel were performed. The thermal behaviors of the lower head vessel were affected by the formation of the fragmented particles and melt pool during the melt relocation process depending on mass and composition of melt and subcooling and depth of water. During the melt relocation process 10.0 to 20.0 % of the melt mass was fragmented and also 15.5 to 47.5 % of the thermal energy of the melt was transferred to water. The experimental results address the non-adherence of the debris to the lower head vessel and the consequent gap formation between the debris and the lower head vessel in case there was an internal pressure load across the vessel abreast with the thermal load induced by the thermite melt. The thermal behaviors of the lower head vessel during the cooldown period were mainly affected by the heat removal characteristics through this gap, which were determined by the possibilities of the water ingression into the gap depending on the melt composition of the corium simulant. The enhanced cooling capacity through the gap was distinguished in the Al{sub 2}O{sub 3} melt tests. It could be inferred from the analyses on the heat removal capacity through the gap that the lower head vessel could effectively cooldown via heat removal in the gap governed by counter current flow limits(CCFL) even if 2mm thick gap should form in the 30 kg Al{sub 2}O{sub 3} melt tests, which was also confirmed through the variations of the conduction heat flux in the vessel and rapid cool down of the vessel outer surface in the Al{sub 2}O{sub 3} melt tests. In the case of large melt mass of 70 kg Al{sub 2}O{sub 3} melt, however, the infinite

  20. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...
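    A minimal sketch of the DEA machinery behind such a service: the fragment below computes input-oriented CCR efficiency scores by linear programming with scipy. The data layout, function name and model variant are illustrative assumptions, not details taken from the record.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency score for every unit.

    X: (n_units, n_inputs), Y: (n_units, n_outputs), all nonnegative.
    For unit k, solve: min theta s.t. X^T lam <= theta * x_k,
    Y^T lam >= y_k, lam >= 0. theta = 1 means the unit lies on the
    efficiency frontier; theta < 1 is its proportional input-saving
    potential, i.e. the kind of improvement potential a user would see.
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for k in range(n):
        c = np.r_[1.0, np.zeros(n)]                   # minimize theta
        A_in = np.hstack([-X[k].reshape(m, 1), X.T])  # X^T lam - theta*x_k <= 0
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])   # -(Y^T lam) <= -y_k
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[k]],
                      bounds=[(None, None)] + [(0.0, None)] * n,
                      method="highs")
        scores.append(res.x[0])
    return np.array(scores)
```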

  1. Thermal Performance Benchmarking (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  2. Handleiding benchmark VO

    NARCIS (Netherlands)

    Blank, j.l.t.

    2008-01-01

    Research report, 25 November 2008, by J.L.T. Blank (IPSE Studies, Faculty of Technology, Policy and Management). A guide to reading the i

  3. Benchmark af erhvervsuddannelserne

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    In this working paper we discuss how the Danish vocational schools can be benchmarked, and we present the results of a number of calculation models. Benchmarking the vocational schools is conceptually complicated: the schools offer a wide range of different programmes, which makes it difficult...

  4. Benchmarking af kommunernes sagsbehandling

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007, the National Social Appeals Board (Ankestyrelsen) is to carry out benchmarking of the quality of municipal casework. The purpose of the benchmarking is to develop the design of the practice reviews with a view to better follow-up, and to improve municipal casework. This working paper discusses methods for benchmarking...

  5. An Effective Approach for Benchmarking Implementation

    Directory of Open Access Journals (Sweden)

    B. M. Deros

    2011-01-01

    Full Text Available Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually to increase their competitiveness in the market. The study begins with a literature review on benchmarking definitions, barriers to and advantages of implementation, and benchmarking frameworks. Approach: Thirty respondents were involved in the case study. They comprised industrial practitioners who assessed the usability and practicability of the guideline, conceptual framework and computerized mini program. Results: A guideline and template were proposed to simplify the adoption of benchmarking techniques. A conceptual framework was proposed by integrating Deming's PDCA and Six Sigma DMAIC theory. It provided a step-by-step method to simplify implementation and to optimize the benchmarking results. A computerized mini program was suggested to assist users in adopting the technique as part of an improvement project. As a result of the assessment test, the respondents found that the implementation method gave companies an idea of how to initiate benchmarking and guided them toward achieving the desired goal as set in a benchmarking project. Conclusion: The results obtained and discussed in this study can be applied to implementing benchmarking in a more systematic way and ensuring its success.

  6. Machines are benchmarked by code, not algorithms

    NARCIS (Netherlands)

    Poss, R.

    2013-01-01

    This article highlights how small modifications to either the source code of a benchmark program or the compilation options may impact its behavior on a specific machine. It argues that for evaluating machines, benchmark providers and users must be careful to ensure reproducibility of results based on th

  7. Methodology for Benchmarking IPsec Gateways

    Directory of Open Access Journals (Sweden)

    Adam Tisovský

    2012-08-01

    Full Text Available The paper analyses the forwarding performance of an IPsec gateway over the range of offered loads. It focuses on the forwarding rate and packet loss, particularly at the gateway's performance peak and in the state of gateway overload. It explains the possible performance degradation when the gateway is overloaded by excessive offered load. The paper further evaluates different approaches for obtaining forwarding performance parameters: the widely used throughput described in RFC 1242, the maximum forwarding rate with zero packet loss, and our proposed equilibrium throughput. According to our observations, equilibrium throughput may be the most universal parameter for benchmarking security gateways, as the others can depend on the duration of the test trials. Employing equilibrium throughput would also greatly shorten the time required for benchmarking. Lastly, the paper presents a methodology and a hybrid step/binary search algorithm for obtaining the value of equilibrium throughput.
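    The record names a hybrid step/binary search but does not reproduce it. The sketch below shows one plausible shape for such a search, assuming a measure(load) callback that returns the observed forwarding rate at a given offered load; the function name, arguments and the sustained-load criterion are illustrative assumptions, not the paper's exact procedure.

```python
def equilibrium_throughput(measure, ceiling, step, tol=1.0):
    """Hybrid step/binary search for an equilibrium throughput estimate.

    measure(load) -> observed forwarding rate at offered load `load`
    (e.g., packets per second). A load counts as 'sustained' if the
    forwarding rate stays within `tol` of the offered load.
    """
    def sustained(load):
        return measure(load) >= load - tol

    # Phase 1: coarse linear steps until the gateway stops keeping up.
    lo, hi = 0.0, None
    load = step
    while load <= ceiling:
        if sustained(load):
            lo = load
            load += step
        else:
            hi = load
            break
    if hi is None:
        return lo  # never overloaded within the tested range

    # Phase 2: binary search between last sustained and first failing load.
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if sustained(mid):
            lo = mid
        else:
            hi = mid
    return lo
```

    The coarse phase brackets the overload point quickly; the binary phase then narrows the bracket, which is what shortens total test time compared with fine-grained stepping throughout.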

  8. Engine Benchmarking - Final CRADA Report

    Energy Technology Data Exchange (ETDEWEB)

    Wallner, Thomas [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-01

    Detailed benchmarking of the powertrains of three light-duty vehicles was performed. Results were presented and provided to CRADA partners. The vehicles included a MY2011 Audi A4, a MY2012 Mini Cooper and a MY2014 Nissan Versa.

  9. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views...... are put to the test. The first is a reformist benchmarking cycle where organisations defer to experts to create a benchmark that conforms with the broader system of politico-economic norms. The second is a revolutionary benchmarking cycle driven by expert-activists that seek to contest strong vested...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  10. Posatirelin for the treatment of degenerative and vascular dementia: results of explanatory and pragmatic efficacy analyses.

    Science.gov (United States)

    Gasbarrini, G; Stefanini, G; Addolorato, G; Foschi, F; Ricci, C; Bertolotti, P; Voltolini, G; Bonavita, E; Bertoncelli, R; Renzi, G; Bianchini, G; Bonaiuto, S; Giannandrea, E; Cavassini, G; Mazzini, V; Chioma, V; Marzara, G; D'Addetta, G; Totaro, G; Dalmonte, E; Tassini, D; Giungi, F; De Nitto, C; Di Fazio, G; Tessitore, A; Guadagnino, M; Tessitore, E; Spina, P; Luppi, M; Bignamini, A; Peracino, L; Fiorentino, M; Beun-Garbe, D; Poli, A; Ambrosoli, L; Girardello, R

    1998-01-01

    In order to confirm the efficacy and safety of posatirelin (L-pyro-2-aminoadipyl-L-leucyl-L-prolinamide), a synthetic peptide having cholinergic, catecholaminergic and neurotrophic activities, a multicentre, double-blind, controlled study versus placebo was planned in elderly patients suffering from Alzheimer's disease and vascular dementia, according to National Institute of Neurological and Communicative Disorders and Stroke/Alzheimer's Disease and Related Disorders Association (NINCDS-ADRDA) and National Institute of Neurological Disorders and Stroke/Association Internationale pour la Recherche et l'Enseignement en Neurosciences (NINDS-AIREN) criteria, respectively. The trial consisted of a 2-week run-in phase with placebo administered once a day orally, followed by a double-blind period of 3 months, with posatirelin or placebo administered once a day intramuscularly. Efficacy was assessed using the Gottfries-Bråne-Steen (GBS) Rating Scale (primary variable) and the Rey Memory Test (secondary variable). Laboratory tests, vital signs and adverse events were monitored. A total of 360 patients were randomized, the intent-to-treat sample (ITT) being made up of 357 patients and the per protocol sample (PP) of 260 patients. Both pragmatic and explanatory analyses showed significant differences between treatment groups in the GBS Rating Scale and the Rey Memory Test, with no difference in the two types of dementia. No difference between treatments was observed in safety variables, the incidence of adverse events in the posatirelin group being 7.3%. The study confirms previous results showing that treatment with posatirelin can improve cognitive and functional abilities of patients suffering from degenerative or vascular dementia.

  11. Benchmarking: A tool to enhance performance

    Energy Technology Data Exchange (ETDEWEB)

    Munro, J.F. [Oak Ridge National Lab., TN (United States); Kristal, J. [USDOE Assistant Secretary for Environmental Management, Washington, DC (United States); Thompson, G.; Johnson, T. [Los Alamos National Lab., NM (United States)

    1996-12-31

    The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered, with Headquarters providing facilitative and integrative functions on an "as needed" basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors in developing the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff -- the ones closest to the work -- must take ownership of the studies. This avoids the "check the box" mentality associated with some third-party studies. This workshop will provide participants with a basic understanding of why the EM benchmarking team was developed and the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team uses to conduct studies. The EM benchmarking team will also encourage discussion on ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This "introduction to benchmarking" is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.

  12. Successful interventions to reduce first-case tardiness in Dutch university medical centers: results of a nationwide operating room benchmark study.

    Science.gov (United States)

    van Veen-Berkx, Elizabeth; Elkhuizen, Sylvia G; Kalkman, Cor J; Buhre, Wolfgang F; Kazemier, Geert

    2014-06-01

    First-case tardiness is still a common source of frustration. In this study, a nationwide operating room (OR) benchmark database was used to assess the effectiveness of interventions implemented to reduce tardiness and to calculate its economic impact. Data from 8 University Medical Centers over 7 years were included: 190,295 elective inpatient first cases. Data were analyzed with SPSS Statistics and in multidisciplinary focus-group study meetings. Analysis of variance with contrast analysis measured the influence of the interventions. Seven thousand ninety-four hours were lost annually to first-case tardiness, which has a considerable economic impact. Four University Medical Centers implemented interventions and achieved a significant reduction in tardiness, e.g., providing feedback directly when ORs started too late, new agreements between OR and intensive care unit departments concerning the "intensive care unit bed release" policy, and a shift in responsibilities regarding transport of patients to the OR. Nationwide benchmarking can be applied to identify and measure the effectiveness of interventions to reduce first-case tardiness in a university hospital OR environment. The interventions implemented in 4 centers were successful in significantly reducing first-case tardiness. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Preventive intervention possibilities in radiotherapy- and chemotherapy-induced oral mucositis : Results of meta-analyses

    NARCIS (Netherlands)

    Stokman, M A; Spijkervet, F K L; Boezen, H M; Schouten, J.P.; Roodenburg, J L N; de Vries, E G E

    2006-01-01

    The aim of these meta-analyses was to evaluate the effectiveness of interventions for the prevention of oral mucositis in cancer patients treated with head and neck radiotherapy and/or chemotherapy, with a focus on randomized clinical trials. A literature search was performed for reports of randomiz

  14. The forest health monitoring national technical reports: examples of analyses and results from 2001-2004

    Science.gov (United States)

    Mark J. Ambrose; Barbara L. Conkling; Kurt H. Riitters; John W. Coulston

    2008-01-01

    This brochure presents examples of analyses included in the first four Forest Health Monitoring (FHM) national technical reports. Its purpose is to introduce the reader to the kinds of information available in these and subsequent FHM national technical reports. Indicators presented here include drought, air pollution, forest fragmentation, and tree mortality. These...

  15. Compilation and analyses of results from cross-hole tracer tests with conservative tracers

    Energy Technology Data Exchange (ETDEWEB)

    Hjerne, Calle; Nordqvist, Rune; Harrstroem, Johan (Geosigma AB (Sweden))

    2010-09-15

    Radionuclide transport in hydrogeological formations is one of the key factors in the safety analysis of a future repository for nuclear waste. Tracer tests have therefore been an important field method within the SKB investigation programmes at several sites since the late 1970s. This report presents a compilation and analysis of results from cross-hole tracer tests with conservative tracers performed within various SKB investigations. The objectives of the study are to facilitate, improve and reduce uncertainties in predictive tracer modelling and to provide supporting information for SKB's safety assessment of a final repository for nuclear waste. More specifically, the focus of the report is the relationship between the tracer mean residence time and fracture hydraulic parameters, i.e. the relationship between mass balance aperture and fracture transmissivity, hydraulic diffusivity and apparent storativity. For 74 different combinations of pumping and injection sections at six different test sites (Studsvik, Stripa, Finnsjoen, Aespoe, Forsmark, Laxemar), estimates of mass balance aperture from cross-hole tracer tests as well as transmissivity were extracted from reports or from the SKB database Sicada. For 28 of these combinations of pumping and injection sections, estimates of hydraulic diffusivity and apparent storativity from hydraulic interference tests were also found. An empirical relationship between mass balance aperture and transmissivity was estimated, although some uncertainties for individual data exist. The empirical relationship between mass balance aperture and transmissivity presented in this study deviates considerably from other previously suggested relationships, such as the cubic law and the transport aperture suggested by /Dershowitz and Klise 2002/, /Dershowitz et al. 2002/ and /Dershowitz et al. 2003/, which is also discussed in this report. No clear and direct empirical relationship between mass balance aperture and hydraulic
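    For context, the cubic law mentioned above relates fracture transmissivity T to hydraulic aperture b via T = ρgb³/(12μ), so an aperture can be back-computed from a measured transmissivity. A small sketch with standard water properties; this is the textbook relationship, not the report's empirical fit:

```python
def cubic_law_aperture(T, mu=1.0e-3, rho=1000.0, g=9.81):
    """Hydraulic aperture b [m] from fracture transmissivity T [m^2/s]
    via the cubic law T = rho * g * b**3 / (12 * mu).
    mu: dynamic viscosity [Pa s], rho: density [kg/m^3], g: gravity [m/s^2].
    """
    return (12.0 * mu * T / (rho * g)) ** (1.0 / 3.0)

# Example: a transmissivity of 1e-6 m^2/s corresponds to roughly 0.1 mm.
print(f"{cubic_law_aperture(1.0e-6) * 1000:.3f} mm")
```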

  16. Benchmarking expert system tools

    Science.gov (United States)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  17. Association between Adult Height and Risk of Colorectal, Lung, and Prostate Cancer: Results from Meta-analyses of Prospective Studies and Mendelian Randomization Analyses

    Science.gov (United States)

    Khankari, Nikhil K.; Shu, Xiao-Ou; Wen, Wanqing; Kraft, Peter; Lindström, Sara; Peters, Ulrike; Schildkraut, Joellen; Schumacher, Fredrick; Boffetta, Paolo; Risch, Angela; Bickeböller, Heike; Amos, Christopher I.; Easton, Douglas; Gruber, Stephen B.; Haiman, Christopher A.; Hunter, David J.; Chanock, Stephen J.; Pierce, Brandon L.; Zheng, Wei

    2016-01-01

    Background Observational studies examining associations between adult height and risk of colorectal, prostate, and lung cancers have generated mixed results. We conducted meta-analyses using data from prospective cohort studies and further carried out Mendelian randomization analyses, using height-associated genetic variants identified in a genome-wide association study (GWAS), to evaluate the association of adult height with these cancers. Methods and Findings A systematic review of prospective studies was conducted using the PubMed, Embase, and Web of Science databases. Using meta-analyses, results obtained from 62 studies were summarized for the association of a 10-cm increase in height with cancer risk. Mendelian randomization analyses were conducted using summary statistics obtained for 423 genetic variants identified from a recent GWAS of adult height and from a cancer genetics consortium study of multiple cancers that included 47,800 cases and 81,353 controls. For a 10-cm increase in height, the summary relative risks derived from the meta-analyses of prospective studies were 1.12 (95% CI 1.10, 1.15), 1.07 (95% CI 1.05, 1.10), and 1.06 (95% CI 1.02, 1.11) for colorectal, prostate, and lung cancers, respectively. Mendelian randomization analyses showed increased risks of colorectal (odds ratio [OR] = 1.58, 95% CI 1.14, 2.18) and lung cancer (OR = 1.10, 95% CI 1.00, 1.22) associated with each 10-cm increase in genetically predicted height. No association was observed for prostate cancer (OR = 1.03, 95% CI 0.92, 1.15). Our meta-analysis was limited to published studies. The sample size for the Mendelian randomization analysis of colorectal cancer was relatively small, thus affecting the precision of the point estimate. Conclusions Our study provides evidence for a potential causal association of adult height with the risk of colorectal and lung cancers and suggests that certain genetic factors and biological pathways affecting adult height may also affect the
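    As a sketch of the summary-statistics approach used in such analyses, an inverse-variance-weighted (IVW) Mendelian randomization estimate combines per-variant effects on the exposure (height) and on the outcome (cancer). The function below is illustrative only; it assumes independent variants and is not the consortium's actual pipeline.

```python
import numpy as np

def ivw_mr(beta_exposure, beta_outcome, se_outcome):
    """Inverse-variance-weighted Mendelian randomization estimate.

    beta_exposure: per-allele effects of each variant on the exposure
    beta_outcome:  per-allele effects of the same variants on the outcome
                   (e.g., log odds ratios for cancer)
    se_outcome:    standard errors of beta_outcome
    Returns the causal effect estimate and its standard error.
    """
    bx, by, se = map(np.asarray, (beta_exposure, beta_outcome, se_outcome))
    w = bx ** 2 / se ** 2                       # inverse-variance weights
    est = np.sum(bx * by / se ** 2) / np.sum(w)
    est_se = np.sqrt(1.0 / np.sum(w))
    return est, est_se
```

    Scaling the resulting estimate to a 10-cm height increment and exponentiating the log odds ratio gives numbers comparable to the ORs quoted in the record.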

  18. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This data compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings....

  19. GeodeticBenchmark_GEOMON

    Data.gov (United States)

    Vermont Center for Geographic Information — The GeodeticBenchmark_GEOMON data layer consists of geodetic control monuments (points) that have a known position or spatial reference. The locations of these...

  20. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  1. On Big Data Benchmarking

    OpenAIRE

    Han, Rui; Lu, Xiaoyi

    2014-01-01

    Big data systems address the challenges of capturing, storing, managing, analyzing, and visualizing big data. Within this context, developing benchmarks to evaluate and compare big data systems has become an active topic for both research and industry communities. To date, most of the state-of-the-art big data benchmarks are designed for specific types of systems. Based on our experience, however, we argue that considering the complexity, diversity, and rapid evolution of big data systems, fo...

  2. Benchmarking in Foodservice Operations.

    Science.gov (United States)

    2007-11-02

    Benchmarking studies lasted from nine to twelve months, and could extend beyond that time for numerous reasons (49). Benchmarking was not industrial tourism, not simply data comparison, a fad, a means for reducing resources, or a quick-fix program; it was a complete process.

  3. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    2005-01-01

    The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specifically, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking on a real software project and conclude with a glimpse at the challenges for the fu...

  4. Multitemporal satellite data analyses for archaeological mark detection: preliminary results in Italy and Argentina

    Science.gov (United States)

    Lasaponara, Rosa; Masini, Nicola

    2014-05-01

    within the Basilicata and Puglia Regions, and in southern Patagonia and the Payunia-Campos Volcánicos Llancanelo and Payún Matrú, in Italy and Argentina respectively. We focused our attention on diverse surfaces and soil types in different periods of the year in order to assess the capabilities of both optical and radar data to detect archaeological marks in different ecosystems and seasons. We investigated not only crop cultures during the "favourable vegetative period", to enhance the presence of subsurface remains, but also the "spectral response" of spontaneous, sparse herbaceous cover during periods considered and expected to be less favourable for this type of investigation (for example summer and winter). The most interesting result was the capability of radar (COSMO-SkyMed) and multispectral optical satellite data (Pléiades, QuickBird, GeoEye) to highlight the presence of structures below the surface, even (i) during periods of the year generally considered not suitable for crop-mark investigations and (ii) in areas covered only by sparse, spontaneous herbaceous plants, in several test sites investigated in both the Argentinian and Italian areas of interest. Preliminary results obtained at both the Italian and Argentinian sites pointed out that Earth Observation (EO) technology can be successfully used for extracting useful information on traces of past human activities still fossilized in the modern landscape, in different ecosystems and seasons. Moreover, multitemporal analyses of satellite data can be fruitfully applied to: (i) improve knowledge, (ii) support monitoring of natural and cultural sites, and (iii) assess natural and man-made risks, including emerging threats to heritage sites. References: Lasaponara R, N Masini 2009 Full-waveform Airborne Laser Scanning for the detection of medieval archaeological microtopographic relief. Journal of Cultural Heritage 10, e78-e82. Ciminale M, D Gallo, R Lasaponara, N Masini 2009 A multiscale approach for reconstructing archaeological

  5. Relativistic and non-relativistic LDA, benchmark results and investigation on the dimers Cu{sub 2}, Ag{sub 2}, Au{sub 2}, Rg{sub 2}.

    Energy Technology Data Exchange (ETDEWEB)

    Kullie, Ossama [University of Kassel, Department of Natural Science, Institute of Physics (Germany)

    2008-07-01

    Using the two-spinor minimax method combined with finite element methods, accompanied by extrapolation and counterpoise techniques, enables us to obtain highly accurate relativistic results for diatomic molecules. As in our previous work on the (Hartree-) Dirac-Fock-Slater (DFS) functional approximation, we investigate in this work the density functional approximations of the relativistic and nonrelativistic local-density functional, presenting highly accurate benchmark results for chemical properties of the dimers of group 11 (Ib) of the periodic table of elements. The comparison with DFS and with experimental and literature results shows that DFS is better behaved than the other two local functionals.

  6. The value of urban open space: meta-analyses of contingent valuation and hedonic pricing results.

    Science.gov (United States)

    Brander, Luke M; Koetse, Mark J

    2011-10-01

    Urban open space provides a number of valuable services to urban populations, including recreational opportunities, aesthetic enjoyment, environmental functions, and may also be associated with existence values. In separate meta-analyses of the contingent valuation (CV) and hedonic pricing (HP) literature we examine which physical, socio-economic, and study characteristics determine the value of open space. The dependent variable in the CV meta-regression is defined as the value of open space per hectare per year in 2003 US$, and in the HP model as the percentage change in house price for a 10 m decrease in distance to open space. Using a multi-level modelling approach we find in both the CV and HP analyses that there is a positive and significant relationship between the value of urban open space and population density, indicating that scarcity and crowdedness matter, and that the value of open space does not vary significantly with income. Further, urban parks are more highly valued than other types of urban open space (forests, agricultural and undeveloped land) and methodological differences in study design have a large influence on estimated values from both CV and HP. We also find important regional differences in preferences for urban open space, which suggests that the potential for transferring estimated values between regions is likely to be limited. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Preliminary Benchmark Evaluation of Japan’s High Temperature Engineering Test Reactor

    Energy Technology Data Exchange (ETDEWEB)

    John Darrell Bess

    2009-05-01

    A benchmark model of the initial fully-loaded start-up core critical of Japan’s High Temperature Engineering Test Reactor (HTTR) was developed to provide data in support of ongoing validation efforts of the Very High Temperature Reactor Program using publicly available resources. The HTTR is a 30 MWt test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. The benchmark was modeled using MCNP5 with various neutron cross-section libraries. An uncertainty evaluation was performed by perturbing the benchmark model and comparing the resultant eigenvalues. The calculated eigenvalues are approximately 2-3% greater than expected with an uncertainty of ±0.70%. The primary sources of uncertainty are the impurities in the core and reflector graphite. The release of additional HTTR data could effectively reduce the benchmark model uncertainties and bias. Sensitivity of the results to the graphite impurity content might imply that further evaluation of the graphite content could significantly improve calculated results. Proper characterization of graphite for future Next Generation Nuclear Power reactor designs will improve computational modeling capabilities. Current benchmarking activities include evaluation of the annular HTTR cores and assessment of the remaining start-up core physics experiments, including reactivity effects, reactivity coefficient, and reaction-rate distribution measurements. Long term benchmarking goals might include analyses of the hot zero-power critical, rise-to-power tests, and other irradiation, safety, and technical evaluations performed with the HTTR.

  8. Taxonomy and Phylogeny Can Yield Comparable Results in Comparative Paleontological Analyses.

    Science.gov (United States)

    Soul, Laura C; Friedman, Matt

    2015-07-01

    Many extinct taxa with extensive fossil records and mature taxonomic classifications have not yet been the subject of formal phylogenetic analysis. Here, we test whether the taxonomies available for such groups represent useful (i.e., non-misleading) substitutes for trees derived from matrix-based phylogenetic analyses. We collected data for 52 animal clades that included fossil representatives, and for which a recent cladogram and pre-cladistic taxonomy were available. We quantified the difference between the time-scaled phylogenies implied by taxonomies and cladograms using the matching cluster distance metric. We simulated phenotypic trait values and used them to estimate a series of commonly used, phylogenetically explicit measures (phylogenetic signal [Blomberg's K]

  9. The number and choice of muscles impact the results of muscle synergy analyses

    Directory of Open Access Journals (Sweden)

    Katherine Muterspaugh Steele

    2013-08-01

    Full Text Available One theory for how humans control movement is that muscles are activated in weighted groups or synergies. Studies have shown that electromyography (EMG) from a variety of tasks can be described by a low-dimensional space thought to reflect synergies. These studies use algorithms, such as nonnegative matrix factorization, to identify synergies from EMG. Due to experimental constraints, EMG can rarely be taken from all muscles involved in a task. However, it is unclear if the choice of muscles included in the analysis impacts estimated synergies. The aim of our study was to evaluate the impact of the number and choice of muscles on synergy analyses. We used a musculoskeletal model to calculate muscle activations required to perform an isometric upper-extremity task. Synergies calculated from the activations from the musculoskeletal model were similar to a prior experimental study. To evaluate the impact of the number of muscles included in the analysis, we randomly selected subsets of between 5 and 29 muscles and compared the similarity of the synergies calculated from each subset to a master set of synergies calculated from all muscles. We determined that the structure of synergies is dependent upon the number and choice of muscles included in the analysis. When five muscles were included in the analysis, the similarity of the synergies to the master set was only 0.57 ± 0.54; however, the similarity improved to over 0.8 with more than ten muscles. We identified two methods, selecting dominant muscles from the master set or selecting muscles with the largest maximum isometric force, which significantly improved similarity to the master set and can help guide future experimental design. Analyses that included a small subset of muscles also over-estimated the variance accounted for (VAF) by the synergies compared to an analysis with all muscles. Thus, researchers should use caution using VAF to evaluate synergies when EMG is measured from a small
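    To make the extraction step concrete, here is a minimal sketch of synergy identification via nonnegative matrix factorization with scikit-learn. The random matrix stands in for a real muscle-by-time EMG recording, and the choice of four synergies is arbitrary:

```python
import numpy as np
from sklearn.decomposition import NMF

# EMG matrix: rows = muscles, columns = time samples (non-negative).
# A hypothetical 10-muscle recording; real data would come from experiment.
rng = np.random.default_rng(0)
emg = np.abs(rng.standard_normal((10, 500)))

n_synergies = 4
model = NMF(n_components=n_synergies, init="nndsvd", max_iter=1000)
W = model.fit_transform(emg)   # muscle weightings, shape (10, 4)
H = model.components_          # synergy activation coefficients, shape (4, 500)

# Variance accounted for (VAF) by the synergy reconstruction
recon = W @ H
vaf = 1.0 - np.sum((emg - recon) ** 2) / np.sum(emg ** 2)
print(f"VAF with {n_synergies} synergies: {vaf:.3f}")
```

    Rerunning this with rows deleted from `emg` mimics the study's muscle-subset experiment: both W and the VAF change with the number and choice of muscles retained.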

  10. Benchmarking File System Benchmarking: It *IS* Rocket Science

    OpenAIRE

    Seltzer, Margo I.; Tarasov, Vasily; Bhanage, Saumitra; Zadok, Erez

    2011-01-01

    The quality of file system benchmarking has not improved in over a decade of intense research spanning hundreds of publications. Researchers repeatedly use a wide range of poorly designed benchmarks, and in most cases, develop their own ad-hoc benchmarks. Our community lacks a definition of what we want to benchmark in a file system. We propose several dimensions of file system benchmarking and review the wide range of tools and techniques in widespread use. We experimentally show that even t...

  11. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  12. Correlational effect size benchmarks.

    Science.gov (United States)

    Bosco, Frank A; Aguinis, Herman; Singh, Kulraj; Field, James G; Pierce, Charles A

    2015-03-01

    Effect size information is essential for the scientific enterprise and plays an increasingly central role in the scientific process. We extracted 147,328 correlations and developed a hierarchical taxonomy of variables reported in Journal of Applied Psychology and Personnel Psychology from 1980 to 2010 to produce empirical effect size benchmarks at the omnibus level, for 20 common research domains, and for an even finer grained level of generality. Results indicate that the usual interpretation and classification of effect sizes as small, medium, and large bear almost no resemblance to findings in the field, because distributions of effect sizes exhibit tertile partitions at values approximately one-half to one-third those intuited by Cohen (1988). Our results offer information that can be used for research planning and design purposes, such as producing better informed non-nil hypotheses and estimating statistical power and planning sample size accordingly. We also offer information useful for understanding the relative importance of the effect sizes found in a particular study in relationship to others and which research domains have advanced more or less, given that larger effect sizes indicate a better understanding of a phenomenon. Also, our study offers information about research domains for which the investigation of moderating effects may be more fruitful and provide information that is likely to facilitate the implementation of Bayesian analysis. Finally, our study offers information that practitioners can use to evaluate the relative effectiveness of various types of interventions. PsycINFO Database Record (c) 2015 APA, all rights reserved.
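    The tertile partitions reported here are, mechanically, the 33rd and 67th percentiles of the pooled effect size distribution. A toy illustration on made-up data; the distribution below is hypothetical, not the study's:

```python
import numpy as np

# Hypothetical pool of absolute correlations standing in for a literature
# extraction; a right-skewed beta distribution keeps values in [0, 1].
rng = np.random.default_rng(42)
abs_r = rng.beta(1.5, 6.0, size=147_328)

small_cut, large_cut = np.quantile(abs_r, [1 / 3, 2 / 3])
print(f"empirical benchmarks: small < {small_cut:.2f} "
      f"<= medium < {large_cut:.2f} <= large")
```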

  13. A framework of benchmarking land models

    Science.gov (United States)

    Luo, Y. Q.; Randerson, J.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, P.; Dalmonech, D.; Fisher, J.; Fisher, R.; Friedlingstein, P.; Hibbard, K.; Hoffman, F.; Huntzinger, D.; Jones, C. D.; Koven, C.; Lawrence, D.; Li, D. J.; Mahecha, M.; Niu, S. L.; Norby, R.; Piao, S. L.; Qi, X.; Peylin, P.; Prentice, I. C.; Riley, W.; Reichstein, M.; Schwalm, C.; Wang, Y. P.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-02-01

    Land models, which have been developed by the modeling community in the past two decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure and evaluate performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land models. The framework includes (1) targeted aspects of model performance to be evaluated; (2) a set of benchmarks as defined references to test model performance; (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies; and (4) model improvement. Component 4 may or may not be involved in a benchmark analysis but is an ultimate goal of general modeling research. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and the land-surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics across timescales in response to both weather and climate change. Benchmarks that are used to evaluate models generally consist of direct observations, data-model products, and data-derived patterns and relationships. Metrics of measuring mismatches between models and benchmarks may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance for future improvement. Iterations between model evaluation and improvement via benchmarking shall demonstrate progress of land modeling and help establish confidence in land models for their predictions of future states of ecosystems and climate.
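    A toy illustration of components (2) and (3), scoring one modeled variable against a benchmark dataset; the weights, score form and threshold are illustrative assumptions, not metrics prescribed by the framework:

```python
import numpy as np

def benchmark_score(model, obs, threshold=0.5):
    """Score one modeled variable against a benchmark observation series.

    Combines a normalized RMSE and a correlation into a single 0-1 skill
    score and flags whether the model passes an a priori threshold.
    """
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    nrmse = np.sqrt(np.mean((model - obs) ** 2)) / (obs.std() + 1e-12)
    corr = np.corrcoef(model, obs)[0, 1]
    score = 0.5 * np.exp(-nrmse) + 0.5 * max(corr, 0.0)
    return score, score >= threshold
```

    In a full benchmark analysis such per-variable scores would be aggregated across processes, timescales and sites to locate model strengths and deficiencies.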

  14. A framework of benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-02-01

    Full Text Available Land models, which have been developed by the modeling community in the past two decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure and evaluate performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land models. The framework includes (1) targeted aspects of model performance to be evaluated; (2) a set of benchmarks as defined references to test model performance; (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies; and (4) model improvement. Component 4 may or may not be involved in a benchmark analysis but is an ultimate goal of general modeling research. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and the land-surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics across timescales in response to both weather and climate change. Benchmarks that are used to evaluate models generally consist of direct observations, data-model products, and data-derived patterns and relationships. Metrics of measuring mismatches between models and benchmarks may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance for future improvement. Iterations between model evaluation and improvement via benchmarking shall demonstrate progress of land modeling and help establish confidence in land models for their predictions of future states of ecosystems and climate.

  15. A framework for benchmarking land models

    Science.gov (United States)

    Luo, Y. Q.; Randerson, J. T.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, P.; Dalmonech, D.; Fisher, J. B.; Fisher, R.; Friedlingstein, P.; Hibbard, K.; Hoffman, F.; Huntzinger, D.; Jones, C. D.; Koven, C.; Lawrence, D.; Li, D. J.; Mahecha, M.; Niu, S. L.; Norby, R.; Piao, S. L.; Qi, X.; Peylin, P.; Prentice, I. C.; Riley, W.; Reichstein, M.; Schwalm, C.; Wang, Y. P.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-10-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models

  16. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Full Text Available Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties

  17. Kinetic features revealed by top-hat electrostatic analysers: numerical simulations and instrument response results

    Science.gov (United States)

    De Marco, Rossana; Marcucci, Maria Federica; Brienza, Daniele; Bruno, Roberto; Consolini, Giuseppe; Perrone, Denise; Valentini, Francesco; Servidio, Sergio; Stabile, Sara; Pezzi, Oreste; Sorriso-Valvo, Luca; Lavraud, Benoit; De Keyser, Johan; Retinò, Alessandro; Fazakerley, Andrew; Wicks, Robert; Vaivads, Andris; Salatti, Mario; Veltri, Pierluigi

    2017-04-01

    The Turbulence Heating ObserveR (THOR) is the first mission devoted to studying the energization, acceleration and heating of turbulent space plasmas, and is designed to perform field and particle measurements at kinetic scales in different near-Earth regions and in the solar wind. Solar Orbiter (SolO), together with Solar Probe Plus, will provide the first comprehensive remote and in situ measurements that are critical to establish the fundamental physical links between the Sun's dynamic atmosphere and the turbulent solar wind. The fundamental process of turbulent dissipation is mediated by physical mechanisms that occur at a variety of temporal and spatial scales, and most efficiently at kinetic scales. Hybrid Vlasov-Maxwell simulations of solar-wind turbulence show that kinetic effects manifest as particle beams, production of temperature anisotropies and ring-like modulations, and preferential heating of heavy ions. We use a numerical code able to reproduce the response of a typical electrostatic analyser of top-hat type, starting from velocity distribution functions (VDFs) generated by Hybrid Vlasov-Maxwell (HVM) numerical simulations. Here, we show how optimized particle measurements by top-hat analysers can capture the kinetic features injected by turbulence into the VDFs.

  18. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  19. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Full Text Available Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how the project's systematic implementation led to success.

  20. Benchmarking of PR Function in Serbian Companies

    National Research Council Canada - National Science Library

    Nikolić, Milan; Sajfert, Zvonko; Vukonjanski, Jelena

    2009-01-01

    The purpose of this paper is to present methodologies for carrying out benchmarking of the PR function in Serbian companies and to test the practical application of the research results and proposed...

  1. UVES Analyses the Universe: A First Portfolio of Most Promising Results

    Science.gov (United States)

    2000-04-01

    environments, e.g., in the large Milky Way Galaxy vs. the much smaller dwarf galaxies. However, until now it was not possible to test this result by means of a detailed comparison between the well-known properties of stars in the Milky Way and those of stars in other galaxies. Only direct spectral measurements of the metallicity of stars in their host galaxy would allow one to disentangle the similar effects of age and metallicity in the CM diagrams. However, these stars are so faint that their spectra cannot be observed in sufficient detail with 4-m class telescopes. By analysing high-resolution spectra of individual stars, like those that can now be recorded by UVES, it will be possible to obtain accurate metallicity values for the Local Group galaxies. High-resolution spectra also allow measurement of the relative abundances of different elements, thereby providing unique information about the nucleosynthesis processes that dominate different phases of the chemical enrichment in the host galaxy. Caption: ESO PR Photo 09d/00 shows a tiny portion of the spectrum of a giant star (V = 18.2) in the Sagittarius Dwarf galaxy, as recorded by UVES. A total integration time of 3 hours covered the spectral range 480 - 680 nm and provided a S/N ratio of approx. 40 at 600 nm. The figure shows transitions due to FeI, CaI and BaII and illustrates the wealth of different elements that can be accurately measured in an object as faint as this one. UVES has been tested on this crucial issue during the commissioning period. Spectra of unmatched quality were obtained for stars in 3 Local Group galaxies: two giants in the Sagittarius Dwarf galaxy, two supergiant stars in NGC 6822 and a cluster giant star in the Large Magellanic Cloud. The analysis of the two stars in the Sagittarius Dwarf galaxy has now been completed. Abundances were determined for 20 elements: O, Na

  2. PNNL Information Technology Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  3. Benchmarking of Decay Heat Measured Values of ITER Materials Induced by 14 MeV Neutron Activation with Calculated Results by ACAB Activation Code

    Energy Technology Data Exchange (ETDEWEB)

    Tore, C.; Ortego, P.; Rodriguez Rivada, A.

    2014-07-01

    The aim of this paper is a comparison between the calculated and measured decay heat of material samples which were irradiated at the Fusion Neutron Source of JAERI in Japan, with D-T production of 14 MeV neutrons. In the International Thermonuclear Experimental Reactor (ITER), neutron activation of the structural material will result in a source of heat after shutdown of the reactor. The estimation of the decay heat value with qualified codes and nuclear data is an important input for the safety analyses of fusion reactors against loss-of-coolant accidents. When a loss of coolant and/or flow accident happens, plasma-facing components are heated up by decay heat. If the temperature of the components exceeds the allowable temperature, the accident could expand and compromise the integrity of ITER. Uncertainties of less than 15% in decay heat prediction are strongly requested by the ITER designers. Additionally, accurate decay heat prediction is required for making reasonable shutdown scenarios for ITER. (Author)

  4. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in the literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  5. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question we, in a total of 1728 benchmarking experiments, rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection.
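    As a rough sketch of one of the benchmarked strategies, forward selection around a random forest can be run with scikit-learn's SequentialFeatureSelector. The synthetic data stands in for a QSAR descriptor matrix, and all parameter choices are arbitrary:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SequentialFeatureSelector

# Hypothetical stand-in for a QSAR dataset: 200 compounds, 50 descriptors.
X, y = make_regression(n_samples=200, n_features=50, n_informative=10,
                       random_state=0)

rf = RandomForestRegressor(n_estimators=50, random_state=0)
selector = SequentialFeatureSelector(rf, n_features_to_select=10,
                                     direction="forward", cv=3)
selector.fit(X, y)
print("kept descriptor indices:", selector.get_support().nonzero()[0])
```

    Refitting the forest on the selected columns and comparing cross-validated error against the full-descriptor model is the kind of experiment the study repeats across methods and datasets.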

  6. Results of de-novo and Motif activity analyses - FANTOM5 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data name: Results of de-novo and Motif activity analyses (FANTOM5). Data detail: motif activity near TSS and de-novo motif analysis with HOMER, including significance of the correlations. Data analysis methods: JASPAR motif search and HOMER motif analysis. File location: .../extra/Motifs/; file size: 6.2 GB. Number of data entries: 400 files.

  7. Quasi-3D Waveform Inversion for Velocity Structures and Source Process Analyses Using its Results

    Science.gov (United States)

    Hikima, K.; Koketsu, K.

    2007-12-01

    In this study, we propose an efficient waveform inversion method for 2-D velocity structures; 3-D velocity structures are then constructed by interpolating the results of the 2-D inversions. We apply these methods to a source process study of the 2003 Miyagi-ken Hokubu earthquake. We first construct a velocity model, then determine the source processes of this earthquake sequence using the Green's functions calculated with the resultant 3-D velocity model. We formulate the inversion procedure in a 2-D cross section. In a 2-D problem, an earthquake is forced to be a line source. Therefore, we introduce an approximate transformation from a line source to a point source (Vidale and Helmberger, 1987). We use the 2-D velocity-stress staggered-grid finite difference scheme, so the source representation is somewhat different from the original 'source box method' and we apply additional corrections to the calculated waveforms. The boundary shapes of layers are expressed by connected nodes, and we invert observed waveforms for layer thicknesses at the nodes. We perform 2-D velocity inversions along cross sections that contain a medium-size earthquake and observation points. We assemble the results for many stations and interpolate them to construct the 3-D velocity model. Finally, we calculate waveforms from the target earthquake by the 3-D finite difference method with this velocity model to confirm the validity of the model. We next perform waveform inversions for the source processes of the 2003 Miyagi-ken Hokubu earthquake sequence using the resultant 3-D velocity model. We divide the fault plane into northern and southern subplanes, so that the southern subplane includes the hypocenter of the mainshock and the largest foreshock. The strike directions of the northern and southern subplanes were N-S and NE-SW, respectively. The Green's functions for these source inversions are calculated using the reciprocal theorem. We determine the slip models using the 3-D structure and

  8. Reporting Results from Structural Equation Modeling Analyses in Archives of Scientific Psychology.

    Science.gov (United States)

    Hoyle, Rick H; Isherwood, Jennifer C

    2013-02-01

    Psychological research typically involves the analysis of data (e.g., questionnaire responses, records of behavior) using statistical methods. The description of how those methods are used and the results they produce is a key component of scholarly publications. Despite their importance, these descriptions are not always complete and clear. In order to ensure the completeness and clarity of these descriptions, the Archives of Scientific Psychology requires that authors of manuscripts to be considered for publication adhere to a set of publication standards. Although the current standards cover most of the statistical methods commonly used in psychological research, they do not cover them all. In this manuscript, we propose adjustments to the current standards and the addition of new standards for a statistical method not adequately covered at present: structural equation modeling (SEM). Adherence to the standards we propose would ensure that scholarly publications that report results of data analyzed using SEM are complete and clear.

  9. Uncertainty result of biotic index in analysing the water quality of Cikapundung river catchment area, Bandung

    Science.gov (United States)

    Surtikanti, Hertien Koosbandiah

    2017-05-01

    The Biotic Index was developed in Western countries in response to the need for water quality evaluation. The analysis method is based on the classification of aquatic macrobenthos as bioindicators of clean and polluted water. The aim of this study is to compare analyses of the Cikapundung river using six different biotic indexes: the Shannon-Wiener index, the Belgian Biological Index (BBI), the Family Biotic Index (FBI), the Biological Monitoring Working Party score (BMWP), the Biological Monitoring Working Party-Average Score Per Taxon (BMWP-ASPT), and A Scoring System for Macroinvertebrates in Australian Rivers (SIGNAL). These analyses are compared with the Physical Water Index (CPI), which was developed in Indonesia. The results show that water quality decreases from upstream to downstream of the Cikapundung River. However, based on the CPI analysis, the BMWP-ASPT biotic index is more comprehensive than the other biotic indexes in explaining Cikapundung water quality.
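
    For orientation, the BMWP score and its average score per taxon (ASPT) mentioned above are simple to compute: each scoring macroinvertebrate family contributes a fixed score, BMWP is the sum over families present, and ASPT is the mean over scoring families. The sketch below uses a small excerpt of family scores for illustration; a real assessment would use the full published score table.

      # BMWP and ASPT from observed families (score table excerpt, illustrative).
      BMWP_SCORES = {"Heptageniidae": 10, "Leptophlebiidae": 10,
                     "Hydropsychidae": 5, "Baetidae": 4,
                     "Chironomidae": 2, "Oligochaeta": 1}

      def bmwp_aspt(observed_families):
          scoring = [f for f in observed_families if f in BMWP_SCORES]
          bmwp = sum(BMWP_SCORES[f] for f in scoring)
          aspt = bmwp / len(scoring) if scoring else 0.0
          return bmwp, aspt

      print(bmwp_aspt(["Baetidae", "Chironomidae", "Oligochaeta"]))  # (7, ~2.33)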

  10. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  11. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  12. Preliminary results and analyses of using IGS GPS data to determine global ionospheric TEC

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    Using the spherical harmonic (SH) function model and dual-frequency GPS data from 139 International GPS Service (IGS) stations for July 15, 2000, the global ionospheric total electron content (TEC) is calculated and the basic method is investigated. Preliminary results are reported, and the problems and difficulties to be solved in using GPS data to determine the global ionospheric TEC are discussed.
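
    The TEC observable behind such maps comes from the dispersive ionospheric delay: differencing the pseudoranges on the two GPS frequencies isolates a term proportional to the slant TEC. A minimal sketch of this standard geometry-free combination follows; receiver and satellite code biases and the mapping from slant to vertical TEC (both needed before fitting a spherical harmonic model) are omitted here.

      # Slant TEC (in TECU) from the dual-frequency geometry-free combination.
      F1, F2 = 1575.42e6, 1227.60e6   # GPS L1 and L2 carrier frequencies (Hz)

      def slant_tec(p1_m, p2_m):
          """p1_m, p2_m: L1/L2 pseudoranges in metres (biases ignored here)."""
          k = (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))  # electrons/m^2 per metre
          return k * (p2_m - p1_m) / 1e16                 # 1 TECU = 1e16 el/m^2

      print(round(slant_tec(20_000_000.0, 20_000_005.2), 1))  # ~49.5 TECU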

  13. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions...... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...

  14. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. .It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l

  15. Comparative analyses reveal discrepancies among results of commonly used methods for Anopheles gambiae molecular form identification

    Directory of Open Access Journals (Sweden)

    Pinto João

    2011-08-01

    Full Text Available Abstract Background Anopheles gambiae M and S molecular forms, the major malaria vectors in the Afro-tropical region, are undergoing a process of ecological diversification and adaptive lineage splitting, which is affecting malaria transmission and vector control strategies in West Africa. These two incipient species are defined on the basis of single nucleotide differences in the IGS and ITS regions of multicopy rDNA located on the X-chromosome. A number of PCR and PCR-RFLP approaches based on form-specific SNPs in the IGS region are used for M and S identification. Moreover, a PCR method to detect the M-specific insertion of a short interspersed transposable element (SINE200) has recently been introduced as an alternative identification approach. However, a large-scale comparative analysis of four widely used PCR or PCR-RFLP genotyping methods for M and S identification was never carried out to evaluate whether they could be used interchangeably, as commonly assumed. Results The genotyping of more than 400 A. gambiae specimens from nine African countries, and the sequencing of the IGS-amplicon of 115 of them, highlighted discrepancies among results obtained by the different approaches due to different kinds of biases, which may result in an overestimation of M/S putative hybrids, as follows: (i) incorrect match of M and S specific primers used in the allele-specific PCR approach; (ii) presence of polymorphisms in the recognition sequence of restriction enzymes used in the PCR-RFLP approaches; (iii) incomplete cleavage during the restriction reactions; (iv) presence of different copy numbers of M and S-specific IGS-arrays in single individuals in areas of secondary contact between the two forms. Conclusions The results reveal that the PCR and PCR-RFLP approaches most commonly utilized to identify A. gambiae M and S forms are not fully interchangeable as usually assumed, and highlight limits of the actual definition of the two molecular forms, which might

  16. Water Use in Parabolic Trough Power Plants: Summary Results from WorleyParsons' Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Turchi, C. S.; Wagner, M. J.; Kutscher, C. F.

    2010-12-01

    The National Renewable Energy Laboratory (NREL) contracted with WorleyParsons Group, Inc. to examine the effect of switching from evaporative cooling to alternative cooling systems on a nominal 100-MW parabolic trough concentrating solar power (CSP) plant. WorleyParsons analyzed 13 different cases spanning three different geographic locations (Daggett, California; Las Vegas, Nevada; and Alamosa, Colorado) to assess the performance, cost, and water use impacts of switching from wet to dry or hybrid cooling systems. NREL developed matching cases in its Solar Advisor Model (SAM) for each scenario to allow for hourly modeling and provide a comparison to the WorleyParsons results. Our findings indicate that switching from 100% wet to 100% dry cooling will result in levelized cost of electricity (LCOE) increases of approximately 3% to 8% for parabolic trough plants throughout most of the southwestern United States. In cooler, high-altitude areas like Colorado's San Luis Valley, WorleyParsons estimated the increase at only 2.5%, while SAM predicted a 4.4% difference. In all cases, the transition to dry cooling will reduce water consumption by over 90%. Utility time-of-delivery (TOD) schedules had similar impacts for wet- and dry-cooled plants, suggesting that TOD schedules have a relatively minor effect on the dry-cooling penalty.
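
    The dry-cooling penalty quoted above is a ratio of levelized costs. As a reminder of the mechanics, the sketch below computes LCOE as discounted lifetime cost over discounted lifetime generation and compares a wet- and a dry-cooled plant; all cost and generation figures are invented for illustration and are not from the WorleyParsons or SAM analyses.

      # Illustrative LCOE comparison; all numbers are assumptions.
      def lcoe(capex, annual_opex, annual_mwh, rate=0.07, years=30):
          disc = sum(1.0 / (1.0 + rate) ** t for t in range(1, years + 1))
          return (capex + annual_opex * disc) / (annual_mwh * disc)  # $/MWh

      wet = lcoe(capex=800e6, annual_opex=12.0e6, annual_mwh=420e3)
      dry = lcoe(capex=820e6, annual_opex=12.5e6, annual_mwh=410e3)  # less output
      print(f"dry-cooling LCOE penalty: {100 * (dry / wet - 1):.1f} %")  # ~5 %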

  17. Biosphere analyses for the safety assessment SR-Site - synthesis and summary of results

    Energy Technology Data Exchange (ETDEWEB)

    Saetre, Peter (comp.)

    2010-12-15

    This report summarises nearly 20 biosphere reports and gives a synthesis of the work performed within the SR-Site Biosphere project, i.e. the biosphere part of SR-Site. SR-Site Biosphere provides the main project with dose conversion factors (LDFs), given a unit release rate, for calculation of human doses under different release scenarios, and assesses if a potential release from the repository would have detrimental effects on the environment. The intention of this report is to give sufficient details for an overview of methods, results and major conclusions, with references to the biosphere reports where methods, data and results are presented and discussed in detail. The philosophy of the biosphere assessment was to make estimations of the radiological risk for humans and the environment as realistic as possible, based on the knowledge of present-day conditions at Forsmark and the past and expected future development of the site. This was achieved by using the best available knowledge, understanding and data from extensive site investigations from two sites. When sufficient information was not available, uncertainties were handled cautiously. A systematic identification and evaluation of features and processes that affect transport and accumulation of radionuclides at the site was conducted, and the results were summarised in an interaction matrix. Data and understanding from the site investigation were an integral part of this work, and the interaction matrix underpinned the development of the radionuclide model used in the biosphere assessment. Understanding of the marine, lake and river and terrestrial ecosystems at the site was summarised in a conceptual model, and relevant features and processes have been characterised to capture site-specific parameter values. Detailed investigations of the structure and history of the regolith at the site and simulations of regolith dynamics were used to describe the present-day state at Forsmark and the expected development of

  18. Multiple Frequency Contrast Source Inversion Method for Vertical Electromagnetic Profiling: 2D Simulation Results and Analyses

    Science.gov (United States)

    Li, Jinghe; Song, Linping; Liu, Qing Huo

    2016-02-01

    A simultaneous multiple frequency contrast source inversion (CSI) method is applied to reconstructing hydrocarbon reservoir targets in a complex multilayered medium in two dimensions. It simulates the effects of a salt dome sedimentary formation in the context of reservoir monitoring. In this method, the stabilized biconjugate-gradient fast Fourier transform (BCGS-FFT) algorithm is applied as a fast solver for the 2D volume integral equation in the forward computation. The CSI inversion technique combines the efficient FFT algorithm, to speed up the matrix-vector multiplication, with the stable convergence of the simultaneous multiple frequency CSI in the iteration process. As a result, this method can effectively perform quantitative conductivity image reconstruction for large-scale electromagnetic oil exploration problems, including the vertical electromagnetic profiling (VEP) survey investigated here. A number of numerical examples are presented to validate the effectiveness and capability of the simultaneous multiple frequency CSI method for a limited array view in VEP.

  19. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes...... attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes....

  20. The MCNP6 Analytic Criticality Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Codes Group

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  1. Excavation damage and disturbance in crystalline rock - results from experiments and analyses

    Energy Technology Data Exchange (ETDEWEB)

    Baeckblom, Goeran (Conrox AB, Stockholm (Sweden))

    2008-11-15

    SKB plans to submit the application to site and construct the final repository for spent nuclear fuel in 2010. One important basis for the application is the results of the safety assessments, for which one particular dataset is the axial hydraulic properties along the underground openings, used to calculate the transport resistance for radionuclide transport in the event that the canister is impaired. SKB initiated a project (Zuse) to be run over the period 2007-2009 to: - establish the current knowledge base on excavation damage and disturbance with particular focus on the axial hydraulic properties along the underground openings; - provide a basis for the requirements and compliance criteria for the excavation damaged and disturbed zone; - devise methods and instruments to infer or measure the excavation damage and disturbance at different times during the repository construction and operation before closure; - propose demonstration tests for which the methods are used in situ to qualify appropriate data for use in the safety reports. This report presents the results of the first stage of the Zuse project. Previous major experiments and studies in Canada, Finland, Japan, Sweden and Switzerland on spalling, excavation damage and disturbance were compiled and evaluated to provide the SR-Site report with a defendable database on the properties of the excavation damage and disturbance. In preparation for the SR-Site report, a number of sensitivity studies were conducted in which reasonable ranges of values for spalling and damage were selected in combination with an impaired backfill. The report describes the construction of the repository in eleven steps, and for each of these steps the potential evolution of THMCB (Thermal, Hydraulic, Mechanical and Chemical/Biological) processes is reviewed. In this work it was found that descriptions of the chemical and microbiological evolution connected with excavation damage and disturbance were lacking. The preliminary

  4. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    Science.gov (United States)

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into account when selecting such a benchmark. The GCC center for infection control has taken some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable healthcare workers and researchers to obtain more accurate and realistic comparisons.

  5. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) doesbring up the question of supply chain management, but unfortunately, we did not have access to thedatabase. Data from the members of the SCOR-model, in the form of benchmarked performance data,may exist, but are nonetheless...

  6. Benchmarks: WICHE Region 2012

    Science.gov (United States)

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  7. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  8. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means the revealed performance (how well the firm performs in the actual market environment) given the basic characteristics of the firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality, and work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for management to continuously improve their firm's efficiency and effectiveness, and the need for managers to know the success factors and competitiveness determinants, determine what performance measures are most critical in assessing their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent due to operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other types of profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark performance.

  9. Benchmarking ENDF/B-VII.0

    Science.gov (United States)

    van der Marck, Steven C.

    2006-12-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). For testing delayed neutron data more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257

  10. Overview and Discussion of the OECD/NRC Benchmark Based on NUPEC PWR Subchannel and Bundle Tests

    Directory of Open Access Journals (Sweden)

    M. Avramova

    2013-01-01

    Full Text Available The Pennsylvania State University (PSU), under the sponsorship of the US Nuclear Regulatory Commission (NRC), has prepared, organized, conducted, and summarized the Organisation for Economic Co-operation and Development/US Nuclear Regulatory Commission (OECD/NRC) benchmark based on the Nuclear Power Engineering Corporation (NUPEC) pressurized water reactor (PWR) subchannel and bundle tests (PSBTs). The international benchmark activities have been conducted in cooperation with the Nuclear Energy Agency (NEA) of the OECD and the Japan Nuclear Energy Safety Organization (JNES), Japan. The OECD/NRC PSBT benchmark was organized to provide a test bed for assessing the capabilities of various thermal-hydraulic subchannel, system, and computational fluid dynamics (CFD) codes. The benchmark was designed to systematically assess and compare the participants' numerical models for prediction of detailed subchannel void distribution and departure from nucleate boiling (DNB), under steady-state and transient conditions, against full-scale experimental data. This paper provides an overview of the objectives of the benchmark along with a definition of the benchmark phases and exercises. The NUPEC PWR PSBT facility and the specific methods used in the void distribution measurements are discussed, followed by a summary of comparative analyses of submitted final results for the exercises of the two benchmark phases.

  11. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600

  12. Meta-Analyses of Human Cell-Based Cardiac Regeneration Therapies: Controversies in Meta-Analyses Results on Cardiac Cell-Based Regenerative Studies.

    Science.gov (United States)

    Gyöngyösi, Mariann; Wojakowski, Wojciech; Navarese, Eliano P; Moyé, Lemuel A

    2016-04-15

    In contrast to multiple publication-based meta-analyses involving clinical cardiac regeneration therapy in patients with recent myocardial infarction, a recently published meta-analysis based on individual patient data reported no effect of cell therapy on left ventricular function or clinical outcome. A comprehensive review of the data collection, statistics, and the overall principles of meta-analyses provides further clarification and explanation for this controversy. The advantages and pitfalls of different types of meta-analyses are reviewed here. Each meta-analysis approach has a place when pivotal clinical trials are lacking and sheds light on the magnitude of the treatment in a complex healthcare field.

  13. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  14. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article, we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then describe in more detail what benchmarking is, taking four different applications of benchmarking as a starting point. The regulation of utility companies is then treated, after which

  15. Benchmarking of LSTM Networks

    OpenAIRE

    Breuel, Thomas M.

    2015-01-01

    LSTM (Long Short-Term Memory) recurrent neural networks have been highly successful in a number of application areas. This technical report describes the use of the MNIST and UW3 databases for benchmarking LSTM networks and explores the effect of different architectural and hyperparameter choices on performance. Significant findings include: (1) LSTM performance depends smoothly on learning rates, (2) batching and momentum has no significant effect on performance, (3) softmax training outperf...

  16. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating, and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no-slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results.

  17. Full dimensional (15-dimensional) quantum-dynamical simulation of the protonated water-dimer III: Mixed Jacobi-valence parametrization and benchmark results for the zero point energy, vibrationally excited states, and infrared spectrum.

    Science.gov (United States)

    Vendrell, Oriol; Brill, Michael; Gatti, Fabien; Lauvergnat, David; Meyer, Hans-Dieter

    2009-06-21

    Quantum dynamical calculations are reported for the zero point energy, several low-lying vibrational states, and the infrared spectrum of the H(5)O(2)(+) cation. The calculations are performed by the multiconfiguration time-dependent Hartree (MCTDH) method. A new vector parametrization based on a mixed Jacobi-valence description of the system is presented. With this parametrization the potential energy surface coupling is reduced with respect to a full Jacobi description, providing a better convergence of the n-mode representation of the potential. However, new coupling terms appear in the kinetic energy operator. These terms are derived and discussed. A mode-combination scheme based on six combined coordinates is used, and the representation of the 15-dimensional potential in terms of a six-combined mode cluster expansion including up to some 7-dimensional grids is discussed. A statistical analysis of the accuracy of the n-mode representation of the potential at all orders is performed. Benchmark, fully converged results are reported for the zero point energy, which lie within the statistical uncertainty of the reference diffusion Monte Carlo result for this system. Some low-lying vibrationally excited eigenstates are computed by block improved relaxation, illustrating the applicability of the approach to large systems. Benchmark calculations of the linear infrared spectrum are provided, and convergence with increasing size of the time-dependent basis and as a function of the order of the n-mode representation is studied. The calculations presented here make use of recent developments in the parallel version of the MCTDH code, which are briefly discussed. We also show that the infrared spectrum can be computed, to a very good approximation, within D(2d) symmetry, instead of the G(16) symmetry used before, in which the complete rotation of one water molecule with respect to the other is allowed, thus simplifying the dynamical problem.

  18. Results of hair analyses for drugs of abuse and comparison with self-reports and urine tests.

    Science.gov (United States)

    Musshoff, F; Driever, F; Lachenmeier, K; Lachenmeier, D W; Banger, M; Madea, B

    2006-01-27

    Urine as well as head and pubic hair samples from drug abusers were analysed for opiates, cocaine and its metabolites, amphetamines, methadone and cannabinoids. Urine immunoassay results and the results of hair tests by means of gas chromatography-mass spectrometry were compared to the self-reported data of the patients in an interview protocol. With regard to the study group, opiate abuse was claimed from the majority in self-reports (89%), followed by cannabinoids (55%), cocaine (38%), and methadone (32%). Except for opiates the comparison between self-reported drug use and urinalysis at admission showed a low correlation. In contrast to urinalysis, hair tests revealed consumption in more cases. There was also a good agreement between self-reports of patients taking part in an official methadone maintenance program and urine test results concerning methadone. However, hair test results demonstrated that methadone abuse in general was under-reported by people who did not participate in a substitution program. Comparing self-reports and the results of hair analyses drug use was dramatically under-reported, especially cocaine. Cocaine hair tests appeared to be highly sensitive and specific in identifying past cocaine use even in settings of negative urine tests. In contrast to cocaine, hair lacks sensitivity as a detection agent for cannabinoids and a proof of cannabis use by means of hair analysis should include the sensitive detection of the metabolite THC carboxylic acid in the lower picogram range.

  19. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction.

    Science.gov (United States)

    Puton, Tomasz; Kozlowski, Lukasz P; Rother, Kristian M; Bujnicki, Janusz M

    2013-04-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks.
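
    The accuracy measures in such benchmarks are typically base-pair sensitivity and positive predictive value (PPV) between a predicted and a reference structure. A minimal sketch follows, assuming pseudoknot-free dot-bracket strings; this is the common definition of these measures, not necessarily CompaRNA's exact implementation.

      # Base-pair sensitivity and PPV between two dot-bracket structures.
      def pairs(dot_bracket):
          stack, out = [], set()
          for i, c in enumerate(dot_bracket):
              if c == "(":
                  stack.append(i)
              elif c == ")":
                  out.add((stack.pop(), i))
          return out

      def sensitivity_ppv(reference, predicted):
          ref, pred = pairs(reference), pairs(predicted)
          tp = len(ref & pred)                    # correctly predicted base pairs
          return tp / len(ref), tp / len(pred)    # assumes both sets are non-empty

      print(sensitivity_ppv("((..))", ".(..)."))  # (0.5, 1.0), toy structures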

  20. Remarks on a benchmark nonlinear constrained optimization problem

    Institute of Scientific and Technical Information of China (English)

    Luo Yazhong; Lei Yongjun; Tang Guojin

    2006-01-01

    Remarks on a benchmark nonlinear constrained optimization problem are made. Due to a citation error, two absolutely different results for the benchmark problem were obtained by independent researchers. Parallel simulated annealing using the simplex method is employed in our study to solve the benchmark nonlinear constrained problem with the mistaken formula, and the best-known solution is obtained, whose optimality is verified by the Kuhn-Tucker conditions.

  1. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons of selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations for moving the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal.

  2. Benchmarking Non-Hardware Balance of System (Soft) Costs for U.S. Photovoltaic Systems Using a Data-Driven Analysis from PV Installer Survey Results

    Energy Technology Data Exchange (ETDEWEB)

    Ardani, K.; Barbose, G.; Margolis, R.; Wiser, R.; Feldman, D.; Ong, S.

    2012-11-01

    This report presents results from the first U.S. Department of Energy (DOE) sponsored, bottom-up data-collection and analysis of non-hardware balance-of-system costs--often referred to as 'business process' or 'soft' costs--for residential and commercial photovoltaic (PV) systems.

  3. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators....... The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques....... In this paper, we review the modern foundations for frontier-based regulation and we discuss its actual use in several jurisdictions....
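
    To make the DEA machinery concrete, the sketch below solves the standard input-oriented CCR envelopment problem for one decision-making unit with scipy; this is the textbook formulation, not any particular regulator's model.

      # Input-oriented CCR DEA efficiency for one unit. Decision variables are
      # theta (the efficiency score) followed by the peer weights lambda_j.
      import numpy as np
      from scipy.optimize import linprog

      def ccr_efficiency(X, Y, o):
          """X: (n_units, n_inputs), Y: (n_units, n_outputs), o: unit index."""
          n = X.shape[0]
          c = np.r_[1.0, np.zeros(n)]                  # minimise theta
          # inputs:  sum_j lam_j * x_ij - theta * x_io <= 0
          A_in = np.c_[-X[o][:, None], X.T]
          # outputs: -sum_j lam_j * y_rj <= -y_ro
          A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]
          A = np.vstack([A_in, A_out])
          b = np.r_[np.zeros(X.shape[1]), -Y[o]]
          res = linprog(c, A_ub=A, b_ub=b,
                        bounds=[(None, None)] + [(0, None)] * n)
          return res.fun                               # efficiency score in (0, 1]

      X = np.array([[2.0], [4.0]]); Y = np.array([[1.0], [1.0]])
      print(ccr_efficiency(X, Y, 1))                   # ~0.5: unit 1 is inefficient

    In regulatory practice this core program is augmented with returns-to-scale assumptions, weight restrictions and the outlier screening discussed in the abstract.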

  4. 2001 benchmarking guide.

    Science.gov (United States)

    Hoppszallern, S

    2001-01-01

    Our fifth annual guide to benchmarking under managed care presents data that is a study in market dynamics and adaptation. New this year are financial indicators on HMOs exiting the market and those remaining. Hospital financial ratios and details on department performance are included. The physician group practice numbers show why physicians are scrutinizing capitated payments. Overall, hospitals in markets with high managed care penetration are more successful in managing labor costs and show productivity gains in imaging services, physical therapy and materials management.

  5. Modeling of Phenix End-of-Life control rod withdrawal benchmark with DYN3D SFR version

    Energy Technology Data Exchange (ETDEWEB)

    Nikitin, Evgeny; Fridman, Emil [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Reactor Safety

    2017-06-01

    The reactor dynamics code DYN3D is currently under extension for Sodium cooled Fast Reactor applications. The control rod withdrawal benchmark from the Phenix End-of-Life experiments was selected for verification and validation purposes. This report presents some selected results to demonstrate the feasibility of using DYN3D for steady-state Sodium cooled Fast Reactor analyses.

  6. Benchmarking concentrating photovoltaic systems

    Science.gov (United States)

    Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo

    2010-08-01

    Integral to photovoltaics is the need to provide improved economic viability. To achieve this goal, photovoltaic technology has to be able to harness more light at less cost. A large variety of concentrating photovoltaic concepts has been proposed and pursued. To obtain a detailed profitability analysis, a flexible evaluation is crucial for benchmarking the cost-performance of this variety of concentrating photovoltaic concepts. To save time and capital, a way to estimate the cost-performance of a complete solar energy system is to use computer-aided modeling. In this work a benchmark tool is introduced based on a modular programming concept. The overall implementation is done in MATLAB, whereas the Advanced Systems Analysis Program (ASAP) is used for ray tracing calculations. This allows for a flexible and extendable structuring of all important modules, namely advanced source modeling including time and location dependence, and advanced optical system analysis of various optical designs, to obtain an evaluation of the figure of merit. An important figure of merit, the energy yield for a given photovoltaic system at a geographical position over a specific period, can be calculated.

  7. Entropy-based benchmarking methods

    OpenAIRE

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth preservation method of Causey and Trager (1981) may violate this principle, while its requirements are explicitly taken into account in the proposed entropy-based benchmarking methods. Our illustrati...
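
    For contrast, the reference point mentioned in the abstract, the Denton-type movement preservation idea, can be sketched in a few lines: adjust a quarterly indicator so that annual sums hit the benchmarks while the first differences of the adjustment stay as smooth as possible. The sketch below is the textbook additive first-difference Denton variant, not the authors' entropy-based method.

      # Additive first-difference Denton benchmarking (textbook variant).
      import numpy as np

      def denton_additive(indicator, annual_totals):
          T = len(indicator); Y = len(annual_totals)            # T = 4 * Y quarters
          D = np.eye(T, k=0)[:-1] * -1 + np.eye(T, k=1)[:-1]    # first differences
          A = np.kron(np.eye(Y), np.ones((1, 4)))               # annual-sum constraints
          # KKT system of: min ||D (x - indicator)||^2  s.t.  A x = annual_totals
          K = np.block([[2 * D.T @ D, A.T], [A, np.zeros((Y, Y))]])
          rhs = np.r_[2 * D.T @ D @ indicator, annual_totals]
          return np.linalg.solve(K, rhs)[:T]

      i = np.array([10.0, 10.0, 10.0, 10.0, 11.0, 11.0, 11.0, 11.0])
      print(denton_additive(i, np.array([42.0, 46.0])))  # each quarter shifted by +0.5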

  8. Features and technology of enterprise internal benchmarking

    Directory of Open Access Journals (Sweden)

    A.V. Dubodelova

    2013-06-01

    Full Text Available The aim of the article. The aim of the article is to generalize the characteristics, objectives, and advantages of internal benchmarking. A sequence of stages for internal benchmarking technology is formed, focused on continuous improvement of enterprise processes by implementing existing best practices. The results of the analysis. Business activity of domestic enterprises in a crisis business environment has to focus on the best success factors of their structural units, using standard research assessment of their performance and their innovative experience in practice. A modern method of satisfying those needs is internal benchmarking. According to Bain & Co, internal benchmarking is one of the three most common methods of business management. The features and benefits of internal benchmarking are defined in the article, and the sequence and methodology of implementation of the individual stages of benchmarking technology projects are formulated. The authors define benchmarking as a strategic orientation towards the best achievement by comparing performance and working methods with a standard. It covers the processes of research, organization of production and distribution, and management and marketing methods with respect to reference objects, in order to identify innovative practices and implement them in a particular business. The development of benchmarking at domestic enterprises requires analysis of theoretical foundations and practical experience. Selecting the best experience helps to develop recommendations for its application in practice. It is also essential to classify the types of internal benchmarking, identify their characteristics, study appropriate areas of use, and develop a methodology for implementation. The structure of internal benchmarking objectives includes: promoting research and establishment of minimum acceptable levels of efficiency of processes and activities available at the enterprise; and identification of current problems and areas that need improvement without involvement of foreign experience

  9. HPC Benchmark Suite NMx Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc. (IAI) and the University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  10. COG validation: SINBAD Benchmark Problems

    Energy Technology Data Exchange (ETDEWEB)

    Lent, E M; Sale, K E; Buck, R M; Descalle, M

    2004-02-23

    We validated COG, a 3D Monte Carlo radiation transport code, against experimental data and MCNP4C simulations from the Shielding Integral Benchmark Archive Database (SINBAD) compiled by RSICC. We modeled three experiments: the Osaka Nickel and Aluminum sphere experiments conducted at the OKTAVIAN facility, and the liquid oxygen experiment conducted at the FNS facility. COG results are in good agreement with experimental data and generally within a few % of MCNP results. There are several possible sources of discrepancy between MCNP and COG results: (1) the cross-section database versions are different, MCNP uses ENDF/B-VI 1.1 while COG uses ENDF/B-VI R7, (2) the code implementations are different, and (3) the models may differ slightly. We also limited the use of variance reduction methods when running the COG version of the problems.

  11. Evaluating the statistical conclusion validity of weighted mean results in meta-analysis by analysing funnel graph diagrams.

    Science.gov (United States)

    Elvik, R

    1998-03-01

    The validity of weighted mean results estimated in meta-analysis has been criticized. This paper presents a set of simple statistical and graphical techniques that can be used in meta-analysis to evaluate common points of criticism. The graphical techniques are based on funnel graph diagrams. Problems and techniques for dealing with them that are discussed include: (1) the so-called 'apples and oranges' problem, stating that mean results in meta-analysis tend to gloss over important differences that should be highlighted. A test of the homogeneity of results is described for testing the presence of this problem. If results are highly heterogeneous, a random effects model of meta-analysis is more appropriate than the fixed effects model of analysis. (2) The possible presence of skewness in a sample of results. This can be tested by comparing the mode, median and mean of the results in the sample. (3) The possible presence of more than one mode in a sample of results. This can be tested by forming a frequency distribution of the results and examining the shape of this distribution. (4) The sensitivity of the mean to the possible presence of atypical results (outliers) can be tested by comparing the overall mean to the mean of all results except the one suspected of being atypical. (5) The possible presence of publication bias can be tested by visual inspection of funnel graph diagrams in which data points have been sorted according to statistical significance and direction of effect. (6) The possibility of underestimating the standard error of the mean in meta-analyses by using multiple, correlated results from the same study as the unit of analysis can be addressed by using the jack-knife technique for estimating the uncertainty of the mean. Brief examples, taken from road safety research, are given of all these techniques.
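
    Several of these checks are quick to compute once each study's effect estimate and variance are available. Below is a minimal sketch, assuming inverse-variance (fixed effects) weighting, of the weighted mean, the homogeneity statistic used in point (1), and the jack-knife standard error from point (6); the input numbers are invented for illustration.

      # Fixed-effect weighted mean, Cochran's Q homogeneity statistic, and a
      # jack-knife standard error of the mean.
      import numpy as np

      def fixed_effect(effects, variances):
          w = 1.0 / np.asarray(variances)
          mean = np.sum(w * effects) / np.sum(w)
          q = np.sum(w * (effects - mean) ** 2)   # ~ chi2(k - 1) under homogeneity
          return mean, q

      def jackknife_se(effects, variances):
          k = len(effects)
          loo = np.array([
              fixed_effect(np.delete(effects, i), np.delete(variances, i))[0]
              for i in range(k)])                 # leave-one-out means
          return np.sqrt((k - 1) / k * np.sum((loo - loo.mean()) ** 2))

      e = np.array([0.2, 0.5, 0.1, 0.4]); v = np.array([0.02, 0.05, 0.03, 0.04])
      mean, q = fixed_effect(e, v)
      print(mean, q, jackknife_se(e, v))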

  12. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes....... This makes it difficult to compare the resources used, since some programmes by their nature require more classroom time and equipment than others. It is also far from straightforward to compare college effects with respect to grades, since the various programmes apply very different forms of assessment...

  13. Benchmarking of Heavy Ion Transport Codes

    Energy Technology Data Exchange (ETDEWEB)

    Remec, Igor [ORNL; Ronningen, Reginald M. [Michigan State University, East Lansing; Heilbronn, Lawrence [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in designing and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  14. Physical activity on prescription schemes (PARS): do programme characteristics influence effectiveness? Results of a systematic review and meta-analyses

    Science.gov (United States)

    Arsenijevic, Jelena; Groot, Wim

    2017-01-01

    Background Physical activity on prescription schemes (PARS) are health promotion programmes that have been implemented in various countries. The aim of this study was to outline the differences in the design of PARS in different countries. This study also explored the differences in the adherence rate to PARS and the self-reported level of physical activity between PARS users in different countries. Method A systematic literature review and meta-analyses were conducted. We searched PubMed and EBSCO in July 2015 and updated our search in September 2015. Studies that reported adherence to the programme and self-reported level of physical activity, published in the English language in a peer-reviewed journal since 2000, were included. The difference in the pooled adherence rate after finishing the PARS programme and the adherence rate before or during the PARS programme was 17% (95% CI 9% to 24%). The difference in the pooled physical activity was 0.93 unit score (95% CI −3.57 to 1.71). For the adherence rate, a meta-regression was conducted. Results In total, 37 studies conducted in 11 different countries met the inclusion criteria. Among them, 31 reported the adherence rate, while the level of physical activity was reported in 17 studies. Results from meta-analyses show that PARS had an effect on the adherence rate of physical activity, while the results from the meta-regressions show that programme characteristics such as type of chronic disease and the follow-up period influenced the adherence rate. Conclusions The effects of PARS on adherence and self-reported physical activity were influenced by programme characteristics and also by the design of the study. Future studies on the effectiveness of PARS should use a prospective longitudinal design and combine quantitative and qualitative data. Furthermore, future evaluation studies should distinguish between evaluating the adherence rate and the self-reported physical activity among participants with different

  15. Benchmark duration of work hours for development of fatigue symptoms in Japanese workers with adjustment for job-related stress.

    Science.gov (United States)

    Suwazono, Yasushi; Dochi, Mirei; Kobayashi, Etsuko; Oishi, Mitsuhiro; Okubo, Yasushi; Tanaka, Kumihiko; Sakata, Kouichi

    2008-12-01

    The objective of this study was to calculate benchmark durations and lower 95% confidence limits for benchmark durations of working hours associated with subjective fatigue symptoms by applying the benchmark dose approach while adjusting for job-related stress using multiple logistic regression analyses. A self-administered questionnaire was completed by 3,069 male and 412 female daytime workers (age 18-67 years) in a Japanese steel company. The eight dependent variables in the Cumulative Fatigue Symptoms Index were decreased vitality, general fatigue, physical disorders, irritability, decreased willingness to work, anxiety, depressive feelings, and chronic tiredness. Independent variables were daily working hours, four subscales (job demand, job control, interpersonal relationship, and job suitability) of the Brief Job Stress Questionnaire, and other potential covariates. Using significant parameters for working hours and those for other covariates, the benchmark durations of working hours were calculated for the corresponding Index property. Benchmark response was set at 5% or 10%. Assuming a condition of worst job stress, the benchmark duration/lower 95% confidence limit for benchmark duration of working hours per day with a benchmark response of 5% or 10% were 10.0/9.4 or 11.7/10.7 (irritability) and 9.2/8.9 or 10.4/9.8 (chronic tiredness) in men and 8.9/8.4 or 9.8/8.9 (chronic tiredness) in women. The threshold amounts of working hours for fatigue symptoms under the worst job-related stress were very close to the standard daily working hours in Japan. The results strongly suggest that special attention should be paid to employees whose working hours exceed threshold amounts based on individual levels of job-related stress.
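
    To make the benchmark-dose logic concrete: after fitting a logistic model for a fatigue symptom, the benchmark duration is the number of working hours at which the extra risk over a chosen background reaches the benchmark response (5% or 10%). The sketch below is a hypothetical illustration with simulated data and an assumed 8 h/day background, not the study's actual model or data:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: daily working hours, a job-stress score, and a
# binary fatigue symptom (1 = present). Not the study's actual data.
rng = np.random.default_rng(0)
hours = rng.uniform(7, 14, 500)
stress = rng.normal(0, 1, 500)
p = 1 / (1 + np.exp(-(-6 + 0.4 * hours + 0.5 * stress)))
symptom = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([hours, stress]))
fit = sm.Logit(symptom, X).fit(disp=0)
b0, b_hours, b_stress = fit.params

# Benchmark duration: hours at which the extra risk over the background
# (here taken at 8 h/day and the observed worst stress level) equals BMR.
bmr = 0.05
worst = stress.max()
def risk(h):
    return 1 / (1 + np.exp(-(b0 + b_hours * h + b_stress * worst)))

background = risk(8.0)
target = background + bmr * (1 - background)   # extra-risk definition
bmd = (np.log(target / (1 - target)) - b0 - b_stress * worst) / b_hours
print(f"benchmark duration ~ {bmd:.1f} h/day")
```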

  16. Benchmarking Asteroid-Deflection Experiment

    Science.gov (United States)

    Remington, Tane; Bruck Syal, Megan; Owen, John Michael; Miller, Paul L.

    2016-10-01

    An asteroid impacting Earth could have devastating consequences. In preparation to deflect or disrupt one before it reaches Earth, it is imperative to have modeling capabilities that adequately simulate the deflection actions. Code validation is key to ensuring full confidence in simulation results used in an asteroid-mitigation plan. We are benchmarking well-known impact experiments using Spheral, an adaptive smoothed-particle hydrodynamics code, to validate our modeling of asteroid deflection. We describe our simulation results, compare them with experimental data, and discuss what we have learned from our work. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-695540

  17. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  18. Benchmarking monthly homogenization algorithms

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2011-08-01

    Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
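
    For orientation, the first two metrics named above are straightforward to compute once the true homogeneous series is known, as it is in this benchmark. A minimal sketch:

```python
import numpy as np

def centered_rmse(homogenized, truth):
    """Centered root mean square error: RMSE after removing each
    series' own mean, so constant offsets are not penalized."""
    h = np.asarray(homogenized) - np.mean(homogenized)
    t = np.asarray(truth) - np.mean(truth)
    return np.sqrt(np.mean((h - t) ** 2))

def trend_error(homogenized, truth):
    """Error in the linear trend estimate (slope per time step)."""
    x = np.arange(len(truth))
    return np.polyfit(x, homogenized, 1)[0] - np.polyfit(x, truth, 1)[0]
```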

  19. First results from a novel methodological approach for δ18O analyses of sugars using GC-Py-IRMS

    Science.gov (United States)

    Zech, Michael; Saurer, Matthias; Tuthorn, Mario; Rinne, Katja; Werner, Roland; Juchelka, Dieter; Siegwolf, Rolf; Glaser, Bruno

    2013-04-01

    Although the instrumental coupling of gas chromatography-pyrolysis-isotope ratio mass spectrometry (GC-Py-IRMS) for compound-specific δ18O analyses has been commercially available for more than 10 years, the method has hardly been applied by isotope researchers so far. Using GC-Py-IRMS, Zech and Glaser (2009) and Zech et al. (2013; 2012) developed and applied a method that allows the determination of δ18O in hemicellulose-derived sugar biomarkers extracted from soils and sediments. However, the methylboronic acid (MBA) derivatization used is suitable only for pentoses and deoxyhexoses, not for hexoses. Here we present first GC-Py-IRMS results for TMS-(trimethylsilyl)-derivatives of plant sap-relevant sugars (glucose, fucose, sucrose, raffinose) and a polyalcohol (pinitol) produced using BSTFA (N,O-Bis(trimethylsilyl)trifluoroacetamide) as the derivatization reagent. In particular, we focus on sucrose, which is the most important transport sugar in plants and hence of utmost relevance in plant physiology and in tree-ring studies. Replicate analyses of sucrose standards with known δ18O values suggest that the δ18O measurements are not stable over several days. A calibration (including a drift correction) against an external sucrose standard is hence essential when measuring sample batches. Furthermore, we observed a large dependence of the δ18O values on the analyte amount (area), which needs to be accounted for by a corresponding correction procedure. Tests with 18O-enriched water do not provide any evidence for oxygen exchange reactions between water and sucrose, glucose and raffinose. Finally we present the first application of compound-specific δ18O analyses from natural samples, namely from seven needle extracts (soluble carbohydrates) from a Siberian study area. Both the δ18O amplitude and values of sucrose are considerably higher (32.1‰ to 40.1‰) compared to the δ18O amplitude and values of bulk needle extract (24.6‰ to 27.2‰). We found positive correlation (although

  20. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  1. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  2. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multiple campuses or a…

  3. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth pre
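
    For readers unfamiliar with Denton-type benchmarking, the sketch below shows the classic additive first-difference variant (not the entropy-based or sign-preserving methods this work develops): a quarterly indicator is adjusted so that each year's sum hits the annual benchmark while period-to-period movement is preserved as far as possible. Data are illustrative.

```python
import numpy as np

def denton_additive(indicator, benchmarks, agg=4):
    """First-difference (additive) Denton benchmarking: adjust a
    high-frequency indicator so that each block of `agg` values sums
    to the corresponding benchmark, while disturbing the indicator's
    period-to-period movement as little as possible."""
    i = np.asarray(indicator, float)
    b = np.asarray(benchmarks, float)
    n, m = len(i), len(b)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]      # first-difference matrix
    A = D.T @ D                                   # quadratic penalty on movement changes
    C = np.kron(np.eye(m), np.ones(agg))          # aggregation constraints C @ x = b
    # Solve the KKT system of: min (x-i)' A (x-i)  s.t.  C x = b
    kkt = np.block([[2 * A, C.T], [C, np.zeros((m, m))]])
    rhs = np.concatenate([2 * A @ i, b])
    return np.linalg.solve(kkt, rhs)[:n]

# Illustrative: a quarterly indicator benchmarked to two annual totals.
print(denton_additive([10, 11, 12, 13, 12, 12, 13, 14], [50, 55]))
```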

  4. Teaching Benchmark Strategy for Fifth-Graders in Taiwan

    Science.gov (United States)

    Yang, Der-Ching; Lai, M. L.

    2013-01-01

    The key purpose of this study was to examine how we taught the use of the benchmark strategy when comparing fractions to fifth-graders in Taiwan. 26 fifth-graders from a public elementary school in south Taiwan were selected to join this study. Results of this case study showed that students made much progress in the use of the benchmark strategy when comparing fraction…

  5. Benchmarking Mentoring Practices: A Case Study in Turkey

    Science.gov (United States)

    Hudson, Peter; Usak, Muhammet; Savran-Gencer, Ayse

    2010-01-01

    Throughout the world standards have been developed for teaching in particular key learning areas. These standards also present benchmarks that can assist to measure and compare results from one year to the next. There appears to be no benchmarks for mentoring. An instrument devised to measure mentees' perceptions of their mentoring in primary…

  6. ATOP - The Advanced Taiwan Ocean Prediction System Based on the mpiPOM. Part 1: Model Descriptions, Analyses and Results

    Directory of Open Access Journals (Sweden)

    Leo Oey

    2013-01-01

    Full Text Available A data-assimilated Taiwan Ocean Prediction (ATOP) system is being developed at the National Central University, Taiwan. The model simulates sea-surface height, three-dimensional currents, temperature and salinity and turbulent mixing. The model has options for tracer and particle-tracking algorithms, as well as for wave-induced Stokes drift and wave-enhanced mixing and bottom drag. Two different forecast domains have been tested: a large-grid domain that encompasses the entire North Pacific Ocean at 0.1° × 0.1° horizontal resolution and 41 vertical sigma levels, and a smaller western North Pacific domain which at present also has the same horizontal resolution. In both domains, 25-year spin-up runs from 1988 - 2011 were first conducted, forced by six-hourly Cross-Calibrated Multi-Platform (CCMP) and NCEP reanalysis Global Forecast System (GFS) winds. The results are then used as initial conditions to conduct ocean analyses from January 2012 through February 2012, when updated hindcasts and real-time forecasts begin using the GFS winds. This paper describes the ATOP system and compares the forecast results against satellite altimetry data for assessing model skills. The model results are also shown to compare well with observations of (i) the Kuroshio intrusion in the northern South China Sea, and (ii) the subtropical counter current. Review and comparison with other models in the literature of "(i)" are also given.

  7. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts...... to support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One...... way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent

  8. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    is generally not advised. Several other ways in which benchmarking and policy can support one another are identified in the analysis. This leads to a range of recommended initiatives to exploit the benefits of benchmarking in transport while avoiding some of the lurking pitfalls and dead ends...... In order to learn from the best, in 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable...... tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly ‘sustainable transport...

  9. General benchmarks for quantum repeaters

    CERN Document Server

    Pirandola, Stefano

    2015-01-01

    Using a technique based on quantum teleportation, we simplify the most general adaptive protocols for key distribution, entanglement distillation and quantum communication over a wide class of quantum channels in arbitrary dimension. Thanks to this method, we bound the ultimate rates for secret key generation and quantum communication through single-mode Gaussian channels and several discrete-variable channels. In particular, we derive exact formulas for the two-way assisted capacities of the bosonic quantum-limited amplifier and the dephasing channel in arbitrary dimension, as well as the secret key capacity of the qubit erasure channel. Our results establish the limits of quantum communication with arbitrary systems and set the most general and precise benchmarks for testing quantum repeaters in both discrete- and continuous-variable settings.
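
    For orientation, the closed-form capacities this abstract refers to are widely quoted in the follow-up literature in the following form (a hedged summary; consult the paper itself for the exact statements and conditions):

```latex
% Widely quoted closed forms from this line of work (hedged summary):
%   pure-loss channel (transmissivity eta), quantum-limited amplifier (gain g),
%   qubit dephasing channel (probability p), qubit erasure channel (probability p).
\[
  K_{\mathrm{loss}} = -\log_2(1-\eta), \quad
  K_{\mathrm{amp}}  = -\log_2\!\left(1 - \tfrac{1}{g}\right), \quad
  K_{\mathrm{deph}} = 1 - H_2(p), \quad
  K_{\mathrm{erase}} = 1 - p,
\]
where $H_2(p) = -p\log_2 p - (1-p)\log_2(1-p)$ is the binary entropy.
```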

  10. The Application of the PEBBED Code Suite to the PBMR-400 Coupled Code Benchmark - FY 2006 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    2006-09-01

    This document describes the recent developments of the PEBBED code suite and its application to the PBMR-400 Coupled Code Benchmark. This report addresses an FY2006 Level 2 milestone under the NGNP Design and Evaluation Methods Work Package. The milestone states "Complete a report describing the results of the application of the integrated PEBBED code package to the PBMR-400 coupled code benchmark". The report describes the current state of the PEBBED code suite, provides an overview of the Benchmark problems to which it was applied, discusses the code developments achieved in the past year, and states some of the results attained. Results of the steady state problems generated by the PEBBED fuel management code compare favorably to the preliminary results generated by codes from other participating institutions and to similar non-Benchmark analyses. Partial transient analysis capability has been achieved through the acquisition of the NEM-THERMIX code from Penn State University. Phase I of the task has been achieved through the development of a self-consistent set of tools for generating cross sections for design and transient analysis and in the successful execution of the steady state benchmark exercises.

  11. A new numerical benchmark of a freshwater lens

    Science.gov (United States)

    Stoeckl, L.; Walther, M.; Graf, T.

    2016-04-01

    A numerical benchmark for 2-D variable-density flow and solute transport in a freshwater lens is presented. The benchmark is based on results of laboratory experiments conducted by Stoeckl and Houben (2012) using a sand tank on the meter scale. This benchmark describes the formation and degradation of a freshwater lens over time as it can be found under real-world islands. An error analysis gave the appropriate spatial and temporal discretization of 1 mm and 8.64 s, respectively. The calibrated parameter set was obtained using the parameter estimation tool PEST. Comparing density-coupled and density-uncoupled results showed that the freshwater-saltwater interface position is strongly dependent on density differences. A benchmark that adequately represents saltwater intrusion and that includes realistic features of coastal aquifers or freshwater lenses was lacking. This new benchmark was thus developed and is demonstrated to be suitable to test variable-density groundwater models applied to saltwater intrusion investigations.

  12. Cleanroom Energy Efficiency: Metrics and Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    International SEMATECH Manufacturing Initiative; Mathew, Paul A.; Tschudi, William; Sartor, Dale; Beasley, James

    2010-07-07

    Cleanrooms are among the most energy-intensive types of facilities. This is primarily due to the cleanliness requirements that result in high airflow rates and system static pressures, as well as process requirements that result in high cooling loads. Various studies have shown that there is a wide range of cleanroom energy efficiencies and that facility managers may not be aware of how energy efficient their cleanroom facility can be relative to other cleanroom facilities with the same cleanliness requirements. Metrics and benchmarks are an effective way to compare one facility to another and to track the performance of a given facility over time. This article presents the key metrics and benchmarks that facility managers can use to assess, track, and manage their cleanroom energy efficiency or to set energy efficiency targets for new construction. These include system-level metrics such as air change rates, air handling W/cfm, and filter pressure drops. Operational data are presented from over 20 different cleanrooms that were benchmarked with these metrics and that are part of the cleanroom benchmark dataset maintained by Lawrence Berkeley National Laboratory (LBNL). Overall production efficiency metrics for cleanrooms in 28 semiconductor manufacturing facilities in the United States and recorded in the Fabs21 database are also presented.
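
    The system-level metrics mentioned above reduce to simple ratios; a minimal sketch with illustrative numbers (not LBNL benchmark values):

```python
def air_change_rate(airflow_cfm: float, room_volume_ft3: float) -> float:
    """Air changes per hour (ACH) = airflow (ft^3/min) * 60 / room volume (ft^3)."""
    return airflow_cfm * 60.0 / room_volume_ft3

def fan_power_intensity(fan_power_w: float, airflow_cfm: float) -> float:
    """Air-handling efficiency metric in W per cfm (lower is better)."""
    return fan_power_w / airflow_cfm

# Illustrative numbers only.
print(air_change_rate(airflow_cfm=50_000, room_volume_ft3=60_000))   # 50.0 ACH
print(fan_power_intensity(fan_power_w=30_000, airflow_cfm=50_000))   # 0.6 W/cfm
```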

  13. Benchmarking in academic pharmacy departments.

    Science.gov (United States)

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    Benchmarking in academic pharmacy, and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.

  14. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption [Dutch] Greenpeace Netherlands asked CE Delft to design a sustainability benchmark for transport biofuels and to score the various biofuels against it. For comparison, electric driving, driving on hydrogen and driving on petrol or diesel were also included. Research and growing insight increasingly show that transport fuels based on biomass sometimes cause just as many or even more greenhouse gas emissions than fossil fuels such as petrol and diesel. For Greenpeace Netherlands, CE Delft has summarized the current insights into the sustainability of fossil fuels, biofuels and electric driving. The fuels were assessed on three sustainability criteria, with greenhouse gas emissions weighing heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption.

  15. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    was its software assurance practices, which seemed to rate well in comparison to the other organizational groups and also seemed to include a larger scope of activities. An unexpected benefit of the software benchmarking study was the identification of many opportunities for collaboration in areas including metrics, training, sharing of CMMI experiences and resources such as instructors and CMMI Lead Appraisers, and even sharing of assets such as documented processes. A further unexpected benefit of the study was the feedback on NASA practices that was received from some of the organizations interviewed. From that feedback, other potential areas where NASA could improve were highlighted, such as accuracy of software cost estimation and budgetary practices. The detailed report contains discussion of the practices noted in each of the topic areas, as well as a summary of observations and recommendations from each of the topic areas. The resulting 24 recommendations from the topic areas were then consolidated to eliminate duplication and culled into a set of 14 suggested actionable recommendations. This final set of actionable recommendations, listed below, are items that can be implemented to improve NASA's software engineering practices and to help address many of the items that were listed in the NASA top software engineering issues. 1. Develop and implement standard contract language for software procurements. 2. Advance accurate and trusted software cost estimates for both procured and in-house software and improve the capture of actual cost data to facilitate further improvements. 3. Establish a consistent set of objectives and expectations, specifically types of metrics at the Agency level, so key trends and models can be identified and used to continuously improve software processes and each software development effort. 4. Maintain the CMMI Maturity Level requirement for critical NASA projects and use CMMI to measure organizations developing software for NASA. 5

  16. A benchmark for comparison of dental radiography analysis algorithms.

    Science.gov (United States)

    Wang, Ching-Wei; Huang, Cheng-Ta; Lee, Jia-Hong; Li, Chung-Hsing; Chang, Sheng-Wei; Siao, Ming-Jhih; Lai, Tat-Ming; Ibragimov, Bulat; Vrtovec, Tomaž; Ronneberger, Olaf; Fischer, Philipp; Cootes, Tim F; Lindner, Claudia

    2016-07-01

    Dental radiography plays an important role in clinical diagnosis, treatment and surgery. In recent years, efforts have been made on developing computerized dental X-ray image analysis systems for clinical usages. A novel framework for objective evaluation of automatic dental radiography analysis algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2015 Bitewing Radiography Caries Detection Challenge and Cephalometric X-ray Image Analysis Challenge. In this article, we present the datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of the dental anatomy data repository of bitewing radiographs, the creation of the anatomical abnormality classification data repository of cephalometric radiographs, and the definition of objective quantitative evaluation for comparison and ranking of the algorithms. With this benchmark, seven automatic methods for analysing cephalometric X-ray image and two automatic methods for detecting bitewing radiography caries have been compared, and detailed quantitative evaluation results are presented in this paper. Based on the quantitative evaluation results, we believe automatic dental radiography analysis is still a challenging and unsolved problem. The datasets and the evaluation software will be made available to the research community, further encouraging future developments in this field. (http://www-o.ntust.edu.tw/~cweiwang/ISBI2015/).

  17. Communicating human biomonitoring results to ensure policy coherence with public health recommendations: analysing breastmilk whilst protecting, promoting and supporting breastfeeding

    Directory of Open Access Journals (Sweden)

    Arendt Maryse

    2008-01-01

    Full Text Available Abstract This article addresses the problem of how to ensure consistency in messages communicating public health recommendations on environmental health and on child health. The World Health Organization states that the protection, promotion and support of breastfeeding rank among the most effective interventions to improve child survival. International public health policy recommends exclusive breastfeeding for six months, followed by continued breastfeeding with the addition of safe and adequate complementary foods for two years and beyond. Biomonitoring of breastmilk is used as an indicator of environmental pollution ending up in humans. This article will therefore present the biomonitoring results of concentrations of residues in breastmilk in a wider context. These results are the mirror that reflects the chemical substances accumulated in the bodies of both men and women in the course of a lifetime. The accumulated substances in our bodies may have an effect on male or female reproductive cells; they are present in the womb, directly affecting the environment of the fragile developing foetus; they are also present in breastmilk. Evidence of man-made chemical residues in breastmilk can provide a shock tactic to push for stronger laws to protect the environment. However, messages about chemicals detected in breastmilk can become dramatized by the media and cause a backlash against breastfeeding, thus contradicting the public health messages issued by the World Health Organization. Analyses of breastmilk show the presence of important nutritional components and live protective factors active in building up the immune system, in gastrointestinal maturation, in immune defence and in providing antiviral, antiparasitic and antibacterial activity. Through cohort studies researchers in environmental health have concluded that long-term breastfeeding counterbalances the effect of prenatal exposure to chemicals causing delay in mental and

  18. Interactions among Candidate Genes Selected by Meta-Analyses Resulting in Higher Risk of Ischemic Stroke in a Chinese Population.

    Directory of Open Access Journals (Sweden)

    Man Luo

    Full Text Available Ischemic stroke (IS) is a multifactorial disorder caused by both genetic and environmental factors. The combined effects of multiple susceptibility genes might result in a higher risk for IS than a single gene. Therefore, we investigated whether interactions among multiple susceptibility genes were associated with an increased risk of IS by evaluating gene polymorphisms identified in previous meta-analyses, including methylenetetrahydrofolate reductase (MTHFR) C677T, beta fibrinogen (FGB, β-FG) A455G and T148C, apolipoprotein E (APOE) ε2-4, angiotensin-converting enzyme (ACE) insertion/deletion (I/D), and endothelial nitric oxide synthase (eNOS) G894T. In order to examine these interactions, 712 patients with IS and 774 controls in a Chinese Han population were genotyped using the SNaPshot method, and multifactor dimensionality reduction analysis was used to detect potential interactions among the candidate genes. The results of this study found that ACE I/D and β-FG T148C were significant synergistic contributors to IS. In particular, the ACE DD + β-FG 148CC, ACE DD + β-FG 148CT, and ACE ID + β-FG 148CC genotype combinations resulted in higher risk of IS. After adjusting for potential confounding IS risk factors (age, gender, family history of IS, hypertension history and history of diabetes mellitus) using a logistic analysis, a significant correlation between the genotype combinations and IS patients persisted (overall stroke: adjusted odds ratio [OR] = 1.57, 95% confidence interval [CI]: 1.22-2.02, P < 0.001; large artery atherosclerosis subtype: adjusted OR = 1.50, 95% CI: 1.08-2.07, P = 0.016; small-artery occlusion subtype: adjusted OR = 2.04, 95% CI: 1.43-2.91, P < 0.001). The results of this study indicate that the ACE I/D and β-FG T148C combination may result in significantly higher risk of IS in this Chinese population.
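
    The adjusted odds ratios above come from logistic regression; the unadjusted version is a simple 2×2-table computation. A minimal sketch with illustrative counts (not the study's data):

```python
import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = np.sqrt(1/a + 1/b + 1/c + 1/d)
    lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se_log)
    return or_, lo, hi

# Illustrative counts: carriers of a risk genotype combination vs. not.
print(odds_ratio_ci(a=220, b=180, c=492, d=594))
```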

  19. Benchmarking in water project analysis

    Science.gov (United States)

    Griffin, Ronald C.

    2008-11-01

    The with/without principle of cost-benefit analysis is examined for the possible bias that it brings to water resource planning. Theory and examples for this question are established. Because benchmarking against the demonstrably low without-project hurdle can detract from economic welfare and can fail to promote efficient policy, improvement opportunities are investigated. In lieu of the traditional, without-project benchmark, a second-best-based "difference-making benchmark" is proposed. The project authorizations and modified review processes instituted by the U.S. Water Resources Development Act of 2007 may provide for renewed interest in these findings.

  20. Performance benchmarking and incentive regulation. Considerations of directing signals for electricity distribution companies

    Energy Technology Data Exchange (ETDEWEB)

    Honkapuro, S.

    2008-07-01

    After the restructuring process of the power supply industry, which for instance in Finland took place in the mid-1990s, free competition was introduced for the production and sale of electricity. Nevertheless, natural monopolies are found to be the most efficient form of production in the transmission and distribution of electricity, and therefore such companies remained franchised monopolies. To prevent the misuse of the monopoly position and to guarantee the rights of the customers, regulation of these monopoly companies is required. One of the main objectives of the restructuring process has been to increase the cost efficiency of the industry. Simultaneously, demands for the service quality are increasing. Therefore, many regulatory frameworks are being, or have been, reshaped so that companies are provided with stronger incentives for efficiency and quality improvements. Performance benchmarking has in many cases a central role in the practical implementation of such incentive schemes. Economic regulation with performance benchmarking attached to it provides companies with directing signals that tend to affect their investment and maintenance strategies. Since the asset lifetimes in the electricity distribution are typically many decades, investment decisions have far-reaching technical and economic effects. This doctoral thesis addresses the directing signals of incentive regulation and performance benchmarking in the field of electricity distribution. The theory of efficiency measurement and the most common regulation models are presented. The chief contributions of this work are (1) a new kind of analysis of the regulatory framework, so that the actual directing signals of the regulation and benchmarking for the electricity distribution companies are evaluated, (2) developing the methodology and a software tool for analysing the directing signals of the regulation and benchmarking in the electricity distribution sector, and (3) analysing the real

  1. Benchmark Evaluation of the NRAD Reactor LEU Core Startup Measurements

    Energy Technology Data Exchange (ETDEWEB)

    J. D. Bess; T. L. Maddock; M. A. Marshall

    2011-09-01

    The Neutron Radiography (NRAD) reactor is a 250-kW TRIGA-(Training, Research, Isotope Production, General Atomics)-conversion-type reactor at the Idaho National Laboratory; it is primarily used for neutron radiography analysis of irradiated and unirradiated fuels and materials. The NRAD reactor was converted from HEU to LEU fuel with 60 fuel elements and brought critical on March 31, 2010. This configuration of the NRAD reactor has been evaluated as an acceptable benchmark experiment and is available in the 2011 editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook) and the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Significant effort went into precisely characterizing all aspects of the reactor core dimensions and material properties; detailed analyses of reactor parameters minimized experimental uncertainties. The largest contributors to the total benchmark uncertainty were the 234U, 236U, Er, and Hf content in the fuel; the manganese content in the stainless steel cladding; and the unknown level of water saturation in the graphite reflector blocks. A simplified benchmark model of the NRAD reactor was prepared with a keff of 1.0012 ± 0.0029 (1σ). Monte Carlo calculations with MCNP5 and KENO-VI and various neutron cross section libraries were performed and compared with the benchmark eigenvalue for the 60-fuel-element core configuration; all calculated eigenvalues are between 0.3 and 0.8% greater than the benchmark value. Benchmark evaluations of the NRAD reactor are beneficial in understanding biases and uncertainties affecting criticality safety analyses of storage, handling, or transportation applications with LEU-Er-Zr-H fuel.

  2. BENCHMARKING ON-LINE SERVICES INDUSTRIES

    Institute of Scientific and Technical Information of China (English)

    John HAMILTON

    2006-01-01

    The Web Quality Analyser (WQA) is a new benchmarking tool for industry. It has been extensively tested across services industries. Forty-five critical success features are presented as measures that capture the user's perception of services industry websites. This tool differs from previous tools in that it captures the information technology (IT) related driver sectors of website performance, along with the marketing-services related driver sectors. These driver sectors capture relevant structure, function and performance components. An 'on-off' switch measurement approach determines each component. Relevant component measures scale into a relative presence of the applicable feature, with a feature block delivering one of the sector drivers. Although it houses both measurable and a few subjective components, the WQA offers a proven and useful means to compare relevant websites. The WQA defines website strengths and weaknesses, thereby allowing for corrections to the website structure of the specific business. WQA benchmarking against services related business competitors delivers a position on the WQA index, facilitates specific website driver rating comparisons, and demonstrates where key competitive advantage may reside. This paper reports on the marketing-services driver sectors of this new benchmarking WQA tool.

  3. The Gaia FGK Benchmark Stars - High resolution spectral library

    CERN Document Server

    Blanco-Cuaresma, S; Jofré, P; Heiter, U

    2014-01-01

    Context. An increasing number of high resolution stellar spectra are available today thanks to many past and ongoing spectroscopic surveys. Consequently, numerous methods have been developed to perform automatic spectral analysis on massive amounts of data. When reviewing published results, biases arise and they need to be addressed and minimized. Aims. We are providing a homogeneous library with a common set of calibration stars (known as the Gaia FGK Benchmark Stars) that will allow stellar analysis methods to be assessed and spectroscopic surveys to be calibrated. Methods. High resolution and high signal-to-noise spectra were compiled from different instruments. We developed an automatic process to homogenize the observed data and assess the quality of the resulting library. Results. We built a high quality library that will facilitate the assessment of spectral analyses and the calibration of present and future spectroscopic surveys. The automation of the process minimizes the human subjectivity and e...

  4. Water Level Superseded Benchmark Sheets

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) tidal benchmarks which have been superseded by new markers or locations....

  5. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to p...... already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity. © IWA Publishing 2013....... and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work...

  6. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    In order to learn from the best, in 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable...... tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly ‘sustainable transport......’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly policies are not directly comparable across space and context. For these reasons attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which

  7. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece-rate systems and value-based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking...... the conditions upon which the market mechanism is performing within organizations. This paper aims to contribute to research by providing more insight into the conditions for the use of external benchmarking as an element in performance management in organizations. Our study explores a particular type of external...... towards the conditions for the use of the external benchmarks we provide more insights into some of the issues and challenges that are related to using this mechanism for performance management and advance competitiveness in organizations.

  8. Operating Room Efficiency before and after Entrance in a Benchmarking Program for Surgical Process Data.

    Science.gov (United States)

    Pedron, Sara; Winter, Vera; Oppel, Eva-Maria; Bialas, Enno

    2017-08-23

    Operating room (OR) efficiency continues to be a high priority for hospitals. In this context the concept of benchmarking has gained increasing importance as a means to improve OR performance. The aim of this study was to investigate whether and how participation in a benchmarking and reporting program for surgical process data was associated with a change in OR efficiency, measured through raw utilization, turnover times, and first-case tardiness. The main analysis is based on panel data from 202 surgical departments in German hospitals, which were derived from the largest database for surgical process data in Germany. Panel regression modelling was applied. Results revealed no clear and unequivocal trend associated with participation in a benchmarking and reporting program for surgical process data. The largest trend was observed for first-case tardiness. In contrast to expectations, turnover times showed a generally increasing trend during participation. For raw utilization no clear and statistically significant trend could be evidenced. Subgroup analyses revealed differences in effects across different hospital types and department specialties. Participation in a benchmarking and reporting program and thus the availability of reliable, timely and detailed analysis tools to support OR management seemed to be correlated especially with an increase in the timeliness of staff members regarding first-case starts. The increasing trend in turnover time revealed the absence of effective strategies to improve this aspect of OR efficiency in German hospitals and could have meaningful consequences for medium- and long-run capacity planning in the OR.
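
    As a rough illustration of the panel-regression setup described above (a sketch with simulated data and hypothetical variable names, not the study's model), a two-way fixed-effects estimate of the participation effect on first-case tardiness might look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per department-year; `participating`
# flags years inside the benchmarking programme. Not the study's data.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "department": np.repeat(np.arange(50), 6),
    "year": np.tile(np.arange(2010, 2016), 50),
})
df["participating"] = (df["year"] >= 2013).astype(int) * (df["department"] % 2)
df["tardiness_min"] = 12 - 2.0 * df["participating"] + rng.normal(0, 3, len(df))

# Two-way fixed effects via department and year dummies.
fit = smf.ols("tardiness_min ~ participating + C(department) + C(year)",
              data=df).fit()
print(fit.params["participating"])   # recovers roughly the -2.0 effect
```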

  9. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advances. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards...... towards the conditions for the use of the external benchmarks we provide more insights to some of the issues and challenges that are related to using this mechanism for performance management and advance competitiveness in organizations....

  10. Benchmarking East Tennessee's economic capacity

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-04-20

    This presentation comprises viewgraphs delineating major economic factors operating in 15 counties in East Tennessee. The purpose of the information presented is to provide a benchmark analysis of economic conditions for use in guiding economic growth in the region. The emphasis of the presentation is economic infrastructure, which is classified into six categories: human resources, technology, financial resources, physical infrastructure, quality of life, and tax and regulation. Data for analysis of key indicators in each of the categories are presented. Preliminary analyses, in the form of strengths and weaknesses and comparison to reference groups, are given.

  11. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
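
    The merge of machine and program characterizations described above amounts to combining per-operation timings (the machine characterization) with per-operation counts (the program characterization); a minimal sketch with hypothetical operation names and numbers:

```python
# Machine characterization: seconds per abstract operation (hypothetical).
machine = {"fadd": 4e-9, "fmul": 5e-9, "load": 8e-9, "branch": 2e-9}
# Program characterization: dynamic operation counts (hypothetical).
program = {"fadd": 2.0e9, "fmul": 1.5e9, "load": 3.1e9, "branch": 0.8e9}

# Estimated execution time for this machine/program combination.
predicted_seconds = sum(program[op] * machine[op] for op in program)
print(f"predicted run time: {predicted_seconds:.2f} s")
```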

  12. FDNS CFD Code Benchmark for RBCC Ejector Mode Operation

    Science.gov (United States)

    Holt, James B.; Ruf, Joe

    1999-01-01

    Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for both Diffusion and Afterburning (DAB) and Simultaneous Mixing and Combustion (SMC) test conditions. Results from both the 2D and the 3D models are presented.

  13. Benchmarking polish basic metal manufacturing companies

    Directory of Open Access Journals (Sweden)

    P. Pomykalski

    2014-01-01

    Full Text Available Basic metal manufacturing companies are undergoing substantial strategic changes resulting from global changes in demand. During such periods managers should closely monitor and benchmark the financial results of companies operating in their sector. Proper and timely identification of the consequences of changes in these areas may be crucial as managers seek to exploit opportunities and avoid threats. The paper examines changes in financial ratios of basic metal manufacturing companies operating in Poland in the period 2006-2011.

  14. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano;

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we...... compare numerical predictions of the concrete sample final shape for these two benchmark flows obtained by various research teams around the world using various numerical techniques. Our results show that all numerical techniques compared here give very similar results suggesting that numerical...

  15. Construction of a Benchmark for the User Experience Questionnaire (UEQ)

    Directory of Open Access Journals (Sweden)

    Martin Schrepp

    2017-08-01

    Full Text Available Questionnaires are a cheap and highly efficient tool for achieving a quantitative measure of a product's user experience (UX). However, it is not always easy to decide if a questionnaire result can really show whether a product satisfies this quality aspect. So a benchmark is useful. It allows comparing the results of one product to a large set of other products. In this paper we describe a benchmark for the User Experience Questionnaire (UEQ), a widely used evaluation tool for interactive products. We also describe how the benchmark can be applied to the quality assurance process for concrete projects.
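
    In practice, applying such a benchmark is a threshold lookup: a product's mean scale score is placed into one of several categories derived from the benchmark dataset. The sketch below uses placeholder thresholds for illustration, not the published UEQ benchmark values:

```python
# Placeholder threshold table (cutoff, category), sorted descending.
# The published UEQ benchmark defines five categories per scale
# from a large dataset of evaluated products; values here are illustrative.
THRESHOLDS = {
    "Attractiveness": [(1.75, "excellent"), (1.52, "good"),
                       (1.17, "above average"), (0.70, "below average")],
}

def benchmark_category(scale: str, mean_score: float) -> str:
    """Return the benchmark category for a scale's mean score."""
    for cutoff, label in THRESHOLDS[scale]:
        if mean_score >= cutoff:
            return label
    return "bad"

print(benchmark_category("Attractiveness", 1.3))  # -> "above average"
```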

  16. Bioelectrochemical Systems Workshop:Standardized Analyses, Design Benchmarks, and Reporting

    Science.gov (United States)

    2012-01-01


  17. [Short- and long-term results in operable pancreatic ductal adenocarcinomas from a cooperation between two departments gastroenterology-visceral surgery at non-university hospitals benchmarked to results of expert-centers].

    Science.gov (United States)

    Dippold, Wolfgang; Sivanathan, Visvakanth; Statt, Katharina; Roitman, Marc; Link, Karl Heinrich

    2017-02-01

    The only curative approach in pancreatic ductal adenocarcinoma (PDAC) is resection, which is possible in only 15-30 % of patients; local tumor spread or distant metastases are contraindications to resection in the majority of patients. Surgical-oncological quality and short- and long-term results vary tremendously, so that "expertise/quality" is associated with hospital or surgeon volume and/or center formation. Treatment results also depend, to a great extent, on the quality of medical diagnostics. With our retrospective study, we aim to compare the quality of results of cooperative pancreatic cancer treatment, based on extensive preoperative diagnostics for staging and risk estimation in a specialized GI-medical department and visceral surgical-oncological expertise in pancreatic cancer surgery at a general hospital, with the quality of results of expert centers. Fifty-three patients with PDAC were diagnosed and underwent resection between 1/2002 and 12/2009. The 30-day hospital mortality was 3.8 %, and the median survival time after discharge from the hospital was 23.1 months. The 5-year survival rate of R0-resected patients, all of whom had received adjuvant chemotherapy, was high at 31 %. The survival data and the extraordinarily high resection rate of 98.1 % in this patient group, whose primary tumor stage was pT3 in 81 %, reflect the excellent cooperation of high standards in medical diagnostics, visceral pancreatic surgery, and adjuvant chemotherapy. The results compare well with those of high-volume centers. The responsible heads of the two departments were trained at university expert centers. Expertise in the treatment of pancreatic cancer patients may be successfully transferred from an expert center to a general hospital, if the team has high expertise. © Georg Thieme Verlag KG Stuttgart · New York.

  18. Summary of Results from Analyses of Deposits of the Deep-Ocean Impact of the Eltanin Asteroid

    Science.gov (United States)

    Kyte, Frank T.; Kuhn, Gerhard; Gersonde, Rainer

    2005-01-01

    Deposits of the late Pliocene (2.5 Ma) Eltanin impact are unique in the known geological record. As the only known example of a km-sized asteroid impacting a deep-ocean (5 km) basin, it is the most meteorite-rich locality known. The impact was discovered as an Ir anomaly in sediments from three cores collected in 1965 by the USNS Eltanin. These cores contained mm-sized shock-melted asteroid materials and unmelted meteorite fragments. Mineral chemistry of meteorite fragments, and siderophile concentrations in melt rocks, indicate that the parent asteroid was a low-metal (4%) mesosiderite. A geological exploration of the impact site in 1995 by Polarstern expedition ANT-XIV4 near the Freeden Seamounts (57.3 S, 90.5 W) successfully collected three cores with impact deposits. Analyses showed that sediments as old as Eocene were eroded by the impact disturbance and redeposited in three distinct units. The lowermost is a chaotic assemblage of sediment fragments up to 50 cm in size. Above this is a laminated sand-rich unit deposited as a turbulent flow, overlain by a more fine-grained deposit of silts and clays that settled from a cloud of sediment suspended in the water column. Meteoritic ejecta particles were concentrated near the base of the uppermost unit, where coarse ejecta caught up with the disturbed sediment. Here we present results from a new suite of cores collected on Polarstern expedition ANT-XVIIU5a. In 2001, the Polarstern returned to the impact area and explored a region of 80,000 sq-km, collecting at least 16 sediment cores with meteoritic ejecta. The known strewn field extends over a region 660 by 200 km. The meteoritic ejecta is most concentrated in cores on the Freeden Seamounts, and in the basins to the north, where the amount of meteoritic material deposited on the ocean floor was as much as 3 g/sq-cm. These concentrations drop off to the north and the east to levels as low as approximately 0.1 g/sq-cm. We were unable to sample the

  19. Experiences in Benchmarking of Autonomic Systems

    Science.gov (United States)

    Etchevers, Xavier; Coupaye, Thierry; Vachet, Guy

    Autonomic computing promises improvements of systems quality of service in terms of availability, reliability, performance, security, etc. However, little research or experimental evidence has so far demonstrated this assertion, or provided proof of the return on investment stemming from the efforts that introducing autonomic features requires. Existing work in the area of benchmarking of autonomic systems can be characterized by its qualitative and fragmented approaches. A crucial need remains to provide generic (i.e. independent from business, technology, architecture and implementation choices) autonomic computing benchmarking tools for evaluating and/or comparing autonomic systems from a technical and, ultimately, an economical point of view. This article introduces a methodology and a process for defining and evaluating factors, criteria and metrics in order to qualitatively and quantitatively assess autonomic features in computing systems. It also discusses associated experimental results on three different autonomic systems.

  20. Benchmark of MEGA Code on Fast Ion Pressure Profile in the Large Helical Device

    Science.gov (United States)

    Seki, Ryosuke; Todo, Yasushi; Suzuki, Yasuhiro; Osakabe, Masaki

    2016-10-01

    As the first step for the analyses of energetic particle driven instabilities in the Large Helical Device (LHD), including the collisions of fast ions and the neutral beam injection, the MEGA code is benchmarked on the classical fast ion pressure profile using the temperature and density profiles measured in the LHD experiments. In this benchmark, the MHD equilibrium is calculated with the HINT code, and the beam deposition profile is calculated with the HFREYA code. Since the equilibrium is not axisymmetric in LHD, the accuracy of orbit tracing is important for fast ion analyses. In the slowing down process of the MEGA code, the guiding center equation is numerically solved using the 4th order Runge-Kutta method and linear interpolation. MEGA is benchmarked against the results of the MORH code, in which the 6th order Runge-Kutta method and 4th order spline interpolation are used. In LHD, the position of the loss boundary of fast ions is important because many "re-entering fast ions" re-enter the plasma after they have once passed out of it. The effects of the position of the loss boundary on the fast ion pressure profile will be discussed, and a preliminary result on Alfven eigenmodes will be presented.
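
    For readers unfamiliar with the numerics mentioned above, a generic 4th order Runge-Kutta step of the kind used for orbit tracing is sketched below; the drift function is a stand-in toy, not the actual MEGA or MORH guiding-center equations:

        import numpy as np

        def rk4_step(f, t, y, h):
            """One 4th-order Runge-Kutta step for dy/dt = f(t, y)."""
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h / 2 * k1)
            k3 = f(t + h / 2, y + h / 2 * k2)
            k4 = f(t + h, y + h * k3)
            return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        # Stand-in for the guiding-center equations of motion (not the LHD fields):
        # a particle drifting on a circle in the poloidal plane.
        def toy_drift(t, y):
            R, Z = y
            omega = 1.0e5            # assumed drift frequency, rad/s
            return np.array([-omega * Z, omega * R])

        y = np.array([0.1, 0.0])     # initial (R, Z) offset from the axis, m
        h = 1.0e-8                   # time step, s
        for _ in range(1000):
            y = rk4_step(toy_drift, 0.0, y, h)
        print("final position:", y)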

  1. A proposed benchmark for simulation in radiographic testing

    Energy Technology Data Exchange (ETDEWEB)

    Jaenisch, G.-R.; Deresch, A.; Bellon, C. [Federal Institute for Materials Research and Testing Unter den Eichen 87, 12205 Berlin (Germany); Schumm, A.; Guerin, P. [EDF R and D, 1 avenue du Général de Gaulle, 92141 Clamart (France)

    2014-02-18

    The purpose of this benchmark study is to compare simulation results predicted by various models of radiographic testing, in particular those that are capable of separately predicting primary and scatter radiation for specimens of arbitrary geometry.

  2. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-12-01

    Full Text Available This paper reviews the role of human resource management (HRM which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  3. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  4. TREAT Transient Analysis Benchmarking for the HEU Core

    Energy Technology Data Exchange (ETDEWEB)

    Kontogeorgakos, D. C. [Argonne National Lab. (ANL), Argonne, IL (United States); Connaway, H. M. [Argonne National Lab. (ANL), Argonne, IL (United States); Wright, A. E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-05-01

    This work was performed to support the feasibility study on the potential conversion of the Transient Reactor Test Facility (TREAT) at Idaho National Laboratory from the use of high enriched uranium (HEU) fuel to the use of low enriched uranium (LEU) fuel. The analyses were performed by the GTRI Reactor Conversion staff at Argonne National Laboratory (ANL). The objective of this study was to benchmark the transient calculations against temperature-limited transients performed in the final operating HEU TREAT core configuration. The MCNP code was used to evaluate steady-state neutronics behavior, and the point kinetics code TREKIN was used to determine core power and energy during transients. The first part of the benchmarking process was to calculate with MCNP all the neutronic parameters required by TREKIN to simulate the transients: the transient rod-bank worth, the prompt neutron generation lifetime, the temperature reactivity feedback as a function of total core energy, and the core-average and peak temperatures as functions of total core energy. The results of these calculations were compared against measurements or against reported values as documented in the available TREAT reports. The heating of the fuel was simulated as an adiabatic process. The reported values were extracted from ANL reports, intra-laboratory memos and experiment logsheets, and in some cases it was not clear whether the values were based on measurements, on calculations, or a combination of both. Therefore, it was decided to use the term "reported" values when referring to such data. The methods and results from the HEU core transient analyses will be used for the potential LEU core configurations to predict the converted (LEU) core's performance.

  5. Analysing the performance of personal computers based on Intel microprocessors for sequence aligning bioinformatics applications.

    Science.gov (United States)

    Nair, Pradeep S; John, Eugene B

    2007-01-01

    Aligning specific sequences against a very large number of other sequences is a central aspect of bioinformatics. With the widespread availability of personal computers in biology laboratories, sequence alignment is now often performed locally. This makes it necessary to analyse the performance of personal computers for sequence aligning bioinformatics benchmarks. In this paper, we analyse the performance of a personal computer for the popular BLAST and FASTA sequence alignment suites. Results indicate that these benchmarks have a large number of recurring operations and use memory operations extensively. It seems that the performance can be improved with a bigger L1-cache.

  6. Characterization of addressability by simultaneous randomized benchmarking

    CERN Document Server

    Gambetta, Jay M; Merkel, S T; Johnson, B R; Smolin, John A; Chow, Jerry M; Ryan, Colm A; Rigetti, Chad; Poletto, S; Ohki, Thomas A; Ketchen, Mark B; Steffen, M

    2012-01-01

    The control and handling of errors arising from cross-talk and unwanted interactions in multi-qubit systems is an important issue in quantum information processing architectures. We introduce a benchmarking protocol that provides information about the amount of addressability present in the system and implement it on coupled superconducting qubits. The protocol consists of randomized benchmarking each qubit individually and then simultaneously, and the amount of addressability is related to the difference of the average gate fidelities of those experiments. We present the results on two similar samples with different amounts of cross-talk and unwanted interactions, which agree with predictions based on simple models for the amount of residual coupling.
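
    A toy calculation of the addressability figure described above, assuming (as the abstract states) that addressability is related to the difference between individually and simultaneously measured average gate fidelities; the fidelity values are invented:

        # Hypothetical average gate fidelities from randomized benchmarking.
        f_individual = {"q0": 0.9980, "q1": 0.9975}      # each qubit benchmarked alone
        f_simultaneous = {"q0": 0.9968, "q1": 0.9960}    # both benchmarked at once

        # Fidelity loss attributable to cross-talk and unwanted interactions
        # when the other qubit is driven at the same time.
        for q in f_individual:
            delta = f_individual[q] - f_simultaneous[q]
            print(f"{q}: individual {f_individual[q]:.4f}, "
                  f"simultaneous {f_simultaneous[q]:.4f}, delta {delta:.4f}")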

  7. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system with 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.
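
    For context, STREAM measures sustainable memory bandwidth with simple vector kernels; a minimal NumPy rendition of its triad kernel (not the official C benchmark used on SANAM) is:

        import time
        import numpy as np

        n = 20_000_000                      # array length; large enough to defeat caches
        a = np.zeros(n)
        b = np.random.rand(n)
        c = np.random.rand(n)
        scalar = 3.0

        t0 = time.perf_counter()
        a[:] = b + scalar * c               # STREAM "triad" kernel: a = b + s*c
        t1 = time.perf_counter()

        # Triad moves three arrays of 8-byte doubles: two reads and one write.
        bytes_moved = 3 * n * 8
        print(f"triad bandwidth: {bytes_moved / (t1 - t0) / 1e9:.1f} GB/s")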

  8. Active vibration control of nonlinear benchmark buildings

    Institute of Scientific and Technical Information of China (English)

    ZHOU Xing-de; CHEN Dao-zheng

    2007-01-01

    Existing nonlinear model reduction methods do not fit nonlinear benchmark buildings, as their vibration equations belong to a non-affine system. Meanwhile, controllers designed directly by nonlinear control strategies have a high order and are difficult to apply in practice. Therefore, a new active vibration control approach suited to nonlinear buildings is proposed. The idea of the proposed approach is based on model identification and structural model linearization, exerting the control force on the built model according to the force action principle. The approach is more practical because the built model can be reduced by the balanced reduction method based on the empirical Grammian matrix. A three-story benchmark structure is presented, and the simulation results illustrate that the proposed method is viable for civil engineering structures.

  9. Randomized benchmarking of multiqubit gates.

    Science.gov (United States)

    Gaebler, J P; Meier, A M; Tan, T R; Bowler, R; Lin, Y; Hanneke, D; Jost, J D; Home, J P; Knill, E; Leibfried, D; Wineland, D J

    2012-06-29

    We describe an extension of single-qubit gate randomized benchmarking that measures the error of multiqubit gates in a quantum information processor. This platform-independent protocol evaluates the performance of Clifford unitaries, which form a basis of fault-tolerant quantum computing. We implemented the benchmarking protocol with trapped ions and found an error per random two-qubit Clifford unitary of 0.162±0.008, thus setting the first benchmark for such unitaries. By implementing a second set of sequences with an extra two-qubit phase gate inserted after each step, we extracted an error per phase gate of 0.069±0.017. We conducted these experiments with transported, sympathetically cooled ions in a multizone Paul trap, a system that can in principle be scaled to larger numbers of ions.
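
    The standard analysis behind a figure like the error per Clifford quoted above fits the sequence fidelity to a decay A·p^m + B and converts the depolarizing parameter p into an error rate; a sketch with synthetic data (all constants invented):

        import numpy as np
        from scipy.optimize import curve_fit

        def rb_decay(m, A, B, p):
            """Randomized-benchmarking model: sequence fidelity after m Cliffords."""
            return A * p**m + B

        # Synthetic sequence fidelities (stand-ins for measured survival probabilities).
        lengths = np.array([1, 3, 5, 10, 20, 40, 80])
        rng = np.random.default_rng(0)
        fidelity = 0.7 * 0.78**lengths + 0.25 + rng.normal(0, 0.01, lengths.size)

        (A, B, p), _ = curve_fit(rb_decay, lengths, fidelity, p0=(0.7, 0.25, 0.9))

        d = 4                                # Hilbert-space dimension for two qubits
        error_per_clifford = (d - 1) / d * (1 - p)
        print(f"p = {p:.3f}, error per two-qubit Clifford = {error_per_clifford:.3f}")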

  10. Operating Room Efficiency before and after Entrance in a Benchmarking Program for Surgical Process Data

    DEFF Research Database (Denmark)

    Pedron, Sara; Winter, Vera; Oppel, Eva-Maria

    2017-01-01

    Participation in a benchmarking and reporting program for surgical process data was associated with a change in OR efficiency, measured through raw utilization, turnover times, and first-case tardiness. The main analysis is based on panel data from 202 surgical departments in German hospitals, which were derived from the largest database for surgical process data in Germany. Panel regression modelling was applied. Results revealed no clear and univocal trend of participation in a benchmarking and reporting program for surgical process data. The largest trend was observed for first-case tardiness. In contrast to expectations, turnover times showed a generally increasing trend during participation. For raw utilization no clear and statistically significant trend could be evidenced. Subgroup analyses revealed differences in effects across different hospital types and department specialties.

  11. Combining tumor genome simulation with crowdsourcing to benchmark somatic single-nucleotide-variant detection.

    Science.gov (United States)

    Ewing, Adam D; Houlahan, Kathleen E; Hu, Yin; Ellrott, Kyle; Caloian, Cristian; Yamaguchi, Takafumi N; Bare, J Christopher; P'ng, Christine; Waggott, Daryl; Sabelnykova, Veronica Y; Kellen, Michael R; Norman, Thea C; Haussler, David; Friend, Stephen H; Stolovitzky, Gustavo; Margolin, Adam A; Stuart, Joshua M; Boutros, Paul C

    2015-07-01

    The detection of somatic mutations from cancer genome sequences is key to understanding the genetic basis of disease progression, patient survival and response to therapy. Benchmarking is needed for tool assessment and improvement but is complicated by a lack of gold standards, by extensive resource requirements and by difficulties in sharing personal genomic information. To resolve these issues, we launched the ICGC-TCGA DREAM Somatic Mutation Calling Challenge, a crowdsourced benchmark of somatic mutation detection algorithms. Here we report the BAMSurgeon tool for simulating cancer genomes and the results of 248 analyses of three in silico tumors created with it. Different algorithms exhibit characteristic error profiles, and, intriguingly, false positives show a trinucleotide profile very similar to one found in human tumors. Although the three simulated tumors differ in sequence contamination (deviation from normal cell sequence) and in subclonality, an ensemble of pipelines outperforms the best individual pipeline in all cases. BAMSurgeon is available at https://github.com/adamewing/bamsurgeon/.
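
    Stripped of VCF parsing and subclonality, scoring a somatic SNV caller against a simulated truth set reduces to set comparisons; a simplified scoring sketch with hypothetical variants:

        def score_calls(called, truth):
            """Compare SNV calls to a truth set; each variant is (chrom, pos, alt)."""
            called, truth = set(called), set(truth)
            tp = len(called & truth)               # true positives
            fp = len(called - truth)               # false positives
            fn = len(truth - called)               # false negatives
            precision = tp / (tp + fp) if called else 0.0
            recall = tp / (tp + fn) if truth else 0.0
            f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
            return precision, recall, f1

        # Hypothetical example.
        truth = [("chr1", 1234, "T"), ("chr2", 99, "A"), ("chr3", 42, "G")]
        called = [("chr1", 1234, "T"), ("chr2", 99, "A"), ("chr7", 7, "C")]
        print(score_calls(called, truth))          # (0.667, 0.667, 0.667)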

  12. Perspective: Selected benchmarks from commercial CFD codes

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, C.J. [Southwest Research Inst., San Antonio, TX (United States). Computational Mechanics Section

    1995-06-01

    This paper summarizes the results of a series of five benchmark simulations which were completed using commercial Computational Fluid Dynamics (CFD) codes. These simulations were performed by the vendors themselves, and then reported by them in ASME's CFD Triathlon Forum and CFD Biathlon Forum. The first group of benchmarks consisted of three laminar flow problems: the steady, two-dimensional flow over a backward-facing step, the low Reynolds number flow around a circular cylinder, and the unsteady three-dimensional flow in a shear-driven cubical cavity. The second group of benchmarks consisted of two turbulent flow problems: the two-dimensional flow around a square cylinder with periodic separated flow phenomena, and the steady, three-dimensional flow in a 180-degree square bend. All simulation results were evaluated against existing experimental data and thereby satisfied item 10 of the Journal's policy statement for numerical accuracy. The objective of this exercise was to provide the engineering and scientific community with a common reference point for the evaluation of commercial CFD codes.

  13. Perceptual hashing algorithms benchmark suite

    Institute of Scientific and Technical Information of China (English)

    Zhang Hui; Schmucker Martin; Niu Xiamu

    2007-01-01

    Numerous perceptual hashing algorithms have been developed for identification and verification of multimedia objects in recent years. Many application schemes have been adopted for various commercial objects. Developers and users are looking for a benchmark tool to compare and evaluate their current algorithms or technologies. In this paper, a novel benchmark platform is presented. PHABS provides an open framework and lets its users define their own test strategy, perform tests, collect and analyze test data. With PHABS, various performance parameters of algorithms can be tested, and different algorithms or algorithms with different parameters can be evaluated and compared easily.

  14. Closed-loop neuromorphic benchmarks

    CSIR Research Space (South Africa)

    Stewart

    2015-11-01

    Full Text Available Closed-loop Neuromorphic Benchmarks. Terrence C. Stewart, Travis DeWolf, Ashley Kleinhans and Chris Eliasmith (University of Waterloo, Canada; Council for Scientific and Industrial Research, South Africa). Submitted to Frontiers in Neuroscience.

  15. Benchmarks of support in internal medicine residency training programs.

    Science.gov (United States)

    Wolfsthal, Susan D; Beasley, Brent W; Kopelman, Richard; Stickley, William; Gabryel, Timothy; Kahn, Marc J

    2002-01-01

    To identify benchmarks of financial and staff support in internal medicine residency training programs and their correlation with indicators of quality. A survey instrument to determine characteristics of support of residency training programs was mailed to each member program of the Association of Program Directors of Internal Medicine. Results were correlated with the three-year running average of the pass rates on the American Board of Internal Medicine certifying examination using bivariate and multivariate analyses. Of 394 surveys, 287 (73%) were completed: 74% of respondents were program directors and 20% were both chair and program director. The mean duration as program director was 7.5 years (median = 5), but it was significantly lower for women than for men (4.9 versus 8.1; p = .001). Respondents spent 62% of their time in educational and administrative duties, 30% in clinical activities, 5% in research, and 2% in other activities. Most chief residents were PGY4s, with 72% receiving compensation additional to base salary. On average, there was one associate program director for every 33 residents, one chief resident for every 27 residents, and one staff person for every 21 residents. Most programs provided trainees with incremental educational stipends, meals while on call, travel and meeting expenses, and parking. Support from pharmaceutical companies was used for meals, books, and meeting expenses. Almost all programs provided meals for applicants, with 15% providing travel allowances and 37% providing lodging. The programs' board pass rates significantly correlated with the numbers of faculty full-time equivalents (FTEs), the numbers of resident FTEs per office staff FTEs, and the numbers of categorical and preliminary applications received and ranked by the programs in 1998 and 1999. Regression analyses demonstrated three independent predictors of the programs' board pass rates: number of faculty (a positive predictor), percentage of clinical work

  16. Supply chain integration scales validation and benchmark values

    Directory of Open Access Journals (Sweden)

    Juan A. Marin-Garcia

    2013-06-01

    Full Text Available Purpose: The clarification of the constructs of supply chain integration (clients, suppliers, external and internal), the creation of a measurement instrument based on a list of items taken from earlier papers, the validation of these scales, and a preliminary benchmark to interpret the scales by percentiles based on a set of control variables (size of the plant, country, sector and degree of vertical integration). Design/methodology/approach: Our empirical analysis is based on the HPM project database (2005-2007 timeframe). The international sample is made up of 266 plants across ten countries: Austria, Canada, Finland, Germany, Italy, Japan, Korea, Spain, Sweden and the USA. We analyzed the descriptive statistics, internal consistency testing to purify the items (inter-item correlations, Cronbach's alpha, squared multiple correlation, corrected item-total correlation), exploratory factor analysis, and finally a confirmatory factor analysis to check the convergent and discriminant validity of the scales. The analyses were done with the SPSS and EQS programs using the maximum likelihood parameter estimation method. Findings: The four proposed scales show excellent psychometric properties. Research limitations/implications: With a clearer and more concise designation of the supply chain integration measurement scales, more reliable and accurate data could be taken to analyse the relations between these constructs and other variables of interest to the academic field. Practical implications: Providing scales that are valid as a diagnostic tool for best practices, as well as providing a benchmark with which to compare the score for each individual plant against a collection of industrial companies from the machinery, electronics and transportation sectors. Originality/value: Supply chain integration may be a major factor in explaining the performance of companies. The results are nevertheless inconclusive, the vast range
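
    One of the internal-consistency statistics listed above, Cronbach's alpha, can be computed directly from a respondents-by-items matrix; a sketch with made-up Likert data:

        import numpy as np

        def cronbach_alpha(items):
            """items: 2-D array, rows = respondents, columns = scale items."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        # Hypothetical 5-point Likert responses: 6 respondents x 4 items.
        responses = [[4, 5, 4, 4],
                     [3, 3, 2, 3],
                     [5, 5, 5, 4],
                     [2, 2, 3, 2],
                     [4, 4, 4, 5],
                     [3, 4, 3, 3]]
        print(f"alpha = {cronbach_alpha(responses):.2f}")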

  17. The contextual benchmark method: benchmarking e-government services

    NARCIS (Netherlands)

    Jansen, Jurjen; Vries, de Sjoerd; Schaik, van Paul

    2010-01-01

    This paper offers a new method for benchmarking e-Government services. Government organizations no longer doubt the need to deliver their services on line. Instead, the question that is more relevant is how well the electronic services offered by a particular organization perform in comparison with

  18. Benchmark Simulation Model No 2 – finalisation of plant layout and default control strategy

    DEFF Research Database (Denmark)

    Nopens, I.; Benedetti, L.; Jeppsson, U.

    2010-01-01

    The COST/IWA Benchmark Simulation Model No 1 (BSM1) has been available for almost a decade. Its primary purpose has been to create a platform for control strategy benchmarking of activated sludge processes. The fact that the research work related to the benchmark simulation models has resulted in more than 300 publications worldwide demonstrates the interest in and need of such tools within the research community. Recent efforts within the IWA Task Group on "Benchmarking of control strategies for WWTPs" have focused on an extension of the benchmark simulation model. This extension aims...

  19. Developing scheduling benchmark tests for the Space Network

    Science.gov (United States)

    Moe, Karen L.; Happell, Nadine; Brady, Sean

    1993-01-01

    A set of benchmark tests were developed to analyze and measure Space Network scheduling characteristics and to assess the potential benefits of a proposed flexible scheduling concept. This paper discusses the role of the benchmark tests in evaluating alternative flexible scheduling approaches and defines a set of performance measurements. The paper describes the rationale for the benchmark tests as well as the benchmark components, which include models of the Tracking and Data Relay Satellite (TDRS), mission spacecraft, their orbital data, and flexible requests for communication services. Parameters which vary in the tests address the degree of request flexibility, the request resource load, and the number of events to schedule. Test results are evaluated based on time to process and schedule quality. Preliminary results and lessons learned are addressed.

  20. [Selection of a statistical model for evaluation of the reliability of the results of toxicological analyses. I. Discussion on selected statistical models for evaluation of the systems of control of the results of toxicological analyses].

    Science.gov (United States)

    Antczak, K; Wilczyńska, U

    1980-01-01

    Two statistical models for the evaluation of toxicological study results are presented. Model I, after R. Hoschek and H. J. Schittke (2), involves: 1. elimination of values deviating from most results, by Grubbs' method (2); 2. analysis of the differences between the results obtained by the participants of the action and the tentatively assumed value; 3. evaluation of significant differences between the reference value and the average value for a given series of measurements; 4. thorough evaluation of laboratories based on the evaluation coefficient fx. In Model II, after Keppler et al., the authors assumed the median as the criterion for evaluating the results. Individual evaluation of laboratories was performed on the basis of: 1. the adjusted "t" test; 2. the linear regression test.
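
    A sketch of the first step of Model I, the rejection of deviating values by Grubbs' method, using the usual t-based critical value; the laboratory results below are invented:

        import numpy as np
        from scipy import stats

        def grubbs_outlier(x, alpha=0.05):
            """Return the index of a single Grubbs outlier, or None."""
            x = np.asarray(x, dtype=float)
            n = x.size
            g = np.abs(x - x.mean()) / x.std(ddof=1)
            i = int(np.argmax(g))
            t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
            g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
            return i if g[i] > g_crit else None

        # Hypothetical replicate results from participating laboratories (mg/L).
        results = [4.9, 5.1, 5.0, 5.2, 4.8, 7.9]
        print(grubbs_outlier(results))   # index 5: the value 7.9 is rejected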

  1. How to achieve and prove performance improvement - 15 years of experience in German wastewater benchmarking.

    Science.gov (United States)

    Bertzbach, F; Franz, T; Möller, K

    2012-01-01

    This paper shows the results of performance improvement achieved in benchmarking projects in the wastewater industry in Germany over the last 15 years. A large number of changes in operational practice, and the annual savings achieved, can be shown, induced in particular by benchmarking at process level. Investigation of this question produces some general findings for the inclusion of performance improvement in a benchmarking project and for the communication of its results. Thus, we elaborate on the concept of benchmarking at both utility and process level, which is still a necessary distinction for the integration of performance improvement into our benchmarking approach. To achieve performance improvement via benchmarking it should be made quite clear that this outcome depends, on one hand, on a well conducted benchmarking programme and, on the other, on the individual situation within each participating utility.

  2. Benchmarking of collimation tracking using RHIC beam loss data.

    Energy Technology Data Exchange (ETDEWEB)

    Robert-Demolaize,G.; Drees, A.

    2008-06-23

    State-of-the-art tracking tools were recently developed at CERN to study the cleaning efficiency of the Large Hadron Collider (LHC) collimation system. In order to estimate the prediction accuracy of these tools, benchmarking studies can be performed using actual beam loss measurements from a machine that already uses a similar multistage collimation system. This paper reviews the main results from benchmarking studies performed with specific data collected from operations at the Relativistic Heavy Ion Collider (RHIC).

  3. Benchmarking Internet of Things devices

    CSIR Research Space (South Africa)

    Kruger, CP

    2014-07-01

    Full Text Available International Conference on Industrial Informatics (INDIN), 27-30 July 2014. Benchmarking Internet of Things devices. C.P. Kruger and G.P. Hancke, Advanced Sensor Networks Research Group, Council for Scientific and Industrial Research, South Africa.

  4. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of the services provided by the benchmarked library websites. The exploratory study includes a comparison of these websites against a list of criteria and presents a list of services that are most commonly deployed by the selected websites. In addition, the investigators proposed a list of services that could be provided via the KAUST library website.

  5. Benchmark Lisp And Ada Programs

    Science.gov (United States)

    Davis, Gloria; Galant, David; Lim, Raymond; Stutz, John; Gibson, J.; Raghavan, B.; Cheesema, P.; Taylor, W.

    1992-01-01

    Suite of nonparallel benchmark programs, ELAPSE, designed for three tests: comparing efficiency of computer processing via Lisp vs. Ada; comparing efficiencies of several computers processing via Lisp; or comparing several computers processing via Ada. Tests efficiency with which computer executes routines in each language. Available for computer equipped with validated Ada compiler and/or Common Lisp system.

  6. Benchmark Solutions for Computational Aeroacoustics (CAA) Code Validation

    Science.gov (United States)

    Scott, James R.

    2004-01-01

    NASA has conducted a series of Computational Aeroacoustics (CAA) Workshops on Benchmark Problems to develop a set of realistic CAA problems that can be used for code validation. In the Third (1999) and Fourth (2003) Workshops, the single airfoil gust response problem, with real geometry effects, was included as one of the benchmark problems. Respondents were asked to calculate the airfoil RMS pressure and far-field acoustic intensity for different airfoil geometries and a wide range of gust frequencies. This paper presents the validated solutions that have been obtained for the benchmark problem and, in addition, compares them with classical flat plate results. It is seen that airfoil geometry has a strong effect on the airfoil unsteady pressure, and a significant effect on the far-field acoustic intensity. Those parts of the benchmark problem that have not yet been adequately solved are identified and presented as a challenge to the CAA research community.

  7. New results from the analyses of the solid phase of the NASA Ames Titan Haze Simulation (THS) experiment

    Science.gov (United States)

    Sciamma-O'Brien, Ella; Upton, Kathleen T.; Beauchamp, Jesse L.; Salama, Farid

    2015-11-01

    In Titan’s atmosphere, a complex chemistry occurs at low temperature between N2 and CH4 that leads to the production of heavy organic molecules and subsequently solid aerosols. The Titan Haze Simulation (THS) experiment was developed at the NASA Ames COSmIC facility to study Titan’s atmospheric chemistry at low temperature. In the THS, the chemistry is simulated by plasma in the stream of a supersonic expansion. With this unique design, the gas is cooled to Titan-like temperature (~150K) before inducing the chemistry by plasma, and remains at low temperature in the plasma (~200K). Different N2-CH4-based gas mixtures can be injected in the plasma, with or without the addition of heavier molecules, in order to monitor the evolution of the chemical growth.Following a recent in situ mass spectrometry study of the gas phase that demonstrated that the THS is a unique tool to probe the first and intermediate steps of Titan’s atmospheric chemistry at low temperature (Sciamma-O’Brien et al., Icarus, 243, 325 (2014)), we have performed a complementary study of the solid phase. The findings are consistent with the chemical growth evolution observed in the gas phase. Grains and aggregates form in the gas phase and can be jet deposited onto various substrates for ex situ analyses. Scanning Electron Microscopy images show that more complex mixtures produce larger aggregates, and that different growth mechanisms seem to occur depending on the gas mixture. They also allow the determination of the size distribution of the THS solid grains. A Direct Analysis in Real Time mass spectrometry analysis coupled with Collision Induced Dissociation has detected the presence of aminoacetonitrile, a precursor of glycine, in the THS aerosols. X-ray Absorption Near Edge Structure (XANES) measurements also show the presence of imine and nitrile functional groups, showing evidence of nitrogen chemistry. Infrared and µIR spectra of samples deposited on KBr and Si substrates show the

  8. Who watches the watchmen? An appraisal of benchmarks for multiple sequence alignment.

    Science.gov (United States)

    Iantorno, Stefano; Gori, Kevin; Goldman, Nick; Gil, Manuel; Dessimoz, Christophe

    2014-01-01

    Multiple sequence alignment (MSA) is a fundamental and ubiquitous technique in bioinformatics used to infer related residues among biological sequences. Alignment accuracy is thus crucial to a vast range of analyses, often in ways difficult to assess in those analyses. To compare the performance of different aligners and help detect systematic errors in alignments, a number of benchmarking strategies have been pursued. Here we present an overview of the main strategies (based on simulation, consistency, protein structure, and phylogeny) and discuss their different advantages and associated risks. We outline a set of desirable characteristics for effective benchmarking, and evaluate each strategy in light of them. We conclude that there is currently no universally applicable means of benchmarking MSA, and that developers and users of alignment tools should base their choice of benchmark on the context of application, with a keen awareness of the assumptions underlying each benchmarking strategy.
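
    A common reference-based score in this setting is the sum-of-pairs score, the fraction of the reference alignment's residue pairs recovered by a test alignment; a minimal sketch, assuming alignments given as equal-length gapped strings:

        def aligned_pairs(alignment):
            """Set of (seq_i, residue_i, seq_j, residue_j) pairs aligned in a column."""
            counters = [0] * len(alignment)
            pairs = set()
            for col in zip(*alignment):
                filled = [(i, counters[i]) for i, ch in enumerate(col) if ch != "-"]
                for i, ch in enumerate(col):
                    if ch != "-":
                        counters[i] += 1
                for a in range(len(filled)):
                    for b in range(a + 1, len(filled)):
                        pairs.add((*filled[a], *filled[b]))
            return pairs

        def sum_of_pairs_score(test, reference):
            """Fraction of the reference's aligned residue pairs recovered by the test."""
            ref = aligned_pairs(reference)
            return len(aligned_pairs(test) & ref) / len(ref)

        print(sum_of_pairs_score(["AC-GT", "ACAGT"], ["ACG-T", "ACAGT"]))   # 0.75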

  9. Performance Benchmarks for Screening Breast MR Imaging in Community Practice.

    Science.gov (United States)

    Lee, Janie M; Ichikawa, Laura; Valencia, Elizabeth; Miglioretti, Diana L; Wernli, Karen; Buist, Diana S M; Kerlikowske, Karla; Henderson, Louise M; Sprague, Brian L; Onega, Tracy; Rauscher, Garth H; Lehman, Constance D

    2017-10-01

    Purpose To compare screening magnetic resonance (MR) imaging performance in the Breast Cancer Surveillance Consortium (BCSC) with Breast Imaging Reporting and Data System (BI-RADS) benchmarks. Materials and Methods This study was approved by the institutional review board and compliant with HIPAA and included BCSC screening MR examinations collected between 2005 and 2013 from 5343 women (8387 MR examinations) linked to regional Surveillance, Epidemiology, and End Results program registries, state tumor registries, and pathologic information databases that identified breast cancer cases and tumor characteristics. Clinical, demographic, and imaging characteristics were assessed. Performance measures were calculated according to the BI-RADS fifth edition and included cancer detection rate (CDR), positive predictive value of biopsy recommendation (PPV2), sensitivity, and specificity. Results The median patient age was 52 years; 52% of MR examinations were performed in women with a first-degree family history of breast cancer, 46% in women with a personal history of breast cancer, and 15% in women with both risk factors. Screening MR imaging depicted 146 cancers, and 35 interval cancers were identified (181 total: 54 in situ, 125 invasive, and two status unknown). The CDR was 17 per 1000 screening examinations (95% confidence interval [CI]: 15, 20 per 1000 screening examinations; BI-RADS benchmark, 20-30 per 1000 screening examinations). PPV2 was 19% (95% CI: 16%, 22%; benchmark, 15%). Sensitivity was 81% (95% CI: 75%, 86%; benchmark, >80%), and specificity was 83% (95% CI: 82%, 84%; benchmark, 85%-90%). The median tumor size of invasive cancers was 10 mm; 88% were node negative. Conclusion The interpretative performance of screening MR imaging in the BCSC meets most BI-RADS benchmarks and approaches benchmark levels for remaining measures. Clinical practice performance data can inform ongoing benchmark development and help identify areas for quality improvement. © RSNA
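
    The performance measures quoted above are simple ratios over screening outcomes; a sketch of their derivation from counts (the numbers are illustrative, not the BCSC data):

        # Illustrative counts (not the BCSC data).
        n_exams = 8000
        screen_detected = 140      # cancers found at screening (true positives)
        interval_cancers = 30      # cancers surfacing after a negative exam
        positive_exams = 740       # exams with a biopsy recommendation
        true_negatives = n_exams - positive_exams - interval_cancers

        cdr = screen_detected / n_exams * 1000
        ppv2 = screen_detected / positive_exams
        sensitivity = screen_detected / (screen_detected + interval_cancers)
        specificity = true_negatives / (true_negatives + (positive_exams - screen_detected))

        print(f"CDR {cdr:.1f}/1000, PPV2 {ppv2:.0%}, "
              f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")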

  10. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Science.gov (United States)

    2010-10-01

    42 CFR Public Health (2010-10-01), General Provisions, Benchmark Benefit and Benchmark-Equivalent Coverage, § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  11. Does bisphenol A induce superfeminization in Marisa cornuarietis? Part II: toxicity test results and requirements for statistical power analyses.

    Science.gov (United States)

    Forbes, Valery E; Aufderheide, John; Warbritton, Ryan; van der Hoeven, Nelly; Caspers, Norbert

    2007-03-01

    This study presents results of the effects of bisphenol A (BPA) on adult egg production, egg hatchability, egg development rates and juvenile growth rates in the freshwater gastropod, Marisa cornuarietis. We observed no adult mortality, substantial inter-snail variability in reproductive output, and no effects of BPA on reproduction during 12 weeks of exposure to 0, 0.1, 1.0, 16, 160 or 640 microg/L BPA. We observed no effects of BPA on egg hatchability or timing of egg hatching. Juveniles showed good growth in the control and all treatments, and there were no significant effects of BPA on this endpoint. Our results do not support previous claims of enhanced reproduction in Marisa cornuarietis in response to exposure to BPA. Statistical power analysis indicated high levels of inter-snail variability in the measured endpoints and highlighted the need for sufficient replication when testing treatment effects on reproduction in M. cornuarietis with adequate power.
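
    The power-analysis point can be made concrete: given high inter-snail variability, the replication needed to detect a treatment effect follows from a standard two-sample power computation; a sketch with assumed effect size and variability:

        from statsmodels.stats.power import TTestIndPower

        # Assumed values for illustration: a 20% reduction in weekly egg output
        # against high inter-snail variability (coefficient of variation ~50%).
        mean_control = 100.0       # eggs per female per week
        effect = 0.20 * mean_control
        sd = 50.0
        d = effect / sd            # standardized effect size (Cohen's d = 0.4)

        n = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.8)
        print(f"replicates per treatment needed: {n:.0f}")   # ~100 snails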

  12. Incorporating Variational Local Analysis and Prediction System (vLAPS) Analyses with Nudging Data Assimilation: Methodology and Initial Results

    Science.gov (United States)

    2017-09-01

    ...thus the linearization and simplification applied in the inner loop can hinder the capability of 4DVAR to fully reach its potential. EnKF relies on... complete, we discovered that a software error resulted in profiler and ACARS observations not being included at 13, 19, and 21 UTC. Quality... labeled with that letter in the figure. Most of the components are standard WRF Preprocessing System (WPS) software, or altered versions of the

  13. A Multilingual Approach to Analysing Standardized Test Results: Immigrant Primary School Children and the Role of Languages Spoken in a Bi-/Multilingual Community

    Science.gov (United States)

    De Angelis, Gessica

    2014-01-01

    The present study adopts a multilingual approach to analysing the standardized test results of primary school immigrant children living in the bi-/multilingual context of South Tyrol, Italy. The standardized test results are from the Invalsi test administered across Italy in 2009/2010. In South Tyrol, several languages are spoken on a daily basis…

  14. SPICE benchmark for global tomographic methods

    Science.gov (United States)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul; Lebedev, Sergei; Beucler, Eric

    2008-11-01

    The existing global tomographic methods result in different models due to different parametrization, scale resolution and theoretical approach. To test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, it is necessary to perform a global-scale benchmark to understand the resolving properties of each specific imaging algorithm. In the framework of the Seismic wave Propagation and Imaging in Complex media: a European network (SPICE) project, it was decided to perform a benchmark experiment of global inversion algorithms. First, a preliminary benchmark with a simple isotropic model is carried out to check the feasibility in terms of acquisition geometry and numerical accuracy. Then, to fully validate tomographic schemes with a challenging synthetic data set, we constructed one complex anisotropic global model, which is characterized by 21 elastic constants and includes 3-D heterogeneities in velocity, anisotropy (radial and azimuthal anisotropy), attenuation, density, as well as surface topography and bathymetry. The intermediate-period (>32 s), high fidelity anisotropic modelling was performed by using state-of-the-art anisotropic anelastic modelling code, that is, coupled spectral element method (CSEM), on modern massively parallel computing resources. The benchmark data set consists of 29 events and three-component seismograms are recorded by 256 stations. Because of the limitation of the available computing power, synthetic seismograms have a minimum period of 32 s and a length of 10 500 s. The inversion of the benchmark data set demonstrates several well-known problems of classical surface wave tomography, such as the importance of crustal correction to recover the shallow structures, the loss of resolution with depth, the smearing effect, both horizontal and vertical, the inaccuracy of amplitude of isotropic S-wave velocity variation, the difficulty of retrieving the magnitude of azimuthal

  15. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used across National Health Service (NHS) services through various benchmarking programs. Clinical photography services have no such program in place and have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlighted valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  16. Towards Systematic Benchmarking of Climate Model Performance

    Science.gov (United States)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. Making the results from routine

  17. Analysing the Hydraulic Actuator-based Knee Unit Kinematics and Correlating the Numerical Results and Walking Human Knee Joint Behavior

    Directory of Open Access Journals (Sweden)

    K. A. Trukhanov

    2014-01-01

    Full Text Available State-of-the-art engineering enables people who have lost a lower limb to continue their previous life despite the loss. International companies working in this area seek to minimize the mobility problems caused by amputation, and research to create an optimal design of the artificial knee joint is under way. The task of this work was to define analytical relationships for the kinematic parameters of human walking on a flat surface, such as the knee joint angle and knee moment, to define the reduced load on the knee actuator (A), and to compare the obtained results with experimental data. As the actuator in the created design, the article proposes a controlled shock absorber based on a hydraulic cylinder. The knee unit is a two-link kinematic mechanism: one link rotates, while the other performs a combined rotational-translational motion to drive the rotation of the first. In studying the dynamics of the hydraulic actuator, the coordinate x (or ρ) of the piston position is chosen as the generalized coordinate, while the study of link movements uses the angle β. Experimental data for estimating the knee joint angle, speed, acceleration, torque and power were obtained for a human with a body weight of 57.6 kg walking on a flat surface, and are taken from the published works of foreign authors. A trigonometric approximation was used to fit the experimental data. The resulting dependence of the reduced load on the actuator stroke is necessary to perform the synthesis of the actuator. The criterion for linear mechanisms mentioned in D.N. Popov's work is advisable as a possible optimization criterion for the actuator. The results obtained are as follows: 1. The kinematics of the linkage mechanism is described by relationships between its geometrical parameters, namely the cylinder piston stroke x (or ρ) and the link angle β. 2. The obtained polynomials of kinematic relationships allow a synthesis of

  18. The design of a scalable, fixed-time computer benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson, J.; Rover, D.; Elbert, S.; Carter, M.

    1990-10-01

    By using the principle of fixed-time benchmarking, it is possible to compare a very wide range of computers, from a small personal computer to the most powerful parallel supercomputer, on a single scale. Fixed-time benchmarks promise far greater longevity than those based on a particular problem size, and are more appropriate for "grand challenge" capability comparison. We present the design of a benchmark, SLALOM (trademark), that scales automatically to the computing power available, and corrects several deficiencies in various existing benchmarks: it is highly scalable, it solves a real problem, it includes input and output times, and it can be run on parallel machines of all kinds, using any convenient language. The benchmark provides a reasonable estimate of the size of problem solvable on scientific computers. Results are presented that span six orders of magnitude for contemporary computers of various architectures. The benchmark can also be used to demonstrate a new source of superlinear speedup in parallel computers. 15 refs., 14 figs., 3 tabs.
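
    The fixed-time principle reduces to growing the problem size until a time budget is exhausted and reporting the size reached; a minimal sketch with a stand-in kernel (not SLALOM's actual workload):

        import time
        import numpy as np

        def workload(n):
            """Stand-in kernel: solve an n x n linear system (not SLALOM's real task)."""
            a = np.random.rand(n, n) + n * np.eye(n)
            b = np.random.rand(n)
            np.linalg.solve(a, b)

        def fixed_time_benchmark(budget_s=60.0):
            """Report the largest problem size whose run fit within the time budget."""
            n = 64
            while True:
                t0 = time.perf_counter()
                workload(n)
                if time.perf_counter() - t0 > budget_s:
                    return n // 2          # last size that fit in the budget
                n *= 2

        print("problem size reached:", fixed_time_benchmark(budget_s=5.0))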

  19. Barriers to the practice of benchmarking in South African restaurants

    Directory of Open Access Journals (Sweden)

    Carina Kleynhans

    2017-07-01

    Full Text Available The main purpose of this study is to identify the barriers to benchmarking use in independent full-service restaurants in South Africa. Restaurant industry entities worldwide operate in a highly competitive environment, and restaurateurs should have a visible advantage over competitors. A competitive advantage can be achieved only if the quality standards in terms of food and beverage products, service quality, relevant technology and price are comparable to the industry leaders. This study deployed a descriptive, quantitative research design on the basis of a relatively large sample of restaurateurs. The data were collected through the SurveyMonkey website using a standardised questionnaire. The questionnaire was mailed to 2699 restaurateurs, and 109 respondents returned fully completed answer sheets. Descriptive and inferential statistics were used to analyze the data. The main findings were as follows: 43% of respondents had never done benchmarking; only 5.5% of respondents considered themselves highly knowledgeable about benchmarking; and respondents thought that the most significant barriers to benchmarking were difficulties with obtaining exemplar (benchmarking partner) best-practice information and adapting their own anomalous practices to derive a benefit from best practices. The results of this study should be used to shape knowledge about benchmarking practices in order to develop suitable solutions for the problems in South African restaurants.

  20. The new revision of NPP Krsko decommissioning, radioactive waste and spent fuel management program: analyses and results

    Energy Technology Data Exchange (ETDEWEB)

    Zeleznik, Nadja; Kralj, Metka [ARAO, Parmova 53, 1000 Ljubljana (Slovenia); Lokner, Vladimir; Levanat, Ivica; Rapic, Andrea [APO, Savska 41, Zagreb (Croatia); Mele, Irena [IAEA, Vienna (Austria)

    2010-07-01

    The preparation of the new revision of the Decommissioning and Spent Fuel (SF) and Low and Intermediate Level Waste (LILW) Disposal Program for the NPP Krsko (Program) started in September 2008, after the acceptance of the Terms of Reference for the work by the Intergovernmental Committee responsible for implementation of the Agreement between the governments of Slovenia and Croatia on the status and other legal issues related to investment, exploitation and decommissioning of the Nuclear Power Plant Krsko. The responsible organizations, APO and ARAO, together with NEK, prepared all new technical and financial data and relevant inputs for the new revision, in which several scenarios based on the accepted boundary conditions were investigated. The strategy of immediate dismantling was analyzed for planned and extended NPP lifetimes, together with the linked radioactive waste and spent fuel management, to calculate the yearly annuity to be paid by the owners into the decommissioning funds in Slovenia and Croatia. The new Program incorporated, among others, new data on the LILW repository, including the costs for siting, construction and operation of silos at the Vrbina location in Krsko municipality; the site-specific Preliminary Decommissioning Plan for NPP Krsko, which included, besides dismantling and decontamination approaches, site-specific activated and contaminated radioactive waste; and results from the reference scenario for spent fuel disposal, although at a very early stage. Important inputs for the calculations also included new amounts of compensation to the local communities for different nuclear facilities, taken from the supplemented Slovenian regulation, and updated fiscal parameters (inflation, interest, discount factors) used in the financial model, based on current developments in the economic environment. From the obtained data the nominal and discounted costs for the whole nuclear program related to NPP Krsko, which is jointly owned by Slovenia and Croatia, have

  1. Benchmarking Danish Industries

    OpenAIRE

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    This report is based on the survey "Industrial Companies in Denmark – Today and Tomorrow", section IV: Supply Chain Management - Practices and Performance, question number 4.9 on performance assessment. To our knowledge, this survey is unique, as we have not been able to find results from any compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. ...

  2. Gaia FGK benchmark stars: opening the black box of stellar element abundance determination

    Science.gov (United States)

    Jofré, P.; Heiter, U.; Worley, C. C.; Blanco-Cuaresma, S.; Soubiran, C.; Masseron, T.; Hawkins, K.; Adibekyan, V.; Buder, S.; Casamiquela, L.; Gilmore, G.; Hourihane, A.; Tabernero, H.

    2017-05-01

    Gaia and its complementary spectroscopic surveys combined will yield the most comprehensive database of kinematic and chemical information of stars in the Milky Way. The Gaia FGK benchmark stars play a central role in this matter as they are calibration pillars for the atmospheric parameters and chemical abundances for various surveys. The spectroscopic analyses of the benchmark stars are done by combining different methods, and the results will be affected by the systematic uncertainties inherent in each method. In this paper, we explore some of these systematic uncertainties. We determined line abundances of Ca, Cr, Mn and Co for four benchmark stars using six different methods. We changed the default input parameters of the different codes in a systematic way and found, in some cases, significant differences between the results. Since there is no consensus on the correct values for many of these default parameters, we urge the community to raise discussions towards standard input parameters that could alleviate the difference in abundances obtained by different methods. In this work, we provide quantitative estimates of uncertainties in elemental abundances due to the effect of differing technical assumptions in spectrum modelling.

  3. The LDBC Social Network Benchmark: Interactive Workload

    NARCIS (Netherlands)

    Erling, O.; Averbuch, A.; Larriba-Pey, J.; Chafi, H.; Gubichev, A.; Prat, A.; Pham, M.D.; Boncz, P.A.

    2015-01-01

    The Linked Data Benchmark Council (LDBC) is now two years underway and has gathered strong industrial participation for its mission to establish benchmarks, and benchmarking practices for evaluating graph data management systems. The LDBC introduced a new choke-point driven methodology for developin

  4. How Benchmarking and Higher Education Came Together

    Science.gov (United States)

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  5. Proposal and analysis of the benchmark problem suite for reactor physics study of LWR next generation fuels

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-10-01

    In order to investigate the calculation accuracy of the nuclear characteristics of LWR next generation fuels, the Research Committee on Reactor Physics organized by JAERI has established the Working Party on Reactor Physics for LWR Next Generation Fuels. The next generation fuels are those aiming for further extended burn-up, such as 70 GWd/t, beyond the current design. The Working Party has proposed six benchmark problems, which consist of pin-cell, PWR fuel assembly and BWR fuel assembly geometries loaded with uranium and MOX fuels, respectively. The specifications of the benchmark problems neglect some of the current limitations, such as 5 wt% {sup 235}U, to achieve the above-mentioned target. Eleven organizations in the Working Party have carried out the analyses of the benchmark problems. As a result, the status of accuracy with the current data and methods, and some problems to be solved in the future, were clarified. In this report, details of the benchmark problems, results from each organization, and their comparisons are presented. (author)

  6. Arsenic absorption by members of the Brassicaceae family, analysed by neutron activation, k{sub 0}-method - preliminary results

    Energy Technology Data Exchange (ETDEWEB)

    Uemura, George; Matos, Ludmila Vieira da Silva; Silva, Maria Aparecida da; Ferreira, Alexandre Santos Martorano; Menezes, Maria Angela de Barros Correia [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN-CNEN/MG), Belo Horizonte, MG (Brazil)], e-mail: george@cdtn.br, e-mail: menezes@cdtn.br

    2009-07-01

    Natural arsenic contamination is a cause for concern in many countries of the world, including Argentina, Bangladesh, Chile, China, India, Mexico, Thailand and the United States of America, and also in Brazil, especially in the Iron Quadrangle area, where mining activities have been contributing to aggravate the natural contamination. Brassicaceae is a plant family with edible species (arugula, cabbage, cauliflower, cress, kale, mustard, radish), ornamental ones (alyssum, field pennycress, ornamental cabbages and kales), and some species known as metal and metalloid accumulators (Indian mustard, field pennycress) of, for example, chromium, nickel and arsenic. The present work aimed at studying other taxa of the Brassicaceae family to verify their capability of absorbing arsenic under controlled conditions, for possible utilisation in remediation activities. The analytical method chosen was neutron activation analysis, k{sub 0} method, a routine technique at CDTN and also very appropriate for arsenic studies. To avoid possible interference from solid substrates, like sand or vermiculite, attempts were made to keep the specimens in 1/4 Murashige and Skoog basal salt solution (M and S). Growth was stunted and plants withered and perished, showing that modifications in M and S had to be made. The addition of nickel and silicon allowed normal growth of the plant specimens for periods longer than usually achieved (more than two months), yielding samples large enough for further studies with other techniques, like ICP-MS, and other targets, like speciation studies. The results of arsenic absorption are presented here, and the need for nickel and silicon in the composition of M and S is discussed. (author)

  7. Pandemic influenza preparedness and health systems challenges in Asia: results from rapid analyses in 6 Asian countries

    Directory of Open Access Journals (Sweden)

    Putthasri Weerasak

    2010-06-01

    Full Text Available Abstract Background Since 2003, Asia-Pacific, particularly Southeast Asia, has received substantial attention because of the anticipation that it could be the epicentre of the next pandemic. There has been active investment, but an earlier review of pandemic preparedness plans in the region reveals that the translation of these strategic plans into operational plans is still lacking in some countries, particularly those with low resources. The objective of this study is to understand the pandemic preparedness programmes, the health systems context, and challenges and constraints specific to six Asian countries, namely Cambodia, Indonesia, Lao PDR, Taiwan, Thailand, and Viet Nam, in the prepandemic phase before the start of H1N1/2009. Methods The study relied on the Systemic Rapid Assessment (SYSRA) toolkit, which evaluates priority disease programmes by taking into account the programmes, the general health system, and the wider socio-cultural and political context. The components under review were: external context; stewardship and organisational arrangements; financing, resource generation and allocation; healthcare provision; and information systems. Qualitative and quantitative data were collected in the second half of 2008 based on a review of published data and interviews with key informants, exploring past and current patterns of health programme and pandemic response. Results The study shows that health systems in the six countries varied in regard to the epidemiological context, health care financing, and health service provision patterns. For pandemic preparation, all six countries have developed national governance on pandemic preparedness as well as national pandemic influenza preparedness plans and Avian and Human Influenza (AHI) response plans. However, the governance arrangements and the nature of the plans differed. In the five developing countries, the focus was on surveillance and rapid containment of poultry-related transmission

  8. Benchmarking HIV health care

    DEFF Research Database (Denmark)

    Podlekareva, Daria; Reekie, Joanne; Mocroft, Amanda

    2012-01-01

    ABSTRACT: BACKGROUND: State-of-the-art care involving the utilisation of multiple health care interventions is the basis for an optimal long-term clinical prognosis for HIV-patients. We evaluated health care for HIV-patients based on four key indicators. METHODS: Four indicators of health care were...... assessed: Compliance with current guidelines on initiation of 1) combination antiretroviral therapy (cART), 2) chemoprophylaxis, 3) frequency of laboratory monitoring, and 4) virological response to cART (proportion of patients with HIV-RNA 90% of time on cART). RESULTS: 7097 Euro...... to North, patients from other regions had significantly lower odds of virological response; the difference was most pronounced for East and Argentina (adjusted OR 0.16[95%CI 0.11-0.23, p HIV health care utilization...

  9. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  10. Seismic analysis of the Mirror Fusion Test Facility: soil structure interaction analyses of the Axicell vacuum vessel. Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Maslenikov, O.R.; Mraz, M.J.; Johnson, J.J.

    1986-03-01

    This report documents the seismic analyses performed by SMA for the MFTF-B Axicell vacuum vessel. In the course of this study we performed response spectrum analyses, CLASSI fixed-base analyses, and SSI analyses that included interaction effects between the vessel and vault. The response spectrum analysis served to benchmark certain modeling differences between the LLNL and SMA versions of the vessel model. The fixed-base analysis benchmarked the differences between analysis techniques. The SSI analyses provided our best estimate of vessel response to the postulated seismic excitation for the MFTF-B facility, and included consideration of uncertainties in soil properties by calculating response for a range of soil shear moduli. Our results are presented in this report as tables of comparisons of specific member forces from our analyses and the analyses performed by LLNL. Also presented are tables of maximum accelerations and relative displacements and plots of response spectra at various selected locations.

  11. A Benchmark for Management Effectiveness

    OpenAIRE

    Zimmermann, Bill; Chanaron, Jean-Jacques; Klieb, Leslie

    2007-01-01

    This study presents a tool to gauge managerial effectiveness in the form of a questionnaire that is easy to administer and score. The instrument covers eight distinct areas of organisational climate and culture of management inside a company or department. Benchmark scores were determined by administering sample surveys to a wide cross-section of individuals from numerous firms in Southeast Louisiana, USA. Scores remained relatively constant over a seven-year timeframe...

  12. Restaurant Energy Use Benchmarking Guideline

    Energy Technology Data Exchange (ETDEWEB)

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
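    As a rough illustration of the kind of metric the report describes, the sketch below normalizes each store's annual electricity use by floor area and flags outliers against the portfolio median; the store data, the choice of floor area as the driver, and the 1.2x threshold are invented assumptions, not the report's actual procedure.

```python
# Illustrative sketch: benchmark stores by energy use intensity (EUI).
# Store figures, the floor-area driver and the 1.2x threshold are invented.
stores = {
    "store_a": {"kwh_per_year": 410_000, "floor_area_m2": 250},
    "store_b": {"kwh_per_year": 520_000, "floor_area_m2": 260},
    "store_c": {"kwh_per_year": 395_000, "floor_area_m2": 245},
}

# Energy use intensity: kWh per square metre per year.
eui = {name: s["kwh_per_year"] / s["floor_area_m2"] for name, s in stores.items()}

# Use the portfolio median as the internal benchmark.
median_eui = sorted(eui.values())[len(eui) // 2]

for name, value in sorted(eui.items(), key=lambda kv: -kv[1]):
    flag = "REVIEW" if value > 1.2 * median_eui else "ok"
    print(f"{name}: {value:.0f} kWh/m2/yr ({flag})")
```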

  13. Thermal Performance Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Xuhui; Moreno, Gilbert; Bennion, Kevin

    2016-06-07

    The goal for this project is to thoroughly characterize the thermal performance of state-of-the-art (SOA) in-production automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The thermal performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY16, the 2012 Nissan LEAF power electronics and 2014 Honda Accord Hybrid power electronics thermal management system were characterized. Comparison of the two power electronics thermal management systems was also conducted to provide insight into the various cooling strategies to understand the current SOA in thermal management for automotive power electronics and electric motors.

  14. Benchmarks for multicomponent diffusion and electrochemical migration

    DEFF Research Database (Denmark)

    Rasouli, Pejman; Steefel, Carl I.; Mayer, K. Ulrich

    2015-01-01

    In multicomponent electrolyte solutions, the tendency of ions to diffuse at different rates results in a charge imbalance that is counteracted by the electrostatic coupling between charged species leading to a process called “electrochemical migration” or “electromigration.” Although not commonly...... not been published to date. This contribution provides a set of three benchmark problems that demonstrate the effect of electric coupling during multicomponent diffusion and electrochemical migration and at the same time facilitate the intercomparison of solutions from existing reactive transport codes...

  15. ABM11 parton distributions and benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Alekhin, Sergey [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institut Fiziki Vysokikh Ehnergij, Protvino (Russian Federation); Bluemlein, Johannes; Moch, Sven-Olaf [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2012-08-15

    We present a determination of the nucleon parton distribution functions (PDFs) and of the strong coupling constant {alpha}{sub s} at next-to-next-to-leading order (NNLO) in QCD based on the world data for deep-inelastic scattering and the fixed-target data for the Drell-Yan process. The analysis is performed in the fixed-flavor number scheme for n{sub f}=3,4,5 and uses the MS-bar scheme for {alpha}{sub s} and the heavy quark masses. The fit results are compared with other PDFs and used to compute the benchmark cross sections at hadron colliders to NNLO accuracy.

  16. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement of the original benchmark book, which was first published in February, 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December, 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February, 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.

  17. An improved infrared carbon monoxide analyser for routine measurements aboard commercial airbus aircraft: Technical validation and first scientific results of the MOZAIC III programme

    Directory of Open Access Journals (Sweden)

    P. Nedelec

    2003-07-01

    Full Text Available The European-funded MOZAIC programme (Measurements of ozone and water vapour by Airbus in-service aircraft has been operational since 1994 aboard 5 commercial Airbus A340 aircraft. It has gathered ozone and water vapour data between the ground and an altitude of 12 km from more than 20 000 long-range flights. A new infrared carbon monoxide analyser has been developed for installation on the MOZAIC-equipped aircraft. Improvements in the basic characteristics of a commercial CO analyser have achieved performance suitable for routine aircraft measurements: ±5 ppbv, ±5% precision for a 30 s response time. The first year of operation on board 4 aircraft, with more than 900 flights, has proven the reliability and the usefulness of this CO analyser. The first scientific results are presented here, including UTLS exchange events and pollution within the boundary layer.

  18. Benchmarks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The National Flood Hazard Layer (NFHL) data incorporates all Digital Flood Insurance Rate Map(DFIRM) databases published by FEMA, and any Letters Of Map Revision...

  19. Benchmarking strategies for measuring the quality of healthcare: problems and prospects.

    Science.gov (United States)

    Lovaglio, Pietro Giorgio

    2012-01-01

    Over the last few years, increasing attention has been directed toward the problems inherent in measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after introducing readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. In particular, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed.
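    As a minimal sketch of the case-mix control discussed above, the example below computes an indirectly standardized outcome ratio (observed over expected events, with expectations taken from stratum-specific reference rates). This is a generic textbook construction with invented numbers, not the paper's specific methodology.

```python
# Indirect standardization sketch: observed/expected (O/E) outcome ratio for
# one hospital. Stratum reference rates and patient counts are invented.
reference_rates = {"low": 0.02, "medium": 0.08, "high": 0.20}

hospital = {
    # risk stratum: (number of patients, observed adverse events)
    "low": (500, 12),
    "medium": (200, 18),
    "high": (50, 9),
}

observed = sum(events for _, events in hospital.values())
expected = sum(n * reference_rates[s] for s, (n, _) in hospital.items())

oe_ratio = observed / expected   # >1: worse than reference, given the case-mix
print(f"O/E ratio: {oe_ratio:.2f}")
```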

  20. How Methodologic Differences Affect Results of Economic Analyses: A Systematic Review of Interferon Gamma Release Assays for the Diagnosis of LTBI

    OpenAIRE

    2013-01-01

    INTRODUCTION: Cost effectiveness analyses (CEA) can provide useful information on how to invest limited funds; however, they are less useful if different analyses of the same intervention provide unclear or contradictory results. The objective of our study was to conduct a systematic review of methodologic aspects of CEA that evaluate Interferon Gamma Release Assays (IGRA) for the detection of Latent Tuberculosis Infection (LTBI), in order to understand how differences affect study results. ME...

  1. Transparency benchmarking on audio watermarks and steganography

    Science.gov (United States)

    Kraetzer, Christian; Dittmann, Jana; Lang, Andreas

    2006-02-01

    The evaluation of transparency plays an important role in the context of watermarking and steganography algorithms. This paper introduces a general definition of the term transparency in the context of steganography, digital watermarking and attack-based evaluation of digital watermarking algorithms. For this purpose the term transparency is first considered individually for each of the three application fields (steganography, digital watermarking and watermarking algorithm evaluation). From the three results a general definition for the overall context is derived in a second step. The relevance and applicability of the definition given is evaluated in practice using existing audio watermarking and steganography algorithms (which work in time, frequency and wavelet domain) as well as an attack-based evaluation suite for audio watermarking benchmarking - StirMark for Audio (SMBA). For this purpose selected attacks from the SMBA suite are modified by adding transparency-enhancing measures using a psychoacoustic model. The transparency and robustness of the evaluated audio watermarking algorithms under the original and modified attacks are compared. The results of this paper show that transparency benchmarking will lead to new information regarding the algorithms under observation and their usage. This information can result in concrete recommendations for modification, like the ones resulting from the tests performed here.

  2. Development of a Benchmark Example for Delamination Fatigue Growth Prediction

    Science.gov (United States)

    Krueger, Ronald

    2010-01-01

    The development of a benchmark example for cyclic delamination growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of a Double Cantilever Beam (DCB) specimen, which is independent of the analysis software used and allows the assessment of the delamination growth prediction capabilities in commercial finite element codes. First, the benchmark result was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to grow under cyclic loading in a finite element model of a commercial code. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the analysis. In general, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.
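    The abstract does not reproduce the growth law itself, but cyclic delamination growth is commonly characterized by a Paris-type relation, da/dN = C (ΔG)^m. The sketch below integrates such a relation increment by increment, mirroring the benchmark's growth increments; the constants and the toy energy release rate model are placeholders, not the benchmark's calibrated inputs.

```python
# Paris-law style sketch for cyclic delamination growth: da/dN = C * (dG)^m.
# C, m and the toy energy release rate model are illustrative placeholders.
C, m = 1.0e-3, 2.5              # growth-law constants (a in mm, dG in J/m^2)

def delta_G(a_mm):
    """Toy cyclic energy release rate range, increasing with length a."""
    return 0.05 * a_mm          # J/m^2, placeholder model

a = 30.0                        # initial delamination length, mm
cycles = 0.0
da = 0.5                        # growth increment per reporting step, mm

while a < 60.0:
    rate = C * delta_G(a) ** m  # growth rate, mm/cycle
    cycles += da / rate         # cycles spent on this increment
    a += da
    print(f"a = {a:5.1f} mm after ~{cycles:,.0f} cycles")
```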

  3. A Uranium Bioremediation Reactive Transport Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Yabusaki, Steven B.; Sengor, Sevinc; Fang, Yilin

    2015-06-01

    A reactive transport benchmark problem set has been developed based on in situ uranium bio-immobilization experiments that have been performed at a former uranium mill tailings site in Rifle, Colorado, USA. Acetate-amended groundwater stimulates indigenous microorganisms to catalyze the reduction of U(VI) to a sparingly soluble U(IV) mineral. The interplay between the flow, acetate loading periods and rates, microbially-mediated and geochemical reactions leads to dynamic behavior in metal- and sulfate-reducing bacteria, pH, alkalinity, and reactive mineral surfaces. The benchmark is based on an 8.5 m long one-dimensional model domain with constant saturated flow and uniform porosity. The 159-day simulation introduces acetate and bromide through the upgradient boundary in 14-day and 85-day pulses separated by a 10 day interruption. Acetate loading is tripled during the second pulse, which is followed by a 50 day recovery period. Terminal electron accepting processes for goethite, phyllosilicate Fe(III), U(VI), and sulfate are modeled using Monod-type rate laws. Major ion geochemistry modeled includes mineral reactions, as well as aqueous and surface complexation reactions for UO2++, Fe++, and H+. In addition to the dynamics imparted by the transport of the acetate pulses, U(VI) behavior involves the interplay between bioreduction, which is dependent on acetate availability, and speciation-controlled surface complexation, which is dependent on pH, alkalinity and available surface complexation sites. The general difficulty of this benchmark is the large number of reactions (74), multiple rate law formulations, a multisite uranium surface complexation model, and the strong interdependency and sensitivity of the reaction processes. Results are presented for three simulators: HYDROGEOCHEM, PHT3D, and PHREEQC.
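    For readers unfamiliar with the rate-law formulation mentioned, a dual-Monod expression makes a microbially mediated rate depend on both the electron donor (acetate) and the terminal electron acceptor. The sketch below is a generic implementation with invented parameter values, not the benchmark's calibrated rate laws.

```python
# Dual-Monod rate-law sketch for acetate-driven U(VI) reduction.
# All parameter values are illustrative, not the benchmark's.
def dual_monod_rate(c_donor, c_acceptor, v_max=1.0e-9,
                    K_donor=1.0e-4, K_acceptor=5.0e-6):
    """Rate [mol/L/s] limited by both donor (acetate) and acceptor (U(VI))."""
    return (v_max
            * c_donor / (K_donor + c_donor)
            * c_acceptor / (K_acceptor + c_acceptor))

# The rate rises toward v_max as acetate loading increases (cf. the tripled
# acetate loading in the benchmark's second pulse).
for acetate in (1e-5, 1e-4, 3e-4):
    r = dual_monod_rate(c_donor=acetate, c_acceptor=1e-6)
    print(f"acetate = {acetate:.0e} M -> rate = {r:.2e} mol/L/s")
```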

  4. Benchmarking Analysis of Institutional University Autonomy in Denmark, Lithuania, Romania, Scotland, and Sweden

    DEFF Research Database (Denmark)

    This book presents a benchmark, comparative analysis of institutional university autonomy in Denmark, Lithuania, Romania, Scotland and Sweden. These countries are partners in a EU TEMPUS funded project 'Enhancing University Autonomy in Moldova' (EUniAM). This benchmark analysis was conducted...... by the EUniAM Lead Task Force team that collected and analysed secondary and primary data in each of these countries and produced four benchmark reports that are part of this book. For each dimension and interface of institutional university autonomy, the members of the Lead Task Force team identified...... respective evaluation criteria and searched for similarities and differences in approaches to higher education sectors and respective autonomy regimes in these countries. The consolidated report that precedes the benchmark reports summarises the process and key findings from the four benchmark reports...

  5. Benchmarking of thermal hydraulic loop models for Lead-Alloy Cooled Advanced Nuclear Energy System (LACANES), phase-I: Isothermal steady state forced convection

    Science.gov (United States)

    Cho, Jae Hyun; Batta, A.; Casamassima, V.; Cheng, X.; Choi, Yong Joon; Hwang, Il Soon; Lim, Jun; Meloni, P.; Nitti, F. S.; Dedul, V.; Kuznetsov, V.; Komlev, O.; Jaeger, W.; Sedov, A.; Kim, Ji Hak; Puspitarini, D.

    2011-08-01

    As a highly promising coolant for new generation nuclear reactors, liquid Lead-Bismuth Eutectic (LBE) has been extensively investigated worldwide. With high expectations for this advanced coolant, a multi-national systematic study on LBE was proposed in 2007, covering benchmarking of thermal hydraulic prediction models for the Lead-Alloy Cooled Advanced Nuclear Energy System (LACANES). This international collaboration has been organized by OECD/NEA, and nine organizations - ENEA, ERSE, GIDROPRESS, IAEA, IPPE, KIT/IKET, KIT/INR, NUTRECK, and RRC KI - contribute their efforts to LACANES benchmarking. To produce experimental data for LACANES benchmarking, thermal-hydraulic tests were conducted using a 12-m tall LBE integral test facility, named HELIOS (Heavy Eutectic liquid metal loop for integral test of Operability and Safety of PEACER), which was constructed in 2005 at the Seoul National University in the Republic of Korea. LACANES benchmark campaigns consist of a forced convection case (phase-I) and a natural circulation case (phase-II). In the forced convection case, the predictions of pressure losses based on handbook correlations and those obtained by Computational Fluid Dynamics code simulation were compared with the measured data for various components of the HELIOS test facility. Based on comparative analyses of the predictions and the measured data, recommendations for the prediction methods of pressure loss in LACANES were obtained. In this paper, results for the forced convection case (phase-I) of LACANES benchmarking are described.
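    The handbook-correlation approach used in phase-I amounts to summing friction and form losses over the loop components. As a hedged illustration of one such term, the sketch below evaluates the standard Darcy-Weisbach friction loss for a single pipe section; the geometry, the Blasius friction factor choice and the LBE property values are rough assumptions, not HELIOS data.

```python
# Darcy-Weisbach friction loss for one pipe section, the kind of term a
# handbook-correlation estimate sums over every loop component. Geometry
# and LBE property values are rough illustrative numbers, not HELIOS data.
rho = 10300.0        # LBE density, kg/m^3 (approximate)
mu = 1.8e-3          # LBE dynamic viscosity, Pa.s (approximate)
D, L = 0.05, 2.0     # pipe inner diameter and length, m
u = 1.0              # mean velocity, m/s

Re = rho * u * D / mu
# Smooth-pipe Blasius friction factor for turbulent flow (illustrative choice).
f = 0.316 * Re ** -0.25 if Re > 4000 else 64.0 / Re

dp = f * (L / D) * 0.5 * rho * u ** 2    # friction pressure loss, Pa
print(f"Re = {Re:.3e}, f = {f:.4f}, dp = {dp / 1000:.2f} kPa")
```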

  6. PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms

    CERN Document Server

    Dreher, Patrick; Hill, Chris; Gadepally, Vijay; Kuszmaul, Bradley; Kepner, Jeremy

    2016-01-01

    The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many real-world big data processing systems. The PageRank pipeline benchmark builds on existing prior scalable benchmarks (Graph500, Sort, and PageRank) to create a holistic benchmark with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. The linear algebraic nature of PageRank makes it well suited to being implemented using the GraphBLAS standard. The computations are simple enough that performance predictio...
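    Since each kernel is well defined mathematically, the PageRank stage can be conveyed by a minimal power iteration on a sparse adjacency structure. The graph, damping factor and iteration count below are illustrative choices, and this is a generic sketch rather than the benchmark's reference implementation.

```python
# Minimal power-iteration PageRank on a tiny directed graph. The graph,
# damping factor and iteration count are illustrative.
graph = {0: [1, 2], 1: [2], 2: [0], 3: [2]}    # node -> outgoing links
n = len(graph)
d = 0.85                                        # damping factor

rank = [1.0 / n] * n
for _ in range(100):                            # fixed iterations for brevity
    new = [(1.0 - d) / n] * n
    for node, targets in graph.items():
        share = d * rank[node] / len(targets)   # every node here has out-links
        for t in targets:
            new[t] += share
    rank = new

print([f"{r:.3f}" for r in rank])               # ranks sum to 1
```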

  7. Melcor benchmarking against integral severe fuel damage tests

    Energy Technology Data Exchange (ETDEWEB)

    Madni, I.K. [Brookhaven National Lab., Upton, NY (United States)

    1995-09-01

    MELCOR is a fully integrated computer code that models all phases of the progression of severe accidents in light water reactor nuclear power plants, and is being developed for the U.S. Nuclear Regulatory Commission (NRC) by Sandia National Laboratories (SNL). Brookhaven National Laboratory (BNL) has a program with the NRC to provide independent assessment of MELCOR, and a very important part of this program is to benchmark MELCOR against experimental data from integral severe fuel damage tests and predictions of that data from more mechanistic codes such as SCDAP or SCDAP/RELAP5. Benchmarking analyses with MELCOR have been carried out at BNL for five integral severe fuel damage tests, including PBF SFD 1-1, SFD 1-4, and NRU FLHT-2. This paper summarizes these analyses and their role in identifying areas of modeling strengths and weaknesses in MELCOR.

  8. Methodical aspects of benchmarking using in Consumer Cooperatives trade enterprises activity

    OpenAIRE

    2013-01-01

    The aim of the article. The aim of this article is to substantiate the main types of benchmarking used in the activity of Consumer Cooperatives trade enterprises; to highlight the main advantages and drawbacks of benchmarking use; and to present the authors' view on the expediency of using the highlighted forms of benchmarking organization in the activity of trade enterprises of Consumer Cooperatives in Ukraine. The results of the analysis. Under modern conditions of economic relations development and business globalizatio...

  9. Developing a benchmark for emotional analysis of music

    Science.gov (United States)

    Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field has rapidly expanded in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, a MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution). Using DEAM, we organized the ‘Emotion in Music’ task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature-sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent neural network based approaches combined with large feature-sets work best for dynamic MER. PMID:28282400

  10. Developing a benchmark for emotional analysis of music.

    Science.gov (United States)

    Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field has rapidly expanded in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, a MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature-sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent neural network based approaches combined with large feature-sets work best for dynamic MER.

  11. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we did see potential solutions to some of our "top 10" issues; (2) we have an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes; (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) desires to participate in our training and to provide feedback on procedures; (2) they welcomed the opportunity to provide feedback on working with NASA.

  12. Benchmarking of human resources management

    OpenAIRE

    David M. Akinnusi

    2008-01-01

    This paper reviews the role of human resource management (HRM) which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HR...

  13. BENCHMARKING LEARNER EDUCATION USING ONLINE BUSINESS SIMULATION

    Directory of Open Access Journals (Sweden)

    Alfred H. Miller

    2016-06-01

    Full Text Available For programmatic accreditation by the Accreditation Council of Business Schools and Programs (ACBSP), business programs are required to meet STANDARD #4, Measurement and Analysis of Student Learning and Performance. Business units must demonstrate that outcome assessment systems are in place, using documented evidence that shows how the results are being used to further develop or improve the academic business program. The Higher Colleges of Technology, a 17-campus federal university in the United Arab Emirates, differentiates its applied degree programs through a 'learning by doing ethos,' which permeates the entire curricula. This paper documents benchmarking of education for managing innovation. Using business simulation, Bachelor of Business Year 3 learners in a business strategy class explored, through a simulated environment, the following functional areas: research and development, production, and marketing of a technology product. Student teams were required to use finite resources and compete against other student teams in the same universe. The study employed an instrument developed in a 60-sample pilot study of business simulation learners, against which subsequent learners participating in online business simulation could be benchmarked. The results showed incremental improvement in the program due to changes made in assessment strategies, including the oral defense.

  14. EVA Health and Human Performance Benchmarking Study

    Science.gov (United States)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses for humans working inside different EVA suits doing functional tasks under the appropriate simulated reduced gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as shirtsleeves, using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits, and different suit configurations (e.g., varied pressure, mass, CG), may be reliably compared in subsequent tests. Results will also inform fitness-for-duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  15. Implementing an Internal Development Process Benchmark Using PDM-Data

    OpenAIRE

    Roelofsen, J.; Fuchs, S. D.; Fuchs, D. K.; Lindemann, U.

    2009-01-01

    This paper introduces the concept for an internal development process benchmark using PDM-data. The analysis of the PDM-data at a company is used to compare development work at three different locations across Europe. The concept of a tool implemented at the company is shown as well as exemplary analyses carried out by this tool. The interpretation portfolio provided to support the interpretation of the generated charts is explained and different types of reports derived from the ...

  16. Methods and Techniques Used to Convey Total System Performance Assessment Analyses and Results for Site Recommendation at Yucca Mountain, Nevada, USA

    Energy Technology Data Exchange (ETDEWEB)

    P. D. Mattie; J. A. McNeish; D. S. Sevougian; R. W. Andrews

    2001-04-13

    Total System Performance Assessment (TSPA) is used as a key decision-making tool for the potential geologic repository for high level radioactive waste at Yucca Mountain, Nevada, USA. Because of the complexity and uncertainty involved in a post-closure performance assessment, an important goal is to produce a transparent document describing the assumptions, the intermediate steps, the results, and the conclusions of the analyses. An important objective for a TSPA analysis is to illustrate confidence in performance projections of the potential repository given a complex system of interconnected process models, data, and abstractions. The methods and techniques used for the recent TSPA analyses demonstrate an effective process to portray complex models and results with transparency and credibility.

  17. Methods and Techniques Used to Convey Total System Performance Assessment Analyses and Results for Site Recommendation at Yucca Mountain, Nevada, USA

    Energy Technology Data Exchange (ETDEWEB)

    Mattie, Patrick D.; McNeish, Jerry A.; Sevougian, S. David [Duke Engineering and Services, Las Vegas, NV (United States); Andrews, Robert W. [Bechtel SAIC Company, Las Vegas, NV (United States)

    2001-07-01

    Total System Performance Assessment (TSPA) is used as a key decision-making tool for the potential geologic repository of high level radioactive waste at Yucca Mountain, Nevada USA. Because of the complexity and uncertainty involved in a post-closure performance assessment, an important goal is to produce a transparent document describing the assumptions, the intermediate steps, the results, and the conclusions of the analyses. An important objective for a TSPA analysis is to illustrate confidence in performance projections of the potential repository given a complex system of interconnected process models, data, and abstractions. The methods and techniques used for the recent TSPA analyses demonstrate an effective process to portray complex models and results with transparency and credibility.

  18. Lessons learned for participation in recent OECD-NEA reactor physics and thermalhydraulic benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Novog, D.R.; Leung, K.H.; Ball, M. [McMaster Univ., Dept. of Engineering Physics, Hamilton, Ontario (Canada)

    2013-07-01

    Over the last 6 years the OECD-NEA has initiated a series of computational benchmarks in the fields of reactor physics and thermalhydraulics. Within this context, McMaster University has been a key contributor and has applied several state-of-the-art tools including TSUNAMI, DRAGON, ASSERT, STAR-CCM+, RELAP and TRACE. Considering the tremendous amount of international participation in these benchmarks, there were many lessons, both technical and non-technical, that should be shared. This paper presents a summary of the benchmarks, the results and contributions from McMaster, and the authors' opinion on the overall conclusions gained from these extensive benchmarks. The benchmarks discussed in this paper include the Uncertainty Analysis in Modelling (UAM), the BWR fine mesh bundle test (BFBT), the PWR Subchannel Boiling Test (PSBT), the MATiS mixing experiment and the IAEA supercritical water benchmarks on heat transfer and stability. (author)

  19. [Selection of a statistical model for the evaluation of the reliability of the results of toxicological analyses. II. Selection of our statistical model for the evaluation].

    Science.gov (United States)

    Antczak, K; Wilczyńska, U

    1980-01-01

    Part II presents a statistical model devised by the authors for evaluating the results of toxicological analyses. The model includes: 1. Establishment of a reference value, based on our own measurements taken by two independent analytical methods. 2. Selection of laboratories, based on the deviation of the obtained values from the reference ones. 3. Evaluation of subsequent quality controls and particular laboratories, based on analysis of variance, Student's t-test and the differences test.
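    A minimal sketch of step 2, selecting laboratories by the deviation of their results from the reference value, using a one-sample Student's t statistic as in step 3; the reference value, critical value and measurements below are invented examples, not data from the study.

```python
from math import sqrt
from statistics import mean, stdev

# Sketch of laboratory selection by deviation from a reference value,
# using a one-sample t statistic. All numbers are invented examples.
reference = 50.0      # reference concentration, e.g. ug/L
t_critical = 2.262    # two-sided, alpha = 0.05, df = 9 (n = 10)

labs = {
    "lab_1": [49.8, 50.4, 50.1, 49.5, 50.2, 49.9, 50.3, 50.0, 49.7, 50.1],
    "lab_2": [52.1, 53.0, 52.4, 52.8, 51.9, 52.6, 52.2, 53.1, 52.5, 52.7],
}

for name, values in labs.items():
    t = (mean(values) - reference) / (stdev(values) / sqrt(len(values)))
    verdict = "accepted" if abs(t) < t_critical else "rejected (biased)"
    print(f"{name}: mean = {mean(values):.2f}, t = {t:+.2f} -> {verdict}")
```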

  20. Discrepancies in Communication Versus Documentation of Weight-Management Benchmarks

    Science.gov (United States)

    Turer, Christy B.; Barlow, Sarah E.; Montaño, Sergio; Flores, Glenn

    2017-01-01

    To examine gaps in communication versus documentation of weight-management clinical practices, communication was recorded during primary care visits with 6- to 12-year-old overweight/obese Latino children. Communication/documentation content was coded by 3 reviewers using communication transcripts and health-record documentation. Discrepancies in communication/documentation content codes were resolved through consensus. Bivariate/multivariable analyses examined factors associated with discrepancies in benchmark communication/documentation. Benchmarks were neither communicated nor documented in up to 42% of visits, and communicated but not documented or documented but not communicated in up to 20% of visits. Lowest benchmark performance rates were for laboratory studies (35%) and nutrition/weight-management referrals (42%). In multivariable analysis, overweight (vs obesity) was associated with 1.6 more discrepancies in communication versus documentation (P = .03). Many weight-management benchmarks are not met, not documented, or performed without being communicated. Enhanced communication with families and documentation in health records may promote lifestyle changes in overweight children and higher quality care for overweight children in primary care.

  1. Benchmarking an Unstructured-Grid Model for Tsunami Current Modeling

    Science.gov (United States)

    Zhang, Yinglong J.; Priest, George; Allan, Jonathan; Stimely, Laura

    2016-12-01

    We present model results derived from a tsunami current benchmarking workshop held by the NTHMP (National Tsunami Hazard Mitigation Program) in February 2015. Modeling was undertaken using our own 3D unstructured-grid model that has been previously certified by the NTHMP for tsunami inundation. Results for two benchmark tests are described here, including: (1) vortex structure in the wake of a submerged shoal and (2) impact of tsunami waves on Hilo Harbor in the 2011 Tohoku event. The modeled current velocities are compared with available lab and field data. We demonstrate that the model is able to accurately capture the velocity field in the two benchmark tests; in particular, the 3D model gives a much more accurate wake structure than the 2D model for the first test, with the root-mean-square error and mean bias no more than 2 cm s{sup -1} and 8 mm s{sup -1}, respectively, for the modeled velocity.
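    The error measures quoted, root-mean-square error and mean bias between modeled and measured velocities, are straightforward to compute; the sketch below uses invented paired samples purely to show the two definitions.

```python
from math import sqrt

# RMSE and mean bias between modeled and measured current velocities (m/s).
# The paired samples are invented, purely to show the two definitions.
modeled  = [0.52, 0.61, 0.48, 0.70, 0.55]
measured = [0.50, 0.63, 0.47, 0.68, 0.57]

errors = [m - o for m, o in zip(modeled, measured)]
rmse = sqrt(sum(e * e for e in errors) / len(errors))
bias = sum(errors) / len(errors)

print(f"RMSE = {rmse * 100:.1f} cm/s, mean bias = {bias * 1000:.1f} mm/s")
```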

  2. VENUS-F: A fast lead critical core for benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kochetkov, A.; Wagemans, J.; Vittiglio, G. [SCK.CEN, Boeretang 200, 2400 Mol (Belgium)

    2011-07-01

    The zero-power thermal neutron water-moderated facility VENUS at SCK-CEN has been extensively used for benchmarking in the past. In accordance with GEN-IV design tasks (fast reactor systems and accelerator driven systems), the VENUS facility was modified in 2007-2010 into the fast neutron facility VENUS-F with solid core components. This paper introduces the GUINEVERE and FREYA projects, which are being conducted at the VENUS-F facility, and it presents the measurement results obtained at the first critical core. Throughout the projects, other fast lead benchmarks will also be investigated. The measurement results of the different configurations can all be used as fast neutron benchmarks. (authors)

  3. JACOB: a dynamic database for computational chemistry benchmarking.

    Science.gov (United States)

    Yang, Jack; Waller, Mark P

    2012-12-21

    JACOB (just a collection of benchmarks) is a database that contains four diverse benchmark studies, which in turn include 72 data sets with a total of 122,356 individual results. The database is constructed upon a dynamic web framework that allows users to retrieve data from the database via predefined categories. Additional flexibility is made available via user-defined text-based queries. Requested sets of results are then automatically presented as bar graphs, with the parameters of the graphs being controllable via the URL. JACOB is currently available at www.wallerlab.org/jacob.

  4. Benchmarking Data Sets for the Evaluation of Virtual Ligand Screening Methods: Review and Perspectives.

    Science.gov (United States)

    Lagarde, Nathalie; Zagury, Jean-François; Montes, Matthieu

    2015-07-27

    Virtual screening methods are commonly used nowadays in drug discovery processes. However, to ensure their reliability, they have to be carefully evaluated. The evaluation of these methods is often realized in a retrospective way, notably by studying the enrichment of benchmarking data sets. To this purpose, numerous benchmarking data sets were developed over the years, and the resulting improvements led to the availability of high quality benchmarking data sets. However, some points still have to be considered in the selection of the active compounds, decoys, and protein structures to obtain optimal benchmarking data sets.
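    The enrichment the review centres on is typically reported as an enrichment factor at a given fraction of the ranked database: the hit rate among the top-ranked x% divided by the hit rate over the whole benchmarking data set. The sketch below computes EF at 1% for an invented screen; the simulated ranking is not taken from any real data set.

```python
import random

# Enrichment factor sketch: EF(x%) = hit rate in the top x% of the ranked
# list divided by the hit rate over the whole data set. The ranked labels
# (1 = active, 0 = decoy) simulate an invented screen of 5,000 compounds.
random.seed(0)
ranked = [1] * 20 + [0] * 480 + random.sample([1] * 30 + [0] * 4470, 4500)

fraction = 0.01
top_n = int(len(ranked) * fraction)             # top 1% = 50 compounds
hits_top = sum(ranked[:top_n])
hits_all = sum(ranked)                          # 50 actives in total

ef = (hits_top / top_n) / (hits_all / len(ranked))
print(f"EF at {fraction:.0%}: {ef:.1f}")        # 40.0 for this toy ranking
```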

  5. An Effective Approach for Benchmarking Implementation

    OpenAIRE

    B. M. Deros; Tan, J.; M.N.A. Rahman; N. A.Q.M. Daud

    2011-01-01

    Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually to increase their competitiveness in the market. The study begins with a literature review on benchmarking definitions, barriers and advantages from the implementation, and the study of benchmarking frameworks. Approach: Thirty res...

  6. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  7. Benchmarking i eksternt regnskab og revision

    DEFF Research Database (Denmark)

    Thinggaard, Frank; Kiertzner, Lars

    2001-01-01

    ... continuously in a benchmarking process. This chapter broadly examines the extent to which the benchmarking concept can justifiably be linked to external financial reporting and auditing. Section 7.1 deals with the external annual accounts, while Section 7.2 takes up the area of auditing. The final section of the chapter summarises..... the considerations on benchmarking in connection with both areas....

  8. Structural Benchmark Testing for Stirling Convertor Heater Heads

    Science.gov (United States)

    Krause, David L.; Kalluri, Sreeramesh; Bowman, Randy R.

    2007-01-01

    The National Aeronautics and Space Administration (NASA) has identified high efficiency Stirling technology for potential use on long duration Space Science missions such as Mars rovers, deep space missions, and lunar applications. For the long life times required, a structurally significant design limit for the Stirling convertor heater head is creep deformation induced even under relatively low stress levels at high material temperatures. Conventional investigations of creep behavior adequately rely on experimental results from uniaxial creep specimens, and much creep data is available for the proposed Inconel-718 (IN-718) and MarM-247 nickel-based superalloy materials of construction. However, very little experimental creep information is available that directly applies to the atypical thin walls, the specific microstructures, and the low stress levels. In addition, the geometry and loading conditions apply multiaxial stress states on the heater head components, far from the conditions of uniaxial testing. For these reasons, experimental benchmark testing is underway to aid in accurately assessing the durability of Stirling heater heads. The investigation supplements uniaxial creep testing with pneumatic testing of heater head test articles at elevated temperatures and with stress levels ranging from one to seven times design stresses. This paper presents experimental methods, results, post-test microstructural analyses, and conclusions for both accelerated and non-accelerated tests. The Stirling projects use the results to calibrate deterministic and probabilistic analytical creep models of the heater heads to predict their life times.
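    The abstract does not state the creep model used for calibration; a common deterministic starting point for such analyses is a Norton power law, in which the steady-state creep rate scales as a power of stress with Arrhenius temperature dependence. The sketch below evaluates that form with placeholder constants, purely to illustrate the stress sensitivity probed by the accelerated tests.

```python
from math import exp

# Norton power-law creep sketch: edot = A * sigma^n * exp(-Q / (R * T)).
# A, n and Q are placeholders, not calibrated IN-718 or MarM-247 values.
R = 8.314                        # gas constant, J/(mol K)
A, n, Q = 1.0e-2, 5.0, 3.0e5     # illustrative constants only

def creep_rate(sigma_mpa, T_kelvin):
    """Steady-state creep strain rate (1/s) for stress given in MPa."""
    return A * sigma_mpa ** n * exp(-Q / (R * T_kelvin))

# Benchmark tests ran at one to seven times design stress levels.
for factor in (1, 3, 7):
    rate = creep_rate(sigma_mpa=20.0 * factor, T_kelvin=1050.0)
    print(f"{factor}x design stress -> creep rate = {rate:.2e} 1/s")
```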

  9. Developing Benchmarks for Solar Radio Bursts

    Science.gov (United States)

    Biesecker, D. A.; White, S. M.; Gopalswamy, N.; Black, C.; Domm, P.; Love, J. J.; Pierson, J.

    2016-12-01

    Solar radio bursts can interfere with radar, communication, and tracking signals. In severe cases, radio bursts can inhibit the successful use of radio communications and disrupt a wide range of systems that are reliant on Position, Navigation, and Timing services on timescales ranging from minutes to hours across wide areas on the dayside of Earth. The White House's Space Weather Action Plan has asked for solar radio burst intensity benchmarks for an event occurrence frequency of 1 in 100 years and also for a theoretical maximum intensity benchmark. The solar radio benchmark team was also asked to define the wavelength/frequency bands of interest. The benchmark team developed preliminary (phase 1) benchmarks for the VHF (30-300 MHz), UHF (300-3000 MHz), GPS (1176-1602 MHz), F10.7 (2800 MHz), and microwave (4000-20000 MHz) bands. The preliminary benchmarks were derived based on previously published work. Limitations in the published work will be addressed in phase 2 of the benchmark process. In addition, deriving theoretical maxima requires additional work, where this is even possible, in order to meet the Action Plan objectives. In this presentation, we will present the phase 1 benchmarks and the basis used to derive them. We will also present the work that needs to be done in order to complete the final, or phase 2, benchmarks.

  10. Benchmarking for controllere: Metoder, teknikker og muligheder

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels; Dietrichson, Lars

    2008-01-01

    The article sharpens the focus on the concept of benchmarking by presenting and discussing its different facets. Four different applications of benchmarking are described in order to show the breadth of the concept and the importance of clarifying the purpose of a benchmarking project....., before getting started. The difference between results benchmarking and process benchmarking is treated, after which the use of internal and external benchmarking, respectively, is discussed. Finally, the use of benchmarking in budgeting and budget follow-up is introduced.

  11. Establishing benchmarks and metrics for utilization management.

    Science.gov (United States)

    Melanson, Stacy E F

    2014-01-01

    The changing environment of healthcare reimbursement is rapidly leading to a renewed appreciation of the importance of utilization management in the clinical laboratory. The process of benchmarking of laboratory operations is well established for comparing organizational performance to other hospitals (peers) and for trending data over time through internal benchmarks. However, there are relatively few resources available to assist organizations in benchmarking for laboratory utilization management. This article will review the topic of laboratory benchmarking with a focus on the available literature and services to assist in managing physician requests for laboratory testing.

  12. The level 1 and 2 specification for parallel benchmark and a benchmark test of scalar-parallel computer SP2 based on the specifications

    Energy Technology Data Exchange (ETDEWEB)

    Orii, Shigeo [Japan Atomic Energy Research Inst., Tokyo (Japan)

    1998-06-01

    A benchmark specification for the performance evaluation of parallel computers for numerical analysis is proposed. The Level 1 benchmark, a conventional benchmark based on processing time, measures the performance of a computer running a code. The Level 2 benchmark proposed in this report explains the reasons for that performance. As an example, the scalar-parallel computer SP2 is evaluated with this benchmark specification for a molecular dynamics code. The evaluation shows that the main factors limiting parallel performance are the maximum bandwidth and the start-up time of communication between nodes. In particular, the start-up time is proportional not only to the number of processors but also to the number of particles. (author)
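
    The Level 2 decomposition described above is consistent with the usual linear (Hockney) model of point-to-point communication time, quoted here for reference; the report's finding is that the start-up term, rather than the bandwidth term, dominates as processor and particle counts grow:

```latex
t_{\mathrm{comm}}(m) \;=\; t_{0} \;+\; \frac{m}{B}
```

    where $m$ is the message size, $t_{0}$ the start-up (latency) time, and $B$ the maximum bandwidth between nodes.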

  13. Benchmarking analogue models of brittle thrust wedges

    Science.gov (United States)

    Schreurs, Guido; Buiter, Susanne J. H.; Boutelier, Jennifer; Burberry, Caroline; Callot, Jean-Paul; Cavozzi, Cristian; Cerca, Mariano; Chen, Jian-Hong; Cristallini, Ernesto; Cruden, Alexander R.; Cruz, Leonardo; Daniel, Jean-Marc; Da Poian, Gabriela; Garcia, Victor H.; Gomes, Caroline J. S.; Grall, Céline; Guillot, Yannick; Guzmán, Cecilia; Hidayah, Triyani Nur; Hilley, George; Klinkmüller, Matthias; Koyi, Hemin A.; Lu, Chia-Yu; Maillot, Bertrand; Meriaux, Catherine; Nilfouroushan, Faramarz; Pan, Chang-Chih; Pillot, Daniel; Portillo, Rodrigo; Rosenau, Matthias; Schellart, Wouter P.; Schlische, Roy W.; Take, Andy; Vendeville, Bruno; Vergnaud, Marine; Vettori, Matteo; Wang, Shih-Hsien; Withjack, Martha O.; Yagupsky, Daniel; Yamada, Yasuhiro

    2016-11-01

    We performed a quantitative comparison of brittle thrust wedge experiments to evaluate the variability among analogue models and to appraise the reproducibility and limits of model interpretation. Fifteen analogue modeling laboratories participated in this benchmark initiative. Each laboratory received a shipment of the same type of quartz and corundum sand, and all laboratories adhered to a stringent model building protocol and used the same type of foil to cover the base and sidewalls of the sandbox. Sieve structure, sifting height, filling rate, and details on off-scraping of excess sand followed prescribed procedures. Our analogue benchmark shows that even for simple plane-strain experiments with prescribed stringent model construction techniques, quantitative model results show variability, most notably for surface slope, thrust spacing and the number of forward and backthrusts. One of the sources of the variability in model results is related to slight variations in how sand is deposited in the sandbox. Small changes in sifting height, sifting rate, and scraping will result in slightly heterogeneous material bulk densities, which will affect the mechanical properties of the sand and will result in lateral and vertical differences in peak and boundary friction angles, as well as in cohesion values, once the model is constructed. Initial variations in basal friction are inferred to play the most important role in causing model variability. Our comparison shows that the human factor plays a decisive role, and even when one modeler repeats the same experiment, quantitative model results still show variability. Our observations highlight the limits of up-scaling quantitative analogue model results to nature or of making comparisons with numerical models. The frictional behavior of sand is highly sensitive to small variations in material state or experimental set-up, and hence it will remain difficult to scale quantitative results such as the number of thrusts or thrust spacing.

  14. Benchmarking Implementations of Functional Languages with ``Pseudoknot'', a Float-Intensive Benchmark

    NARCIS (Netherlands)

    Hartel, P.H.; Feeley, M.; Alt, M.; Augustsson, L.

    1996-01-01

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked.

  15. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    Science.gov (United States)

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  16. Benchmarking Implementations of Functional Languages with "Pseudoknot", a float-intensive benchmark

    NARCIS (Netherlands)

    Hartel, Pieter H.; Feeley, M.; Alt, M.; Augustsson, L.

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked.

  17. Numerical simulation of the RAMAC benchmark test

    Energy Technology Data Exchange (ETDEWEB)

    Leblanc, J.E.; Sugihara, M.; Fujiwara, T. [Nagoya Univ. (Japan). Dept. of Aerospace Engineering; Nusca, M. [Nagoya Univ. (Japan). Dept. of Aerospace Engineering; U.S. Army Research Lab., Ballistics and Weapons Concepts Div., AMSRL-WM-BE, Aberdeen Proving Ground, MD (United States); Wang, X. [Nagoya Univ. (Japan). Dept. of Aerospace Engineering; School of Mechanical and Production Engineering, Nanyang Technological Univ. (Singapore); Seiler, F. [Nagoya Univ. (Japan). Dept. of Aerospace Engineering; French-German Research Inst. of Saint-Louis, ISL, Saint-Louis (France)

    2000-11-01

    Numerical simulations of the same RAMAC geometry and boundary conditions by different numerical and physical models highlight the variety of possible solutions and the strong effect of the chemical kinetics model on the solution. The benchmark test was defined and announced within the community of RAMAC researchers. Three laboratories undertook the project. The numerical simulations include Navier-Stokes and Euler simulations with various levels of physical models and equations of state. The non-reactive part of the simulation produced similar steady-state results in the three simulations. The chemically reactive part of the simulation produced widely different outcomes. The original experimental data and experimental conditions are presented. A description of each computer code and the resulting flowfield is included, and the codes and their results are compared. The most critical choice for the simulation was the chemical kinetics model. (orig.)

  18. PORTFOLIO COMPOSITION WITH MINIMUM VARIANCE: COMPARISON WITH MARKET BENCHMARKS

    Directory of Open Access Journals (Sweden)

    Daniel Menezes Cavalcante

    2016-07-01

    Portfolio optimization strategies are advocated as being able to allow the composition of stock portfolios that provide returns above market benchmarks. This study aims to determine whether, in fact, portfolios based on the minimum variance strategy, optimized according to Modern Portfolio Theory, are able to achieve earnings above market benchmarks in Brazil. Time series of 36 securities traded on the BM&FBOVESPA were analyzed over a long period (1999-2012), with sample windows of 12, 36, 60 and 120 monthly observations. The results indicated that the minimum variance portfolio's performance is superior to the market benchmarks (CDI and IBOVESPA) in terms of return and risk-adjusted return, especially over medium- and long-term investment horizons.
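
    For reference, the unconstrained global minimum-variance portfolio of Modern Portfolio Theory has the closed form $w = \Sigma^{-1}\mathbf{1} / (\mathbf{1}^{\top}\Sigma^{-1}\mathbf{1})$. The sketch below computes it from a window of returns; it is a minimal illustration on simulated data (no short-selling constraints or transaction costs), not the study's actual estimation code.

```python
import numpy as np

def min_variance_weights(returns):
    """Global minimum-variance weights w = S^-1 1 / (1' S^-1 1),
    with S the sample covariance matrix of the asset returns."""
    cov = np.cov(returns, rowvar=False)   # assets x assets
    ones = np.ones(cov.shape[0])
    siv = np.linalg.solve(cov, ones)      # S^-1 1
    return siv / (ones @ siv)

# Example on simulated data: a 36-month window of 5 assets,
# mirroring one of the sample-window lengths used in the study.
rng = np.random.default_rng(0)
w = min_variance_weights(rng.normal(0.01, 0.05, size=(36, 5)))
assert abs(w.sum() - 1.0) < 1e-9          # weights sum to one
```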

  19. 2009 South American benchmarking study: natural gas transportation companies

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, Nathalie [Gas TransBoliviano S.A. (Bolivia); Walter, Juliana S. [TRANSPETRO, Rio de Janeiro, RJ (Brazil)

    2009-07-01

    In the current business environment, large corporations are constantly seeking to adapt their strategies. Benchmarking is an important tool for continuous improvement and decision-making. Benchmarking is a methodology that determines which aspects are the most important to improve upon; it proposes establishing a competitive parameter through an analysis of the best practices and processes, applying continuous improvement driven by the best organizations in their class. At the beginning of 2008, GTB (Gas TransBoliviano S.A.) contacted several South American gas transportation companies to carry out a regional benchmarking study in 2009. In this study, the key performance indicators of South American companies whose realities are similar, for example, in terms of prices, availability of labor, and community relations, will be compared. Within this context, such a comparative evaluation among natural gas transportation companies is becoming an essential management instrument to help with decision-making. (author)

  1. Application of the results of pipe stress analyses into fracture mechanics defect analyses for welds of nuclear piping components; Uebernahme der Ergebnisse von Rohrsystemanalysen (Spannungsanalysen) fuer bruchmechanische Fehlerbewertungen fuer Schweissnaehte an Rohrleitungsbauteilen in kerntechnischen Anlagen

    Energy Technology Data Exchange (ETDEWEB)

    Dittmar, S.; Neubrech, G.E.; Wernicke, R. [TUeV Nord SysTec GmbH und Co.KG (Germany); Rieck, D. [IGN Ingenieurgesellschaft Nord mbH und Co.KG (Germany)

    2008-07-01

    For the fracture mechanics assessment of postulated or detected crack-like defects in welds of piping systems, it is necessary to know the stresses in the un-cracked component normal to the crack plane. Results of piping stress analyses may be used if these are evaluated for the locations of the welds in the piping system. Using stress enhancement factors (stress indices, stress factors), the needed stress components are calculated from the component-specific sectional loads (forces and moments). For this procedure, the tabulated stress enhancement factors given in the standards (ASME Code, German KTA regulations) for the determination and limitation of the effective stresses are not always and immediately adequate for the calculation of the stress component normal to the crack plane. The contribution shows fundamental possibilities and validity limits for adopting the results of piping system analyses for the fracture mechanics evaluation of axial and circumferential defects in welded joints, with special emphasis on typical piping system components (straight pipe, elbow, pipe fitting, T-joint). The lecture is intended to contribute to the standardization of a code-compliant and task-related use of piping system analysis results for fracture mechanics failure assessment.

  2. Effects of exposure imprecision on estimation of the benchmark dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    2004-01-01

    The benchmark dose (BMD) approach is one of the most widely used methods for the development of exposure limits. An important advantage of this approach is that it can be applied to observational data. However, in this type of data, exposure markers are seldom measured without error. It is shown that, if the exposure error is ignored, then the benchmark approach produces results that are biased toward higher and less protective levels. It is therefore important to take exposure measurement error into account when calculating benchmark doses. Methods that allow this adjustment are described and illustrated in data from an epidemiological study...
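
    For context, the benchmark dose is defined implicitly from the fitted dose-response model. In one standard (additional-risk) formulation, for an abnormality probability $P(d)$ and a chosen benchmark response (BMR), the BMD solves:

```latex
P(\mathrm{BMD}) - P(0) = \mathrm{BMR},
\qquad \mathrm{BMDL} = \text{lower confidence limit of } \mathrm{BMD}
```

    Non-differential measurement error in the exposure $d$ flattens the estimated dose-response curve, which is why the fitted BMD (and BMDL) drifts upward when the error is ignored.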

  3. Benchmark of Different Electromagnetic Codes for the High Frequency Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Kai Tian, Haipeng Wang, Frank Marhauser, Guangfeng Cheng, Chuandong Zhou

    2009-05-01

    In this paper, we present benchmarking results for high-class 3D electromagnetic (EM) codes used in designing RF cavities today. These codes include Omega3P [1], VORPAL [2], CST Microwave Studio [3], Ansoft HFSS [4], and ANSYS [5]. Two spherical cavities are selected as the benchmark models. We have compared not only the accuracy of resonant frequencies, but also that of surface EM fields, which are critical for superconducting RF cavities. By removing degenerate modes, we calculate all the resonant modes up to 10 GHz with similar mesh densities, so that the geometry approximation and field interpolation errors related to the wavelength can be observed.

  4. Atmospheric fluidized bed combustion (AFBC) plants: A performance benchmarking study

    Energy Technology Data Exchange (ETDEWEB)

    Fuller, J. A.; Beavers, H.; Bonk, D. [West Virginia University, College of Business and Economics, Division of Business Administration, Morgantown, WV (United States)

    2004-03-31

    Data from a fluidized bed boiler survey distributed during the spring of 2000 to gather data for developing atmospheric fluidized bed combustion (AFBC) performance benchmarks are analyzed. The survey was sent to members of the Council of Industrial Boiler Owners; 35 surveys were usable for analysis. A total of 18 benchmarks were considered. While the results did not permit a definitive set of conclusions, the survey was successful in providing practical information to assist plant owners, operators and developers in understanding their operations and in assessing potential solutions or establishing preventative maintenance programs. 36 refs., 2 tabs.

  5. Corporate social responsibility benchmarking. The case of galician firms

    Directory of Open Access Journals (Sweden)

    Encarnación González Vázquez

    2011-12-01

    In this paper we review the concept of corporate social responsibility. Subsequently, we analyze the possibilities and problems of the use of benchmarking in CSR by analyzing the latest research that has developed a method of benchmarking. From this analysis we propose a homogeneous indicator that assesses 68 aspects related to the various stakeholders involved in CSR. It also provides information on the importance attached by respondents to these aspects. The results for each of the 5 sectors considered show the areas in which the work in CSR is greatest and others where improvement is needed.

  6. On a new benchmark for the simulation of saltwater intrusion

    Science.gov (United States)

    Stoeckl, Leonard; Graf, Thomas

    2015-04-01

    To date, many different benchmark problems for density-driven flow are available. Benchmarks are necessary to validate numerical models. The benchmark by Henry (1964) measures a saltwater wedge intruding into a freshwater aquifer in a rectangular model. The Henry (1964) problem of saltwater intrusion is one of the most applied benchmarks in hydrogeology. Modelling saltwater intrusion will be of major importance in the future, because the impacts of groundwater overexploitation, climate change and sea-level rise are of key concern. The worthiness of the Henry (1964) problem was questioned by Simpson and Clement (2003), who compared density-coupled and density-uncoupled simulations. Density-uncoupling was achieved by neglecting density effects in the governing equations and considering density effects only in the flow boundary conditions. As both of their simulations showed similar results, Simpson and Clement (2003) concluded that flow patterns of the Henry (1964) problem are largely dictated by the applied flow boundary conditions and that density-dependent effects are not adequately represented in the Henry (1964) problem. In the present study, we compare numerical simulations of the physical benchmark of a freshwater lens by Stoeckl and Houben (2012) to the Henry (1964) problem. In this new benchmark, the development of a freshwater lens under an island is simulated by applying freshwater recharge to the model top. Results indicate that density-uncoupling significantly alters the flow patterns of fresh- and saltwater. This leads to the conclusion that, next to the applied boundary conditions, density-dependent effects are important to correctly simulate the flow dynamics of a freshwater lens.
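
    The density coupling at issue enters through the buoyancy term of the variable-density Darcy equation; the uncoupled runs of Simpson and Clement (2003) effectively evaluate that term at a constant freshwater density. A common form of the coupled flux, with a linear mixture density, is:

```latex
\mathbf{q} = -\frac{k}{\mu}\left(\nabla p - \rho(c)\,\mathbf{g}\right),
\qquad
\rho(c) = \rho_f + (\rho_s - \rho_f)\,\frac{c}{c_s}
```

    where $\mathbf{q}$ is the Darcy flux, $k$ the permeability, $\mu$ the dynamic viscosity, $p$ the pressure, $\mathbf{g}$ the gravity vector, $c$ the salt concentration ($c_s$ at seawater), and $\rho_f$, $\rho_s$ the fresh- and saltwater densities.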

  7. Benchmarking ICRF simulations for ITER

    Energy Technology Data Exchange (ETDEWEB)

    R. V. Budny, L. Berry, R. Bilato, P. Bonoli, M. Brambilla, R.J. Dumont, A. Fukuyama, R. Harvey, E.F. Jaeger, E. Lerche, C.K. Phillips, V. Vdovin, J. Wright, and members of the ITPA-IOS

    2010-09-28

    Benchmarking of full-wave solvers for ICRF simulations is performed using plasma profiles and equilibria obtained from integrated self-consistent modeling predictions of four ITER plasmas. One is for a high-performance baseline (5.3 T, 15 MA) DT H-mode plasma. The others are for half-field, half-current plasmas of interest for the pre-activation phase, with the bulk plasma ion species being either hydrogen or He4. The predicted profiles are used by seven groups to predict the ICRF electromagnetic fields and heating profiles. Approximate agreement is achieved for the predicted heating power partitions for the DT and He4 cases. Profiles of the heating powers and electromagnetic fields are compared.

  8. NASA Indexing Benchmarks: Evaluating Text Search Engines

    Science.gov (United States)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the need to index collections of information and to search and retrieve them in a convenient manner. This study develops criteria for analytically comparing indexing and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the search engines studied. This toolkit is highly configurable and can run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium sized data collections, but show weaknesses when used for collections of 100 MB or larger. Level two search engines are recommended for data collections up to and beyond 100 MB.

  9. Evolved Gas Analyses of the Murray Formation in Gale Crater, Mars: Results of the Curiosity Rover's Sample Analysis at Mars (SAM) Instrument

    Science.gov (United States)

    Sutter, B.; McAdam, A. C.; Rampe, E. B.; Thompson, L. M.; Ming, D. W.; Mahaffy, P. R.; Navarro-Gonzalez, R.; Stern, J. C.; Eigenbrode, J. L.; Archer, P. D.

    2017-01-01

    The Sample Analysis at Mars (SAM) instrument aboard the Mars Science Laboratory rover has analyzed 13 samples from Gale Crater. All SAM-evolved gas analyses have yielded a multitude of volatiles (e.g., H2O, SO2, H2S, CO2, CO, NO, O2, HCl) [1-6]. The objectives of this work are to 1) Characterize recent evolved SO2, CO2, O2, and NO gas traces of the Murray formation mudstone, 2) Constrain sediment mineralogy/composition based on SAM evolved gas analysis (SAM-EGA), and 3) Discuss the implications of these results relative to understanding the geological history of Gale Crater.

  10. Evolved Gas Analyses of Sedimentary Materials in Gale Crater, Mars: Results of the Curiosity Rover's Sample Analysis at Mars (SAM) Instrument from Yellowknife Bay to the Stimson Formation

    Science.gov (United States)

    Sutter, B.; McAdam, A. C.; Rampe, E. B.; Ming, D. W.; Mahaffy, P. R.; Navarro-Gonzalez, R.; Stern, J. C.; Eigenbrode, J. L.; Archer, P. D.

    2016-01-01

    The Sample Analysis at Mars (SAM) instrument aboard the Mars Science Laboratory rover has analyzed 10 samples from Gale Crater. All SAM evolved gas analyses have yielded a multitude of volatiles (e.g., H2O, SO2, H2S, CO2, CO, NO, O2, HCl). The objectives of this work are to 1) Characterize the evolved H2O, SO2, CO2, and O2 gas traces of sediments analyzed by SAM through sol 1178, 2) Constrain sediment mineralogy/composition based on SAM evolved gas analysis (SAM-EGA), and 3) Discuss the implications of these results relative to understanding the geochemical history of Gale Crater.

  11. Methodical aspects of benchmarking using in Consumer Cooperatives trade enterprises activity

    Directory of Open Access Journals (Sweden)

    Yu.V. Dvirko

    2013-03-01

    The aim of this article is to substantiate the main types of benchmarking used in the activity of Consumer Cooperatives trade enterprises, to highlight the main advantages and drawbacks of using benchmarking, and to present the authors' view on the expediency of using the highlighted forms of benchmarking organization in the activity of trade enterprises of Consumer Cooperatives in Ukraine. The results of the analysis are as follows. Under modern conditions of the development of economic relations and business globalization, big companies, enterprises and organizations realize the necessity of thorough and profound research into the best achievements of market participants, with their further use in their own activity. Benchmarking is the process of borrowing competitive advantages and increasing the competitiveness of Consumer Cooperatives trade enterprises by researching, learning and adapting the best methods of realizing business processes, with the purpose of increasing their effectiveness and better satisfying societal needs. The main goals of using benchmarking in Consumer Cooperatives are: increasing the level of needs satisfaction by improving product quality, shortening goods transportation terms and improving service quality; strengthening enterprise potential, competitiveness and image; and generating and implementing new ideas and innovative decisions in trade enterprise activity. The advantages of using benchmarking in the activity of Consumer Cooperatives trade enterprises are: adapting the parameters of enterprise functioning to market demands; gradually identifying and removing inadequacies that obstruct enterprise development; borrowing the best methods for further enterprise development; gaining competitive advantages; technological innovation; and employee motivation. The authors' classification of benchmarking comprises the following components: by cycle duration, strategic and operative...

  12. Benchmarking the performance of daily temperature homogenisation algorithms

    Science.gov (United States)

    Warren, Rachel; Bailey, Trevor; Jolliffe, Ian; Willett, Kate

    2015-04-01

    This work explores the creation of realistic synthetic data and its use as a benchmark for comparing the performance of different homogenisation algorithms on daily temperature data. Four different regions in the United States have been selected and three different inhomogeneity scenarios explored for each region. These benchmark datasets are beneficial as, unlike in the real world, the underlying truth is known a priori, thus allowing definite statements to be made about the performance of the algorithms run on them. Performance can be assessed in terms of the ability of algorithms to detect changepoints and also their ability to correctly remove inhomogeneities. The focus is on daily data, thus presenting new challenges in comparison to monthly data and pushing the boundaries of previous studies. The aims of this work are to evaluate and compare the performance of various homogenisation algorithms, aiding their improvement and enabling a quantification of the uncertainty remaining in the data even after they have been homogenised. An important outcome is also to evaluate how realistic the created benchmarks are. It is essential that any weaknesses in the benchmarks are taken into account when judging algorithm performance against them. This information in turn will help to improve future versions of the benchmarks. I intend to present a summary of this work including the method of benchmark creation, details of the algorithms run and some preliminary results. This work forms a three year PhD and feeds into the larger project of the International Surface Temperature Initiative which is working on a global scale and with monthly instead of daily data.
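
    A minimal sketch of the benchmark-creation idea described above: start from a clean synthetic daily series, implant a known step-change inhomogeneity, and score an algorithm by how close its detected changepoint lies to the implanted one. Names, parameter values, and the single-break setup are illustrative only; the real benchmarks use more realistic series and multiple inhomogeneity scenarios.

```python
import numpy as np

def make_benchmark(n_days=3650, break_day=2000, shift=-0.8, seed=1):
    """Daily series = seasonal cycle + noise, plus one known step change."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_days)
    clean = 10 + 8 * np.sin(2 * np.pi * t / 365.25) + rng.normal(0, 2, n_days)
    broken = clean.copy()
    broken[break_day:] += shift          # the inhomogeneity to be recovered
    return clean, broken, break_day

def detection_hit(detected_day, true_day, tolerance=30):
    """Score a changepoint detection against the known truth."""
    return abs(detected_day - true_day) <= tolerance
```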

  13. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report outlines techniques for extending benchmark generation products so that they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  14. 42 CFR 440.330 - Benchmark health benefits coverage.

    Science.gov (United States)

    2010-10-01

    Title 42 (Public Health), vol. 4 (2010-10-01): Services (continued), Medical Assistance Programs, Services: General Provisions, Benchmark Benefit and Benchmark-Equivalent Coverage. § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  15. Determining the sensitivity of Data Envelopment Analysis method used in airport benchmarking

    Directory of Open Access Journals (Sweden)

    Mircea BOSCOIANU

    2013-03-01

    In the last decade there have been important changes in the airport industry, caused by the liberalization of the air transportation market. Until recently, airports were considered infrastructure elements and were evaluated only by traffic values or maximum capacity. A gradual orientation towards commercial operation led to the need for other, more efficiency-oriented methods of evaluation. The existing methods for assessing the efficiency of other production units were not suitable for airports, due to the specific features and high complexity of airport operations. In recent years, several papers have proposed Data Envelopment Analysis as a method for assessing operational efficiency in order to conduct benchmarking. This method offers the possibility of dealing with a large number of variables of different types, which is its main advantage and also recommends it as a good benchmarking tool for airport management. The goal of this paper is to determine the sensitivity of this method in relation to its inputs and outputs. A Data Envelopment Analysis is conducted for 128 airports worldwide, in both input- and output-oriented measures, and the results are analysed against variations in some inputs and outputs. Possible weaknesses of using DEA for assessing airport performance are revealed and analysed against the method's advantages.

  16. Synergetic effect of benchmarking competitive advantages

    Directory of Open Access Journals (Sweden)

    N.P. Tkachova

    2011-12-01

    The essence of synergetic competitive benchmarking is analyzed. A classification of the types of synergies is developed. The sources of synergy in benchmarking of competitive advantages are determined. A methodological framework for defining synergy in the formation of competitive advantage is proposed.

  17. Synergetic effect of benchmarking competitive advantages

    OpenAIRE

    N.P. Tkachova; P.G. Pererva

    2011-01-01

    The essence of synergetic competitive benchmarking is analyzed. A classification of the types of synergies is developed. The sources of synergy in benchmarking of competitive advantages are determined. A methodological framework for defining synergy in the formation of competitive advantage is proposed.

  18. Benchmarking set for domestic smart grid management

    NARCIS (Netherlands)

    Bosman, M.G.C.; Bakker, Vincent; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    In this paper we propose a benchmark for domestic smart grid management. It consists of an in-depth description of a domestic smart grid, in which local energy consumers, producers and buffers can be controlled. First, from this description a general benchmark framework is derived.

  19. Benchmark analysis of railway networks and undertakings

    NARCIS (Netherlands)

    Hansen, I.A.; Wiggenraad, P.B.L.; Wolff, J.W.

    2013-01-01

    Benchmark analysis of railway networks and companies has been stimulated by the European policy of deregulation of transport markets, the opening of national railway networks and markets to new entrants, and the separation of infrastructure and train operation. Recent international railway benchmarking studies...

  20. Benchmark Assessment for Improved Learning. AACC Report

    Science.gov (United States)

    Herman, Joan L.; Osmundson, Ellen; Dietel, Ronald

    2010-01-01

    This report describes the purposes of benchmark assessments and provides recommendations for selecting and using benchmark assessments--addressing validity, alignment, reliability, fairness and bias and accessibility, instructional sensitivity, utility, and reporting issues. We also present recommendations on building capacity to support schools'…

  1. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own-price elasticity.

  2. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    2007-01-01

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own-price elasticity.

  3. Benchmarking Learning and Teaching: Developing a Method

    Science.gov (United States)

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  4. A Review of Flood Loss Models as Basis for Harmonization and Benchmarking.

    Directory of Open Access Journals (Sweden)

    Tina Gerl

    Risk-based approaches have been increasingly accepted and operationalized in flood risk management during recent decades. For instance, commercial flood risk models are used by the insurance industry to assess potential losses, establish the pricing of policies and determine reinsurance needs. Despite considerable progress in the development of loss estimation tools since the 1980s, loss estimates still reflect high uncertainties and disparities that often lead to questioning their quality. This requires an assessment of the validity and robustness of loss models, as it affects prioritization and investment decisions in flood risk management as well as regulatory requirements and business decisions in the insurance industry. Hence, more effort is needed to quantify uncertainties and undertake validations. Because detailed and reliable flood loss data are lacking, first-order validations are difficult to accomplish, so model comparisons in terms of benchmarking are essential. It is checked whether the models are informed by existing data and knowledge and whether the assumptions made in the models are aligned with existing knowledge. When this alignment is confirmed through validation or benchmarking exercises, the user gains confidence in the models. Before such benchmarking exercises are feasible, however, a cohesive survey of existing knowledge needs to be undertaken. With that aim, this work presents a review of flood loss (or flood vulnerability) relationships collected from the public domain and some professional sources. Our survey analyses 61 sources consisting of publications or software packages, of which 47 are reviewed in detail. This exercise results in probably the most complete review of flood loss models to date, containing nearly a thousand vulnerability functions. These functions are highly heterogeneous, and only about half of the loss models are found to be accompanied by explicit validation at the time of their proposal.

  5. A Review of Flood Loss Models as Basis for Harmonization and Benchmarking.

    Science.gov (United States)

    Gerl, Tina; Kreibich, Heidi; Franco, Guillermo; Marechal, David; Schröter, Kai

    2016-01-01

    Risk-based approaches have been increasingly accepted and operationalized in flood risk management during recent decades. For instance, commercial flood risk models are used by the insurance industry to assess potential losses, establish the pricing of policies and determine reinsurance needs. Despite considerable progress in the development of loss estimation tools since the 1980s, loss estimates still reflect high uncertainties and disparities that often lead to questioning their quality. This requires an assessment of the validity and robustness of loss models, as it affects prioritization and investment decisions in flood risk management as well as regulatory requirements and business decisions in the insurance industry. Hence, more effort is needed to quantify uncertainties and undertake validations. Because detailed and reliable flood loss data are lacking, first-order validations are difficult to accomplish, so model comparisons in terms of benchmarking are essential. It is checked whether the models are informed by existing data and knowledge and whether the assumptions made in the models are aligned with existing knowledge. When this alignment is confirmed through validation or benchmarking exercises, the user gains confidence in the models. Before such benchmarking exercises are feasible, however, a cohesive survey of existing knowledge needs to be undertaken. With that aim, this work presents a review of flood loss (or flood vulnerability) relationships collected from the public domain and some professional sources. Our survey analyses 61 sources consisting of publications or software packages, of which 47 are reviewed in detail. This exercise results in probably the most complete review of flood loss models to date, containing nearly a thousand vulnerability functions. These functions are highly heterogeneous, and only about half of the loss models are found to be accompanied by explicit validation at the time of their proposal.
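
    A vulnerability function of the kind surveyed above is, in its simplest form, a stage-damage curve mapping inundation depth to a damage ratio. The sketch below shows the mechanics with a piecewise-linear curve; the depth and ratio values are invented placeholders, not taken from any of the reviewed models.

```python
import numpy as np

# Illustrative stage-damage curve: water depth (m) -> damage ratio in [0, 1].
DEPTHS = np.array([0.0, 0.5, 1.0, 2.0, 3.0])
RATIOS = np.array([0.0, 0.15, 0.35, 0.60, 0.75])

def damage_ratio(depth_m):
    """Piecewise-linear interpolation, clamped at the curve endpoints."""
    return float(np.interp(depth_m, DEPTHS, RATIOS))

loss = damage_ratio(1.4) * 250_000    # damage ratio times asset value
```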

  6. Review of recent benchmark experiments on integral test for high energy nuclear data evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Nakashima, Hiroshi; Tanaka, Susumu; Konno, Chikara; Fukahori, Tokio; Hayashi, Katsumi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-11-01

    A survey of recent benchmark experiments on integral tests for high-energy nuclear data evaluation was carried out as part of the work of the Task Force on JENDL High Energy File Integral Evaluation (JHEFIE). In this paper the results are compiled and the status of recent benchmark experiments is described. (author)

  7. Benchmarking in Czech Higher Education: The Case of Schools of Economics

    Science.gov (United States)

    Placek, Michal; Ochrana, František; Pucek, Milan

    2015-01-01

    This article describes the use of benchmarking in universities in the Czech Republic and academics' experiences with it. It is based on research conducted among academics from economics schools in Czech public and private universities. The results identified several issues regarding the utilisation and understanding of benchmarking in the Czech…

  8. A Seafloor Benchmark for 3-dimensional Geodesy

    Science.gov (United States)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark, allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years, depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark, a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently, models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measurements are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone.

  9. Udsættelser af lejere – Udvikling og benchmarking

    DEFF Research Database (Denmark)

    Christensen, Gunvor; Jeppesen, Anders Gade; Kjær, Agnete Aslaug;

    This report examines the development in both eviction court cases and enforced evictions from 2007 to 2013. The report also contains a benchmarking analysis that estimates whether each municipality has more or fewer enforced evictions than would be expected when account is taken of, among other things... The municipalities included in the benchmarking analysis also have more enforced evictions in social housing than one would expect when account is taken of the municipalities' population base, the local housing market, and municipal characteristics such as the size of the municipality. The report was funded by the Ministry...

  10. Benchmarking DNA Metabarcoding for Biodiversity-Based Monitoring and Assessment

    KAUST Repository

    Aylagas, Eva

    2016-06-10

    Characterization of biodiversity has been extensively used to confidently monitor and assess environmental status. Yet visual morphology, traditionally and widely used for species identification in coastal and marine ecosystem communities, is tedious and entails limitations. Metabarcoding coupled with high-throughput sequencing (HTS) represents an alternative to rapidly, accurately, and cost-effectively analyze thousands of environmental samples simultaneously, and this method is increasingly used to characterize the metazoan taxonomic composition of a wide variety of environments. However, a comprehensive study benchmarking visual and metabarcoding-based taxonomic inferences that validates this technique for environmental monitoring has been lacking. Here, we compare taxonomic inferences of benthic macroinvertebrate samples of known taxonomic composition obtained using alternative metabarcoding protocols based on combinations of different DNA sources, barcodes of the mitochondrial cytochrome oxidase I gene, and amplification conditions. Our results highlight the influence of the metabarcoding protocol on the obtained taxonomic composition and suggest the better performance of an alternative 313 bp barcode over the traditional 658 bp one used for metazoan metabarcoding. Additionally, we show that a biotic index inferred from the list of macroinvertebrate taxa obtained using DNA-based taxonomic assignments is comparable to that inferred using morphological identification. Thus, our analyses prove metabarcoding valid for environmental status assessment and will contribute to accelerating the implementation of this technique in regular monitoring programs.

  11. Plasma Waves as a Benchmark Problem

    CERN Document Server

    Kilian, Patrick; Schreiner, Cedric; Spanier, Felix

    2016-01-01

    A large number of wave modes exist in a magnetized plasma. Their properties are determined by the interaction of particles and waves. In a simulation code, the correct treatment of field quantities and particle behavior is essential to correctly reproduce the wave properties. Consequently, plasma waves provide test problems that cover a large fraction of the simulation code. The large number of possible wave modes and the freedom to choose parameters make the selection of test problems time consuming and comparison between different codes difficult. This paper therefore aims to provide a selection of test problems, based on different wave modes and with well-defined parameter values, that is accessible to a large number of simulation codes to allow for easy benchmarking and cross-validation. Example results are provided for a number of plasma models. For all plasma models and wave modes that are used in the test problems, a mathematical description is provided to clarify notation and avoid possible misunderstandings.

  12. FRIB driver linac vacuum model and benchmarks

    CERN Document Server

    Durickovic, Bojan; Kersevan, Roberto; Machicoane, Guillaume

    2014-01-01

    The Facility for Rare Isotope Beams (FRIB) is a superconducting heavy-ion linear accelerator that is to produce rare isotopes far from stability for low energy nuclear science. In order to achieve this, its driver linac needs to achieve a very high beam current (up to 400 kW beam power), and this requirement makes vacuum levels of critical importance. Vacuum calculations have been carried out to verify that the vacuum system design meets the requirements. The modeling procedure was benchmarked by comparing models of an existing facility against measurements. In this paper, we present an overview of the methods used for FRIB vacuum calculations and simulation results for some interesting sections of the accelerator.

  13. Assessing reactor physics codes capabilities to simulate fast reactors on the example of the BN-600 benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, Vladimir [Scientific and Engineering Centre for Nuclear and Radiation Safety (SES NRS), Moscow (Russian Federation); Bousquet, Jeremy [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH, Garching (Germany)

    2016-11-15

    This work aims to assess the capabilities of reactor physics codes (initially validated for thermal reactors) to simulate sodium-cooled fast reactors. The BFS-62-3A critical experiment from the BN-600 Hybrid Core Benchmark Analyses was chosen for the investigation. Monte Carlo codes (KENO from SCALE and SERPENT 2.1.23) and the deterministic diffusion code DYN3D-MG are applied to calculate the neutronic parameters. It was found that the multiplication factor and reactivity effects calculated by KENO and SERPENT using the ENDF/B-VII.0 continuous-energy library are in good agreement with each other and with the measured benchmark values. Few-group macroscopic cross sections, required for DYN3D-MG, were prepared by applying different methods implemented in SCALE and SERPENT. The DYN3D-MG results for a simplified benchmark show reasonable agreement with the results from Monte Carlo calculations and the measured values. The former results are used to justify the use of DYN3D-MG for coupled deterministic analysis of sodium-cooled fast reactors.

  14. Evaluation of the applicability of the Benchmark approach to existing toxicological data. Framework: Chemical compounds in the working place

    NARCIS (Netherlands)

    Appel MJ; Bouman HGM; Pieters MN; Slob W; CSR

    2001-01-01

    Five substances in the working environment for which risk evaluations were available were selected for analysis with the benchmark approach. The critical studies were analyzed for each of these substances. The toxicological parameters examined involved both continuous and ordinal data.

  15. Simulation of hydrogen deflagration experiment – Benchmark exercise with lumped-parameter codes

    Energy Technology Data Exchange (ETDEWEB)

    Kljenak, Ivo, E-mail: ivo.kljenak@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Kuznetsov, Mikhail, E-mail: mike.kuznetsov@kit.edu [Karlsruhe Institute of Technology, Kaiserstraße 12, 76131 Karlsruhe (Germany); Kostka, Pal, E-mail: kostka@nubiki.hu [NUBIKI Nuclear Safety Research Institute, Konkoly-Thege Miklós út 29-33, 1121 Budapest (Hungary); Kubišova, Lubica, E-mail: lubica.kubisova@ujd.gov.sk [Nuclear Regulatory Authority of the Slovak Republic, Bajkalská 27, 82007 Bratislava (Slovakia); Maltsev, Mikhail, E-mail: maltsev_MB@aep.ru [JSC Atomenergoproekt, 1, st. Podolskykh Kursantov, Moscow (Russian Federation); Manzini, Giovanni, E-mail: giovanni.manzini@rse-web.it [Ricerca sul Sistema Energetico, Via Rubattino 54, 20134 Milano (Italy); Povilaitis, Mantas, E-mail: mantas.p@mail.lei.lt [Lithuania Energy Institute, Breslaujos g.3, 44403 Kaunas (Lithuania)

    2015-03-15

    Highlights: • Blind and open simulations of a hydrogen combustion experiment in a large-scale containment-like facility with different lumped-parameter codes. • Simulation of axial as well as radial flame propagation. • Confirmation of the adequacy of lumped-parameter codes for safety analyses of actual nuclear power plants. - Abstract: An experiment on hydrogen deflagration (Upward Flame Propagation Experiment – UFPE) was proposed by the Jozef Stefan Institute (Slovenia) and performed in the HYKA A2 facility at the Karlsruhe Institute of Technology (Germany). The experimental results were used to organize a benchmark exercise for lumped-parameter codes. Six organizations (JSI, AEP, LEI, NUBIKI, RSE and UJD SR) participated in the benchmark exercise, using altogether four different computer codes: ANGAR, ASTEC, COCOSYS and ECART. Both blind and open simulations were performed. In general, all the codes provided satisfactory results for the pressure increase, whereas the temperature results show a wider spread. The flame axial and radial velocity results may be considered satisfactory, given the inherent simplification of the lumped-parameter description compared to a local instantaneous description.

  16. Performance Benchmarking of Tsunami-HySEA Model for NTHMP's Inundation Mapping Activities

    Science.gov (United States)

    Macías, Jorge; Castro, Manuel J.; Ortega, Sergio; Escalante, Cipriano; González-Vida, José Manuel

    2017-08-01

    The Tsunami-HySEA model is used to perform some of the numerical benchmark problems proposed and documented in the "Proceedings and results of the 2011 NTHMP Model Benchmarking Workshop". The final aim is to obtain the approval for Tsunami-HySEA to be used in projects funded by the National Tsunami Hazard Mitigation Program (NTHMP). Therefore, this work contains the numerical results and comparisons for the five benchmark problems (1, 4, 6, 7, and 9) required for such aim. This set of benchmarks considers analytical, laboratory, and field data test cases. In particular, the analytical solution of a solitary wave runup on a simple beach, and its laboratory counterpart, two more laboratory tests: the runup of a solitary wave on a conically shaped island and the runup onto a complex 3D beach (Monai Valley) and, finally, a field data benchmark based on data from the 1993 Hokkaido Nansei-Oki tsunami.
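
    For the analytical test case (benchmark problem 1), model output is commonly checked against Synolakis' (1987) runup law for non-breaking solitary waves on a plane beach; it is quoted here for orientation, since the paper reports the comparisons rather than the formula itself:

```latex
\frac{R}{d} = 2.831\,\sqrt{\cot\beta}\,\left(\frac{H}{d}\right)^{5/4}
```

    where $R$ is the maximum runup, $H$ the offshore solitary wave height, $d$ the offshore water depth, and $\beta$ the beach slope angle.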

  17. A biosegmentation benchmark for evaluation of bioimage analysis methods

    Directory of Open Access Journals (Sweden)

    Kvilekval Kristian

    2009-11-01

    Background: We present a biosegmentation benchmark that includes infrastructure, datasets with associated ground truth, and validation methods for biological image analysis. The primary motivation for creating this resource comes from the fact that it is very difficult, if not impossible, for an end-user to choose from the wide range of segmentation methods available in the literature for a particular bioimaging problem. No single algorithm is likely to be equally effective on a diverse set of images, and each method has its own strengths and limitations. We hope that our benchmark resource will be of considerable help both to bioimaging researchers looking for novel image processing methods and to image processing researchers exploring the application of their methods to biology. Results: Our benchmark consists of different classes of images and ground truth data, ranging in scale from subcellular and cellular to tissue level, each of which poses its own set of challenges to image analysis. The associated ground truth data can be used to evaluate the effectiveness of different methods, to improve methods and to compare results. Standard evaluation methods and some analysis tools are integrated into a database framework that is available online at http://bioimage.ucsb.edu/biosegmentation/. Conclusion: This online benchmark will facilitate the integration and comparison of image analysis methods for bioimages. While the primary focus is on biological images, we believe that the dataset and infrastructure will be of interest to researchers and developers working with biological image analysis, image segmentation and object tracking in general.
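
    Segmentation results are typically scored against such ground truth with region-overlap measures, for example Jaccard and Dice; a minimal sketch is given below (the benchmark itself also integrates other evaluation methods):

```python
import numpy as np

def jaccard(pred, truth):
    """Jaccard index |A & B| / |A | B| between binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0

def dice(pred, truth):
    """Dice coefficient 2|A & B| / (|A| + |B|) between binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2 * np.logical_and(pred, truth).sum() / denom if denom else 1.0
```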

  18. Benchmarking in the Academic Departments using Data Envelopment Analysis

    Directory of Open Access Journals (Sweden)

    Mohammad M. Rayeni

    2010-01-01

    Problem statement: The purpose of this study is to analyze efficiency and benchmarking using Data Envelopment Analysis (DEA) in university departments. Benchmarking is a process of defining valid measures of performance comparison among peer decision-making units (DMUs), using them to determine the relative positions of the peer DMUs and, ultimately, establishing a standard of excellence. Approach: DEA can be regarded as a benchmarking tool, because the frontier it identifies can be regarded as an empirical standard of excellence. Once the frontier is established, a set of DMUs can be compared to it. Results: We apply benchmarking to identify the shortcomings of inefficient departments, so that they can become efficient and learn better managerial practice. Conclusion: The results indicated that 9 of the 21 departments are inefficient. The average inefficiency is 0.8516. The inefficient departments do not have an excess of teaching staff, but all of them have an excess of registered students. A shortage of research output is the most important output indicator in the inefficient departments, and it must be corrected.
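
    For reference, the input-oriented CCR efficiency score θ of a department solves the envelopment LP: minimize θ subject to Xλ ≤ θx₀, Yλ ≥ y₀, λ ≥ 0. Below is a minimal sketch with scipy; the data layout and names are invented for the example, and the study's exact DEA variant (e.g. returns-to-scale assumptions) may differ.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0.
    X: inputs (m x n DMUs), Y: outputs (s x n DMUs)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimise theta
    A_in = np.hstack([-X[:, [j0]], X])          # X@lam <= theta * x0
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # Y@lam >= y0
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, j0]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]                             # 1.0 means efficient
```

    An inefficient department's benchmark targets are then the frontier projections: θx₀ for the inputs (e.g. staff numbers) and the peer combination Yλ for the outputs.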

  19. Ground truth and benchmarks for performance evaluation

    Science.gov (United States)

    Takeuchi, Ayako; Shneier, Michael; Hong, Tsai Hong; Chang, Tommy; Scrapper, Christopher; Cheok, Geraldine S.

    2003-09-01

    Progress in algorithm development and the transfer of results to practical applications such as military robotics requires the setup of standard tasks and of standard qualitative and quantitative measurements for performance evaluation and validation. Although the evaluation and validation of algorithms have been discussed for over a decade, the research community still faces a lack of well-defined and standardized methodology. The fundamental problems include a lack of quantifiable measures of performance, a lack of data from state-of-the-art sensors in calibrated real-world environments, and a lack of facilities for conducting realistic experiments. In this research, we propose three methods for creating ground truth databases and benchmarks using multiple sensors. The databases and benchmarks will provide researchers with high quality data from suites of sensors operating in complex environments representing real problems of great relevance to the development of autonomous driving systems. At NIST, we have prototyped a High Mobility Multi-purpose Wheeled Vehicle (HMMWV) system with a suite of sensors including a Riegl ladar, GDRS ladar, stereo CCD, several color cameras, Global Position System (GPS), Inertial Navigation System (INS), pan/tilt encoders, and odometry. All sensors are calibrated with respect to each other in space and time. This allows a database of features and terrain elevation to be built. Ground truth for each sensor can then be extracted from the database. The main goal of this research is to provide ground truth databases for researchers and engineers to evaluate algorithms for effectiveness, efficiency, reliability, and robustness, thus advancing the development of algorithms.

  20. ICSBEP Benchmarks For Nuclear Data Applications

    Science.gov (United States)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  1. Plans to update benchmarking tool.

    Science.gov (United States)

    Stokoe, Mark

    2013-02-01

    The use of the current AssetMark system by hospital health facilities managers and engineers (in Australia) has decreased to a point where no activity is occurring. A number of reasons have been cited, including cost, the time required, the slowness of the process, and the level of information required. Based on current levels of activity, it would not be of any value to IHEA, or to its members, to continue with this form of AssetMark. For AssetMark to remain viable, it needs to be developed as a tool seen to be of value to healthcare facilities managers, and not just healthcare facility engineers. Benchmarking is still a very important requirement in the industry, and AssetMark can fulfil this need provided that it remains abreast of customer needs. The proposed future direction is to develop an online version of AssetMark with its current capabilities regarding the capture of data (12 Key Performance Indicators), reporting, and user interaction. The system would also provide end-users with access to live reporting features via a user-friendly web interface linked through the IHEA web page.

  2. Academic Benchmarks for Otolaryngology Leaders.

    Science.gov (United States)

    Eloy, Jean Anderson; Blake, Danielle M; D'Aguillo, Christine; Svider, Peter F; Folbe, Adam J; Baredes, Soly

    2015-08-01

    This study aimed to characterize current benchmarks for academic otolaryngologists serving in positions of leadership and identify factors potentially associated with promotion to these positions. Information regarding chairs (or division chiefs), vice chairs, and residency program directors was obtained from faculty listings and organized by degree(s) obtained, academic rank, fellowship training status, sex, and experience. Research productivity was characterized by (a) successful procurement of active grants from the National Institutes of Health and prior grants from the American Academy of Otolaryngology-Head and Neck Surgery Foundation Centralized Otolaryngology Research Efforts program and (b) scholarly impact, as measured by the h-index. Chairs had the greatest amount of experience (32.4 years) and were the least likely to have multiple degrees, with 75.8% having an MD degree only. Program directors were the most likely to be fellowship trained (84.8%). Women represented 16% of program directors, 3% of chairs, and no vice chairs. Chairs had the highest scholarly impact (as measured by the h-index) and the greatest external grant funding. This analysis characterizes the current picture of leadership in academic otolaryngology. Chairs, when compared to their vice chair and program director counterparts, had more experience and greater research impact. Women were poorly represented among all academic leadership positions.
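
    The h-index used above as the scholarly-impact measure is the largest number h such that an author has h papers with at least h citations each. A minimal sketch of the computation (Python; the citation counts are hypothetical, not data from this study):

        def h_index(citations):
            """Return the largest h such that h papers have >= h citations."""
            cites = sorted(citations, reverse=True)
            h = 0
            for rank, c in enumerate(cites, start=1):
                if c >= rank:
                    h = rank      # this paper still supports an h of `rank`
                else:
                    break
            return h

        # Five papers with these citation counts yield an h-index of 3.
        print(h_index([10, 8, 5, 2, 1]))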

  3. Benchmarking Measures of Network Influence

    Science.gov (United States)

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-01-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures. PMID:27670635
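
    The TKO idea described above can be sketched directly: simulate propagation over the time-ordered edge list with and without a given agent and take the difference in expected spread. A minimal discrete-time SIR version (Python; the dynamics, infection probability and data layout are simplifying assumptions, not the authors' implementation):

        import random

        def sir_spread(edges_by_time, seed, beta=0.3, removed=None, trials=200):
            """Mean final epidemic size of a discrete-time SIR process run over
            a time-ordered edge list, optionally with one node knocked out."""
            total = 0
            for _ in range(trials):
                infected = {seed} - {removed}
                recovered = set()
                for edges in edges_by_time:          # one edge snapshot per step
                    new = set()
                    for u, v in edges:
                        if removed in (u, v):
                            continue
                        for a, b in ((u, v), (v, u)):
                            if a in infected and b not in infected and b not in recovered:
                                if random.random() < beta:
                                    new.add(b)
                    recovered |= infected            # infected recover after one step
                    infected = new
                total += len(recovered) + len(infected)
            return total / trials

        def tko_score(edges_by_time, seed, node):
            """Temporal knockout: drop in expected spread when `node` is removed."""
            return (sir_spread(edges_by_time, seed)
                    - sir_spread(edges_by_time, seed, removed=node))

        # Toy temporal network: three time steps of edges among nodes 0..3.
        snapshots = [[(0, 1), (2, 3)], [(1, 2)], [(2, 3)]]
        print(tko_score(snapshots, seed=0, node=1))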

  4. Influences of Biodynamic and Conventional Farming Systems on Quality of Potato (Solanum tuberosum L.) Crops: Results from Multivariate Analyses of Two Long-Term Field Trials in Sweden

    Directory of Open Access Journals (Sweden)

    Lars Kjellenberg

    2015-09-01

    Full Text Available The aim of this paper was to present results from two long-term field experiments comparing potato samples from conventional farming systems with samples from biodynamic farming systems. The principal component analyses (PCA) consistently exhibited differences between potato samples from the two farming systems. According to the PCA, potato samples treated with inorganic fertilizers exhibited a variation positively related to amounts of crude protein, yield, cooking or tissue discoloration and extract decomposition. Potato samples treated according to biodynamic principles, with composted cow manure, were more positively related to traits such as Quality- and EAA-indices, dry matter content, taste quality, relative proportion of pure protein and biocrystallization value. Distinctions between years, crop rotations and cultivars used were sometimes more significant than differences between manuring systems. Grown after barley, the potato crop exhibited better quality traits than when grown after ley in both the conventional and the biodynamic farming system.

  5. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, H.C. Jr.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which could then become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  6. The Safety Attitudes Questionnaire: psychometric properties, benchmarking data, and emerging research

    Directory of Open Access Journals (Sweden)

    Boyden James

    2006-04-01

    Full Text Available Abstract Background There is widespread interest in measuring healthcare provider attitudes about issues relevant to patient safety (often called safety climate or safety culture). Here we report the psychometric properties, establish benchmarking data, and discuss emerging areas of research with the University of Texas Safety Attitudes Questionnaire. Methods Six cross-sectional surveys of health care providers (n = 10,843) in 203 clinical areas (including critical care units, operating rooms, inpatient settings, and ambulatory clinics) in three countries (USA, UK, New Zealand). Multilevel factor analyses yielded results at the clinical area level and the respondent nested within clinical area level. We report scale reliability, floor/ceiling effects, item factor loadings, inter-factor correlations, and the percentage of respondents who agree with each item and scale. Results A six-factor model of provider attitudes fit the data at both the clinical area and respondent nested within clinical area levels. The factors were: Teamwork Climate, Safety Climate, Perceptions of Management, Job Satisfaction, Working Conditions, and Stress Recognition. Scale reliability was 0.9. Provider attitudes varied greatly both within and among organizations. Results are presented to allow benchmarking among organizations and emerging research is discussed. Conclusion The Safety Attitudes Questionnaire demonstrated good psychometric properties. Healthcare organizations can use the survey to measure caregiver attitudes about six patient safety-related domains, to compare themselves with other organizations, to prompt interventions to improve safety attitudes and to measure the effectiveness of these interventions.
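
    As an illustration of the kind of scale-reliability figure reported above, Cronbach's alpha is one standard internal-consistency estimate (a sketch only; the paper does not specify this exact estimator, and the scores below are invented):

        import numpy as np

        def cronbach_alpha(scores):
            """Cronbach's alpha for an (n_respondents x n_items) matrix."""
            X = np.asarray(scores, dtype=float)
            k = X.shape[1]
            item_vars = X.var(axis=0, ddof=1).sum()   # sum of item variances
            total_var = X.sum(axis=1).var(ddof=1)     # variance of scale totals
            return k / (k - 1) * (1 - item_vars / total_var)

        # Four respondents rating three items on a 5-point scale.
        print(round(cronbach_alpha([[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3]]), 2))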

  7. How methodologic differences affect results of economic analyses: a systematic review of interferon gamma release assays for the diagnosis of LTBI.

    Directory of Open Access Journals (Sweden)

    Olivia Oxlade

    Full Text Available INTRODUCTION: Cost-effectiveness analyses (CEA) can provide useful information on how to invest limited funds; however, they are less useful if different analyses of the same intervention provide unclear or contradictory results. The objective of our study was to conduct a systematic review of methodologic aspects of CEA that evaluate Interferon Gamma Release Assays (IGRA) for the detection of Latent Tuberculosis Infection (LTBI), in order to understand how differences affect study results. METHODS: A systematic review of studies was conducted with particular focus on study quality and the variability in inputs used in the models used to assess cost-effectiveness. A common decision analysis model of the IGRA versus Tuberculin Skin Test (TST) screening strategy was developed and used to quantify the impact on predicted results of the observed differences in model inputs taken from the studies identified. RESULTS: Thirteen studies were ultimately included in the review. Several specific methodologic issues were identified across studies, including how study inputs were selected, inconsistencies in the costing approach, the utility of the QALY (Quality-Adjusted Life Year) as the effectiveness outcome, and how authors choose to present and interpret study results. When the IGRA versus TST test strategies were compared using our common decision analysis model, the predicted effectiveness largely overlapped. IMPLICATIONS: Many methodologic issues that contribute to inconsistent results and reduced study quality were identified in studies that assessed the cost-effectiveness of the IGRA test. More specific and relevant guidelines are needed in order to help authors standardize modelling approaches, inputs, assumptions and how results are presented and interpreted.

  8. Benchmark models, planes lines and points for future SUSY searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    AbdusSalam, S.S. [The Abdus Salam International Centre for Theoretical Physics, Trieste (Italy); Allanach, B.C. [Cambridge Univ. (United Kingdom). Dept. of Applied Mathematics and Theoretical Physics; Dreiner, H.K. [Bonn Univ. (DE). Bethe Center for Theoretical Physics and Physikalisches Inst.] (and others)

    2012-03-15

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  9. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.

  10. Encoding color information for visual tracking: Algorithms and benchmark.

    Science.gov (United States)

    Liang, Pengpeng; Blasch, Erik; Ling, Haibin

    2015-12-01

    While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.

  11. QFD Based Benchmarking Logic Using TOPSIS and Suitability Index

    Directory of Open Access Journals (Sweden)

    Jaeho Cho

    2015-01-01

    Full Text Available Users’ satisfaction with quality is key to successful completion of a project in relation to decision-making issues in building design solutions. This study proposed QFD (quality function deployment) based benchmarking logic of market products for building envelope solutions. The benchmarking logic is composed of QFD-TOPSIS and QFD-SI. The QFD-TOPSIS assessment model is able to evaluate users’ preferences on building envelope solutions that are distributed in the market and may allow quick achievement of knowledge. TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) provides performance improvement criteria that help define users’ target performance criteria. SI (Suitability Index) allows analysis of the suitability of a building envelope solution based on users’ required performance criteria. In Stage 1 of the case study, QFD-TOPSIS was used to benchmark the performance criteria of market envelope products. In Stage 2, a QFD-SI assessment was performed after setting user performance targets. The results of this study contribute to confirming the feasibility of QFD based benchmarking in the field of Building Envelope Performance Assessment (BEPA).
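
    The TOPSIS step above admits a compact sketch: normalise the decision matrix, weight it, and score each alternative by its relative closeness to the ideal solution (Python; the products, criteria and weights below are hypothetical, not the paper's data):

        import numpy as np

        def topsis(matrix, weights, benefit):
            """Closeness coefficients for an (alternatives x criteria) matrix.
            benefit[j] is True when criterion j is better if larger."""
            X = np.asarray(matrix, dtype=float)
            V = X / np.linalg.norm(X, axis=0) * np.asarray(weights)
            best = np.where(benefit, V.max(axis=0), V.min(axis=0))
            worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
            d_best = np.linalg.norm(V - best, axis=1)
            d_worst = np.linalg.norm(V - worst, axis=1)
            return d_worst / (d_best + d_worst)   # 1 = ideal, 0 = anti-ideal

        # Three envelope products scored on performance, cost and durability.
        cc = topsis([[7, 300, 8], [9, 450, 6], [6, 250, 9]],
                    weights=[0.5, 0.3, 0.2], benefit=[True, False, True])
        print(cc.argsort()[::-1])   # product indices ranked best first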

  12. Identifying best practice through benchmarking and outcome measurement.

    Science.gov (United States)

    Lanier, Lynne

    2004-01-01

    Collecting and analyzing various types of data are essential to identifying areas for improvement. Data collection and analysis are routinely performed in hospitals and are even required by some regulatory agencies. Realization of the full benefits, which may be achieved through collection and analysis of data, should be actively pursued to prevent a meaningless exercise in paperwork. Internal historical comparison of data may be helpful but does not achieve the ultimate goal of identifying external benchmarks in order to determine best practice. External benchmarks provide a means of comparison with similar facilities, allowing the identification of processes needing improvement. The specialty of ophthalmology presents unique practice situations that are not comparable with other specialties, making it imperative to benchmark against other facilities where quick surgical case time, efficient surgical turnover times, low infection rates, and cost containment are essential and standard operations. Important data to benchmark include efficiency data, financial data, and quality or patient outcome data. After identifying facilities that excel in certain aspects of performance, it is necessary to analyze how their procedures help them achieve these favorable results. Careful data collection and analysis lead to improved practice and patient care.

  13. Benchmarking the financial performance of local councils in Ireland

    Directory of Open Access Journals (Sweden)

    Robbins Geraldine

    2016-05-01

    Full Text Available It was over a quarter of a century ago that information from the financial statements was used to benchmark the efficiency and effectiveness of local government in the US. With the global adoption of New Public Management ideas, benchmarking practice spread to the public sector and has been employed to drive reforms aimed at improving performance and, ultimately, service delivery and local outcomes. The manner in which local authorities in OECD countries compare and benchmark their performance varies widely. The methodology developed in this paper to rate the relative financial performance of Irish city and county councils is adapted from an earlier assessment tool used to measure the financial condition of small cities in the US. Using our financial performance framework and the financial data in the audited annual financial statements of Irish local councils, we calculate composite scores for each of the thirty-four local authorities for the years 2007–13. This paper contributes composite scores that measure the relative financial performance of local councils in Ireland, as well as a full set of yearly results for a seven-year period in which local governments witnessed significant changes in their financial health. The benchmarking exercise is useful in highlighting those councils that, in relative financial performance terms, are the best/worst performers.
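
    The composite-score idea can be sketched in a few lines: orient every indicator so that higher is better, min-max scale it across councils, and average (Python; the indicator values and equal weighting are assumptions for illustration, not the paper's actual framework):

        def composite_scores(indicators):
            """indicators: dict council -> list of indicator values,
            all oriented so that higher = better. Each indicator is
            min-max scaled across councils, then averaged."""
            councils = list(indicators)
            n = len(next(iter(indicators.values())))
            scaled = {c: [] for c in councils}
            for j in range(n):
                col = [indicators[c][j] for c in councils]
                lo, hi = min(col), max(col)
                for c in councils:
                    x = (indicators[c][j] - lo) / (hi - lo) if hi > lo else 0.5
                    scaled[c].append(x)
            return {c: sum(v) / n for c, v in scaled.items()}

        # Hypothetical liquidity, surplus and collection-rate indicators.
        print(composite_scores({"Council A": [1.2, 0.05, 0.91],
                                "Council B": [0.8, -0.02, 0.95]}))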

  14. Benchmarking Diagnostic Algorithms on an Electrical Power System Testbed

    Science.gov (United States)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Wright, Stephanie

    2009-01-01

    Diagnostic algorithms (DAs) are key to enabling automated health management. These algorithms are designed to detect and isolate anomalies of either a component or the whole system based on observations received from sensors. In recent years a wide range of algorithms, both model-based and data-driven, have been developed to increase autonomy and improve system reliability and affordability. However, the lack of support to perform systematic benchmarking of these algorithms continues to create barriers for effective development and deployment of diagnostic technologies. In this paper, we present our efforts to benchmark a set of DAs on a common platform using a framework that was developed to evaluate and compare various performance metrics for diagnostic technologies. The diagnosed system is an electrical power system, namely the Advanced Diagnostics and Prognostics Testbed (ADAPT) developed and located at the NASA Ames Research Center. The paper presents the fundamentals of the benchmarking framework, the ADAPT system, description of faults and data sets, the metrics used for evaluation, and an in-depth analysis of benchmarking results obtained from testing ten diagnostic algorithms on the ADAPT electrical power system testbed.
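
    Typical metrics in such a framework include detection delay, missed detections and false alarms; a minimal scorer might look like this (Python; the run format and metric names are hypothetical, not the ADAPT framework's actual interface):

        def detection_metrics(runs):
            """runs: list of (fault_time, detect_time) pairs, where None
            means 'no fault injected' or 'no detection raised'."""
            delays, missed, false_alarms, faulty = [], 0, 0, 0
            for fault_time, detect_time in runs:
                if fault_time is None:
                    false_alarms += detect_time is not None   # alarm in a nominal run
                    continue
                faulty += 1
                if detect_time is None:
                    missed += 1                                # fault never detected
                else:
                    delays.append(detect_time - fault_time)
            return {
                "mean_detection_delay": sum(delays) / len(delays) if delays else None,
                "missed_detection_rate": missed / faulty if faulty else None,
                "false_alarms": false_alarms,
            }

        print(detection_metrics([(10.0, 12.5), (30.0, None), (None, None), (None, 4.0)]))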

  15. A proposed benchmark problem for cargo nuclear threat monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Holmes, Thomas Wesley, E-mail: twholmes@ncsu.edu [Center for Engineering Applications of Radioisotopes, Nuclear Engineering Department, North Carolina State University, Raleigh, NC 27695-7909 (United States); Calderon, Adan; Peeples, Cody R.; Gardner, Robin P. [Center for Engineering Applications of Radioisotopes, Nuclear Engineering Department, North Carolina State University, Raleigh, NC 27695-7909 (United States)

    2011-10-01

    There is currently a great deal of technical and political effort focused on reducing the risk of potential attacks on the United States involving radiological dispersal devices or nuclear weapons. This paper proposes a benchmark problem for gamma-ray and X-ray cargo monitoring, with results calculated using MCNP5, v1.51. The primary goal is to provide a benchmark problem that will allow researchers in this area to evaluate Monte Carlo models for both speed and accuracy in both forward and inverse calculational codes and approaches for nuclear security applications. A previous benchmark problem was developed by one of the authors (RPG) for two similar oil well logging problems (Gardner and Verghese, 1991). One of those benchmarks has recently been used by at least two researchers in the nuclear threat area to evaluate the speed and accuracy of Monte Carlo codes combined with variance reduction techniques. This apparent need has prompted us to design this benchmark problem specifically for the nuclear threat researcher. The benchmark consists of a conceptual design and preliminary calculational results using gamma-ray interactions on a system containing three thicknesses of three different shielding materials. A point source is placed inside the three materials: lead, aluminum, and plywood. The first two materials are in right circular cylindrical form while the third is a cube. The entire system rests on a sufficiently thick lead base so as to reduce undesired scattering events. The configuration is arranged in such a manner that as a gamma ray moves from the source outward, it first passes through the lead circular cylinder, then the aluminum circular cylinder, and finally the wooden cube before reaching the detector. A 2 in. x 4 in. x 16 in. box-style NaI(Tl) detector was placed 1 m from the point source located in the center, with the 4 in. x 16 in. side facing the system. The two sources used in the benchmark are {sup 137}Cs and {sup 235}U.
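
    For a rough feel of what such a shielded configuration does to the uncollided flux, a narrow-beam attenuation estimate can be written down directly; scattered build-up, which the benchmark is designed to exercise, is deliberately ignored (Python; the attenuation coefficients and thicknesses are illustrative assumptions, not the benchmark's values):

        import math

        def uncollided_fraction(layers):
            """Narrow-beam transmission through successive shields.
            layers: (linear attenuation coefficient mu [1/cm], thickness [cm])."""
            return math.exp(-sum(mu * t for mu, t in layers))

        # Roughly 662 keV (137Cs) coefficients; all thicknesses assumed.
        layers = [(1.2, 2.0),    # lead cylinder wall
                  (0.20, 3.0),   # aluminum cylinder wall
                  (0.05, 10.0)]  # plywood cube wall
        print(f"uncollided fraction ~ {uncollided_fraction(layers):.2e}")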

  16. Probabilistic evaluation of scenarios in long-term safety analyses. Results of the project ISIBEL

    Energy Technology Data Exchange (ETDEWEB)

    Buhmann, Dieter; Becker, Dirk-Alexander; Laggiard, Eduardo; Ruebel, Andre; Spiessl, Sabine; Wolf, Jens

    2016-07-15

    In the frame of the project ISIBEL, deterministic analyses of the radiological consequences of several possible developments of the final repository were performed (VSG: preliminary safety analysis of the Gorleben site). The report describes the probabilistic evaluation of the VSG scenarios using uncertainty and sensitivity analyses. It was shown that probabilistic analyses are important for evaluating the influence of uncertainties. The transfer of the selected scenarios into computational cases and the modeling parameters used are discussed.

  17. Clear detection of ADIPOQ locus as the major gene for plasma adiponectin: results of genome-wide association analyses including 4659 European individuals

    Science.gov (United States)

    Heid, Iris M.; Henneman, Peter; Hicks, Andrew; Coassin, Stefan; Winkler, Thomas; Aulchenko, Yurii S.; Fuchsberger, Christian; Song, Kijoung; Hivert, Marie-France; Waterworth, Dawn M.; Timpson, Nicholas J.; Richards, J. Brent; Perry, John R.B.; Tanaka, Toshiko; Amin, Najaf; Kollerits, Barbara; Pichler, Irene; Oostra, Ben A.; Thorand, Barbara; Frants, Rune R.; Illig, Thomas; Dupuis, Josée; Glaser, Beate; Spector, Tim; Guralnik, Jack; Egan, Josephine M.; Florez, Jose C.; Evans, David M.; Soranzo, Nicole; Bandinelli, Stefania; Carlson, Olga D.; Frayling, Timothy M.; Burling, Keith; Smith, George Davey; Mooser, Vincent; Ferrucci, Luigi; Meigs, James B.; Vollenweider, Peter; van Dijk, Ko Willems; Pramstaller, Peter; Kronenberg, Florian; van Duijn, Cornelia M.

    2009-01-01

    Objective Plasma adiponectin is strongly associated with various components of metabolic syndrome, type 2 diabetes and cardiovascular outcomes. Concentrations are highly heritable and differ between men and women. We therefore aimed to investigate the genetics of plasma adiponectin in men and women. Methods We combined genome-wide association scans of three population-based studies including 4659 persons. For the replication stage in 13795 subjects, we selected the 20 top signals of the combined analysis, as well as the 10 top signals with p-values less than 1.0 × 10^-4 for each of the men- and women-specific analyses. We further selected 73 SNPs that were consistently associated with metabolic syndrome parameters in previous genome-wide association studies to check for their association with plasma adiponectin. Results The ADIPOQ locus showed genome-wide significant p-values in the combined (p = 4.3 × 10^-24) as well as in both the women- and men-specific analyses (p = 8.7 × 10^-17 and p = 2.5 × 10^-11, respectively). None of the other 39 top-signal SNPs showed evidence for association in the replication analysis. None of the 73 SNPs from metabolic syndrome loci exhibited association with plasma adiponectin (p > 0.01). Conclusions We demonstrated that ADIPOQ is the only major gene for plasma adiponectin, explaining 6.7% of the phenotypic variance. We further found that neither this gene nor any of the metabolic syndrome loci explained the sex differences observed for plasma adiponectin. Larger studies are needed to identify more moderate genetic determinants of plasma adiponectin. PMID:20018283

  18. The Primary Results of Analyses on The Archaeal and Bacterial Diversity of Active Cave Environments Settled in Limestones at Southern Turkey

    Science.gov (United States)

    Tok, Ezgi; Kurt, Halil; Tunga Akarsubasi, A.

    2016-04-01

    The microbial diversity of cave sediments obtained from three caves named Insuyu, Balatini and Altınbeşik, located in Southern Turkey, has been investigated using molecular methods for biomineralization. A total of 22 samples were taken in duplicate from critical zones of the caves where water activity is observed all year round. Microbial communities were monitored by 16S rRNA gene based PCR-DGGE (Polymerase Chain Reaction - Denaturing Gradient Gel Electrophoresis) methodology. DNA was extracted from the samples with the PowerSoil® DNA Isolation Kit (MO BIO Laboratories Inc., CA) with modifications to the manufacturer's protocol. The synthetic DNA molecule poly-dIdC was used to increase the yield of PCR amplification by blocking the reaction between CaCO3 and DNA molecules. Thereafter, samples were amplified using both Archaeal and Bacterial universal primers (ref). Subsequently, the archaeal and bacterial diversities in the cave sediments were compared with respect to their similarities using DGGE. DGGE patterns were analysed with BioNumerics software 5.1. Similarity matrices and dendrograms of the DGGE profiles were generated based on the Dice correlation coefficient (band-based) and the unweighted pair-group method with arithmetic mean (UPGMA). The structural diversity of the microbial community was examined by the Shannon index of general diversity (H). Simultaneously, geochemical analyses of the sediment samples were performed within the scope of this study. Total organic carbon (TOC), x-ray diffraction spectroscopy (XRD) and x-ray fluorescence spectroscopy (XRF) analyses of the sediments were also performed. More extensive results will be obtained in the next stages of the study, which is currently ongoing.

  19. Benchmarking management of sewer systems: more to learn than cost effectiveness.

    Science.gov (United States)

    Beenen, A S

    2005-01-01

    Thirty-nine municipalities in the Netherlands conducted a pilot study to develop and try out a methodology for comparing the quality of their sewerage management. The participants chose a multidimensional benchmarking with an emphasis on the aim of improving the working processes within sewerage management. A second goal was accountability to the stakeholders. The benchmarking methodology was based both on analysing data within a "balanced scorecard" system and on intensive exchange of knowledge and experience. The pilot resulted in a state-of-the-art overview of the quality of sewerage management in the Netherlands. Above all, however, it revealed the shocking fact that the work is carried out in many different ways that cannot be explained by technical reasons or local circumstances. To pinpoint best practices and actually implement these improvements, the learning process must continue after the analysis and presentation of the data. A start has been made on forming regional specialist networks for further discussion and exchange of experiences.

  20. Gaia FGK benchmark stars: abundances of alpha and iron-peak elements

    CERN Document Server

    Jofré, P; Soubiran, C; Blanco-Cuaresma, S; Masseron, T; Nordlander, T; Chemin, L; Worley, C C; Van Eck, S; Hourihane, A; Gilmore, G; Adibekyan, V; Bergemann, M; Cantat-Gaudin, T; Delgado-Mena, E; Hernández, J I González; Guiglion, G; Lardo, C; de Laverny, P; Lind, K; Magrini, L; Mikolaitis, S; Montes, D; Pancino, E; Recio-Blanco, A; Sordo, R; Sousa, S; Tabernero, H M; Vallenari, A

    2015-01-01

    In the current era of large spectroscopic surveys of the Milky Way, reference stars for calibrating astrophysical parameters and chemical abundances are of paramount importance. We determine elemental abundances of Mg, Si, Ca, Sc, Ti, V, Cr, Mn, Co and Ni for our predefined set of Gaia FGK benchmark stars. By analysing high-resolution and high signal-to-noise spectra taken from several archive datasets, we combined results of eight different methods to determine abundances on a line-by-line basis. We perform a detailed homogeneous analysis of the systematic uncertainties, such as differential versus absolute abundance analysis, and we assess errors due to NLTE and the stellar parameters in our final abundances. Our results are provided by listing final abundances and the different sources of uncertainties, as well as line-by-line and method-by-method abundances. The Gaia FGK benchmark stars atmospheric parameters are already being widely used for calibration of several pipelines applied to different surveys.
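
    Combining several methods' line-by-line abundances is, at its simplest, an inverse-variance weighted mean (a sketch only, not the paper's homogenisation procedure; the values below are invented):

        def weighted_abundance(measurements):
            """Inverse-variance weighted mean of (abundance, uncertainty) pairs."""
            weights = [1.0 / (s * s) for _, s in measurements]
            mean = sum(w * a for w, (a, _) in zip(weights, measurements)) / sum(weights)
            err = (1.0 / sum(weights)) ** 0.5
            return mean, err

        # Hypothetical Mg abundances for one line from three pipelines.
        print(weighted_abundance([(7.55, 0.05), (7.60, 0.08), (7.52, 0.06)]))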

  1. Results of Water and Sediment Toxicity Tests and Chemical Analyses Conducted at the Central Shops Burning Rubble Pit Waste Unit, January 1999

    Energy Technology Data Exchange (ETDEWEB)

    Specht, W.L.

    1999-06-02

    The Central Shops Burning Rubble Pit Operable Unit consists of two inactive rubble pits (631-1G and 631-3G) that have been capped, and one active burning rubble pit (631-2G), where wooden pallets and other non-hazardous debris are periodically burned. The inactive rubble pits may have received hazardous materials, such as asbestos, batteries, and paint cans, as well as non-hazardous materials, such as ash, paper, and glass. In an effort to determine if long-term surface water flows of potentially contaminated water from the 631-1G, 631-3G, and 631-2G areas have resulted in an accumulation of chemical constituents at toxic levels in the vicinity of the settling basin and wetlands area, chemical analyses for ecologically significant preliminary constituents of concern (pCOCs) were performed on aqueous and sediment samples. In addition, aquatic and sediment toxicity tests were performed in accordance with U.S. EPA methods (U.S. EPA 1989, 1994). Based on the results of the chemical analyses, unfiltered water samples collected from a wetland and settling basins located adjacent to the CSBRP Operable Unit exceed Toxicity Reference Values (TRVs) for aluminum, barium, chromium, copper, iron, lead, and vanadium at one or more of the four locations that were sampled. The water contained very high concentrations of clay particles that were present as suspended solids. A substantial portion of the metals were present as filterable particulates, bound to the clay particles, and were therefore not biologically available. Based on dissolved metal concentrations, the wetland and settling basin exceeded TRVs for aluminum and barium. However, the background reference location also exceeded the TRV for barium, which suggests that this value may be too low, based on local geochemistry. The detection limits for both total and dissolved mercury were higher than the TRV, so it was not possible to determine if the TRV for mercury was exceeded. Dissolved metal levels of chromium, copper

  2. Benchmark Cea - AREVA NP - EDF of the corrosion facilities for VHTR material testing

    Energy Technology Data Exchange (ETDEWEB)

    Cabet, C. [CEA Saclay, Dept. de Physico-Chimie (DEN/DPC/SCCME), 91 - Gif sur Yvette (France); Terlain, A.; Seran, J.L.; Girardin, G.; Kaczorowski, D. [CEA Saclay, Dept. des Materiaux pour le Nucleaire (DEN/DMN), 91 - Gif-sur-Yvette (France); Blat, M. [AREVA NP - NTC-F, Technical Center Le Creusot, 71 - Le Creusot (France); Dubiez Le Goff, S. [Electricite de France (EDF R and D), Chemistry and Corrosion group, MMC Dept., 77 - Moret sur Loing (France)

    2007-07-01

    Within the framework of the ANTARES program, the French Cea, AREVA-NP and EDF have launched a joint program on metallic materials for application in innovative Very High Temperature Reactors (VHTR). Since corrosion is highly sensitive to environmental conditions, material studies require dedicated facilities that permit a strict control of the metallic specimen environment throughout the entire exposure. Cea, AREVA-NP and EDF have developed experimental setups respectively under the names CORALLINE and CORINTH, the Chemistry Loop and ESTEREL; these high temperature helium flow systems are fitted with hygrometers and gas analyzers. A benchmarking procedure was defined to inter-validate these lab devices. It is composed of two tests. The joint protocol set the operating parameters. Process atmospheres are made of helium with 200 {mu}bar H{sub 2}, 20 {mu}bar CH{sub 4}; the CO content reaches 50 {mu}bar for test 1 while it is reduced to 5 {mu}bar in test 2. The residual water vapor concentration shall be lower than 3 {mu}bar. Corrosion is assessed by mass change associated with observations and analyses of the corroded coupons, considering the surface scales (nature, morphology and thickness), the internal oxidation (nature, distribution and depth) and the possible carburization/decarburization (type and depth). For benchmark test 1, Cea, AREVA-NP and EDF produced similar results in terms of the operation of the tests as well as the Inconel 617 corrosion criteria. On the other hand, benchmark test 2 revealed a difference in the residual water vapor level between CORALLINE and the Chemistry Loop, which was shown to strongly influence the specimen behavior.

  3. Development and Application of Benchmark Examples for Mode II Static Delamination Propagation and Fatigue Growth Predictions

    Science.gov (United States)

    Krueger, Ronald

    2011-01-01

    The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.
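
    The cyclic part of such a benchmark typically ties the number of cycles per growth increment to the energy release rate through a Paris-type law; a minimal sketch (Python; the constants are hypothetical placeholders, not values from the report):

        def cycles_for_increment(delta_a, G_max, G_c=1.0, C=1e-3, m=6.0):
            """Cycles to advance the delamination front by delta_a [mm],
            assuming a growth law da/dN = C * (G_max / G_c)**m."""
            rate = C * (G_max / G_c) ** m    # mm per cycle at this front position
            return delta_a / rate

        # Cycles for a 0.5 mm increment at G_max = 0.6 * G_c.
        print(round(cycles_for_increment(0.5, 0.6)))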

  4. Development of Benchmark Examples for Quasi-Static Delamination Propagation and Fatigue Growth Predictions

    Science.gov (United States)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for Abaqus/Standard. The example is based on a finite element model of a Double-Cantilever Beam specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.

  5. Development of Benchmark Examples for Static Delamination Propagation and Fatigue Growth Predictions

    Science.gov (United States)

    Krueger, Ronald

    2011-01-01

    The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.

  6. Benchmarks for dynamic multi-objective optimisation

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    Full Text Available When algorithms solve dynamic multi-objective optimisation problems (DMOOPs), benchmark functions should be used to determine whether the algorithm can overcome specific difficulties that can occur in real-world problems. However, for dynamic multi...

  7. Medicare Contracting - Redacted Benchmark Metric Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services has compiled aggregate national benchmark cost and workload metrics using data submitted to CMS by the AB MACs and the...

  8. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  9. XWeB: the XML Warehouse Benchmark

    CERN Document Server

    Mahboubi, Hadj

    2011-01-01

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  10. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    provision to the chief physician of the respective department. Professional performance is publicly disclosed due to regulatory requirements. At the same time, chief physicians typically receive bureaucratic benchmarking information from the administration. We find that more frequent bureaucratic...

  11. Verification of the code DYN3D/R with the help of international benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Grundmann, U.; Rohde, U.

    1997-10-01

    Different benchmarks for reactors with quadratic fuel assemblies were calculated with the code DYN3D/R. In this report, comparisons with the results of the reference solutions are carried out. The results of DYN3D/R and the reference calculation for the eigenvalue k{sub eff} and the power distribution are shown for the steady-state 3-dimensional IAEA benchmark. The results of the NEACRP benchmarks on control rod ejections in a standard PWR were compared with the reference solutions published by the NEA Data Bank. To assess the accuracy of the DYN3D/R results in comparison to other codes, the deviations from the reference solutions are considered. Detailed comparisons with the published reference solutions of the NEA-NSC benchmarks on uncontrolled withdrawal of control rods are made. The influence of the axial nodalization is also investigated. All in all, good agreement of the DYN3D/R results with the reference solutions can be seen for the considered benchmark problems. (orig.)

  12. Benchmarking Attosecond Physics with Atomic Hydrogen

    Science.gov (United States)

    2015-05-25

    Final report, covering 12 Mar 2012 to 11 Mar 2015. Title: Benchmarking attosecond physics with atomic hydrogen. Contract number: FA2386-12-1-4025. PI: David Kielpinski (dave.kielpinski@gmail.com), Griffith University Centre.

  13. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted by comparing different rotor geometries and solutions while keeping the comparison as fair as possible. The objective of the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...... NACA airfoil family.

  14. Implementation of NAS Parallel Benchmarks in Java

    Science.gov (United States)

    Frumkin, Michael; Schultz, Matthew; Jin, Hao-Qiang; Yan, Jerry

    2000-01-01

    A number of features make Java an attractive but debatable choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would move Java closer to Fortran in the competition for CFD applications.

  15. Simple Benchmark Specifications for Space Radiation Protection

    Science.gov (United States)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. The specification starts with simple, monoenergetic, mono-directional particles on slabs and progresses to human models in spacecraft. This report specifies the models and sources needed, and what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  16. [Individual patient data meta-analyses of randomized trials for the treatment of non-metastatic head and neck squamous cell carcinomas: Principles, results and perspectives].

    Science.gov (United States)

    Blanchard, P; Bourhis, J; Lacas, B; Le Teuff, G; Michiels, S; Pignon, J-P

    2015-05-01

    Meta-analyses are considered an important pillar of evidence-based medicine. The aim of this review is to describe the main principles of a meta-analysis and to use examples from head and neck oncology to demonstrate their clinical impact and methodological interest. The major role of individual patient data is outlined, as well as the superiority of individual patient data over meta-analyses based on published summary data. The major clinical breakthroughs of head and neck meta-analyses are summarized, regarding concomitant chemotherapy, altered fractionation radiotherapy, new regimens of induction chemotherapy and the use of radioprotectants. Recent methodological developments are described, including network meta-analyses and the validation of surrogate markers. Lastly, the future of meta-analyses is discussed in the context of personalized medicine.

  17. The implementation of benchmarking process in marketing education services by Ukrainian universities

    Directory of Open Access Journals (Sweden)

    G.V. Okhrimenko

    2016-03-01

    Full Text Available The aim of the article. The consideration of theoretical and practical aspects of benchmarking at universities is the main task of this research. First, the researcher identified the essence of benchmarking. It involves comparing the characteristics of a college or university with those of leading competitors in the industry and copying proven designs. Benchmarking tries to eliminate the fundamental problem of comparison – the impossibility of being better than the one from whom the solution is borrowed. Benchmarking involves, therefore, self-evaluation, including systematic collection of data and information with the view to making relevant comparisons of strengths and weaknesses of performance aspects. Benchmarking identifies gaps in performance, seeks new approaches for improvements, monitors progress, reviews benefits and assures adoption of good practices. The results of the analysis. There are five types of benchmarking: internal, competitive, functional, procedural and general. Benchmarking is treated as a systematically applied process with specific stages: 1) identification of the study object; 2) identification of businesses for comparison; 3) selection of data collection methods; 4) determination of variations in terms of efficiency and of the levels of future results; 5) communication of the results of benchmarking; 6) development of an implementation plan, initiating the implementation, monitoring implementation; 7) definition of new benchmarks. The researcher gave the results of practical use of the benchmarking algorithm at universities. In particular, monitoring and SWOT analysis identified competitive practices used at Ukrainian universities. The main criteria for determining the potential for benchmarking of universities were: 1) the presence of new teaching methods at universities; 2) the involvement of foreign lecturers and partners of other universities for cooperation; 3) promoting education services for target groups; 4) violation of

  18. FDNS CFD Code Benchmark for RBCC Ejector Mode Operation: Continuing Toward Dual Rocket Effects

    Science.gov (United States)

    West, Jeff; Ruf, Joseph H.; Turner, James E. (Technical Monitor)

    2000-01-01

    Computational Fluid Dynamics (CFD) analysis results are compared with benchmark-quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify the fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code [2] was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for the Diffusion and Afterburning (DAB) test conditions at the 200-psia thruster operation point. Results with and without downstream fuel injection are presented.

  19. A SPITZER IRAC IMAGING SURVEY FOR T DWARF COMPANIONS AROUND M, L, AND T DWARFS: OBSERVATIONS, RESULTS, AND MONTE CARLO POPULATION ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Carson, J. C. [Department of Physics and Astronomy, College of Charleston, 58 Coming St., Charleston, SC 29424 (United States); Marengo, M. [Department of Physics and Astronomy, Iowa State University, A313E Zaffarano, Ames, IA 50011 (United States); Patten, B. M.; Hora, J. L.; Schuster, M. T.; Fazio, G. G. [Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138 (United States); Luhman, K. L. [Department of Astronomy and Astrophysics, Pennsylvania State University, 525 Davey Lab, University Park, PA 16802 (United States); Sonnett, S. M. [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Dr., Honolulu, HI 96822 (United States); Allen, P. R. [Department of Physics and Astronomy, Franklin and Marshall College, Lancaster, PA 17604 (United States); Stauffer, J. R. [Spitzer Science Center, 1200 E California Blvd., Pasadena, CA 91106 (United States); Schnupp, C. [Max-Planck-Institut fuer Astronomie, Koenigstuhl 17, 69117 Heidelberg (Germany)

    2011-12-20

    We report observational techniques, results, and Monte Carlo population analyses from a Spitzer Infrared Array Camera imaging survey for substellar companions to 117 nearby M, L, and T dwarf systems (median distance of 10 pc, mass range of 0.6 to {approx}0.05 M{sub Sun }). The two-epoch survey achieves typical detection sensitivities to substellar companions of [4.5 {mu}m] {<=} 17.2 mag for angular separations between about 7'' and 165''. Based on common proper motion analysis, we find no evidence for new substellar companions. Using Monte Carlo orbital simulations (assuming random inclination, random eccentricity, and random longitude of pericenter), we conclude that the observational sensitivities translate to an ability to detect 600-1100 K brown dwarf companions at semimajor axes {approx}>35 AU and to detect 500-600 K companions at semimajor axes {approx}>60 AU. The simulations also estimate a 600-1100 K T dwarf companion fraction of <3.4% for 35-1200 AU separations and <12.4% for the 500-600 K companions for 60-1000 AU separations.
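
    The Monte Carlo step described above can be sketched compactly: draw random orbital elements, solve Kepler's equation for the instantaneous radius, project onto the sky, and count how often the companion lands inside the survey's sensitive annulus (Python; a simplified two-body projection, not the paper's full simulation):

        import math, random

        def detectable_fraction(a_au, dist_pc, inner=7.0, outer=165.0, trials=100_000):
            """Fraction of randomly oriented orbits of semimajor axis a_au [AU]
            whose projected separation [arcsec] falls in [inner, outer]."""
            hits = 0
            for _ in range(trials):
                e = random.random()                      # random eccentricity
                M = random.uniform(0.0, 2.0 * math.pi)   # random orbital phase
                E = M
                for _ in range(20):                      # solve Kepler: E = M + e sin E
                    E = M + e * math.sin(E)
                r = a_au * (1.0 - e * math.cos(E))       # orbital radius [AU]
                nu = 2.0 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                                      math.sqrt(1 - e) * math.cos(E / 2))
                cos_i = random.uniform(-1.0, 1.0)        # isotropic inclination
                sin_i = math.sqrt(1.0 - cos_i * cos_i)
                w = random.uniform(0.0, 2.0 * math.pi)   # random longitude of pericenter
                sep = r * math.sqrt(1.0 - (sin_i * math.sin(w + nu)) ** 2)
                hits += inner <= sep / dist_pc <= outer  # arcsec = AU / pc
            return hits / trials

        print(detectable_fraction(a_au=100.0, dist_pc=10.0))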

  20. Benchmarking and tuning the MILC code on clusters and supercomputers

    CERN Document Server

    Gottlieb, S

    2002-01-01

    Recently, we have benchmarked and tuned the MILC code on a number of architectures including Intel Itanium and Pentium IV (PIV), dual-CPU Athlon, and the latest Compaq Alpha nodes. Results will be presented for many of these, and we shall discuss some simple code changes that can result in a very dramatic speedup of the KS conjugate gradient on processors with more advanced memory systems such as PIV, IBM SP and Alpha.
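
    KS here denotes the Kogut-Susskind (staggered) fermion formulation; the tuned kernel is a conjugate gradient whose per-iteration cost is one sparse matrix-vector product plus a few streaming vector updates, which is what makes it memory-bandwidth-bound. A generic, matrix-free sketch (not the MILC implementation) shows that structure:

```python
import numpy as np

def conjugate_gradient(apply_A, b, tol=1e-10, max_iter=1000):
    """Textbook conjugate gradient for A x = b, A symmetric positive
    definite, with a matrix-free matvec apply_A. Every iteration streams
    the solution, residual, and direction vectors through memory, so
    memory-system improvements pay off directly."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Demo on a small SPD system:
A = np.array([[4.0, 1.0], [1.0, 3.0]])
print(conjugate_gradient(lambda v: A @ v, np.array([1.0, 2.0])))
```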

  1. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres

    Directory of Open Access Journals (Sweden)

    van Lent Wineke AM

    2010-08-01

    Abstract Background: Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the benchmarking process and the success factors of benchmarking in international specialized cancer centres. Methods: Three independent international benchmarking studies on operations management in cancer centres were conducted. The first study included three comprehensive cancer centres (CCC), three chemotherapy day units (CDU) were involved in the second study, and four radiotherapy departments were included in the final study. For each multiple-case study, a research protocol was used to structure the benchmarking process. After reviewing the multiple case studies, the resulting description was used to study the research objectives. Results: We adapted and evaluated existing benchmarking processes by formalizing stakeholder involvement and verifying the comparability of the partners. We also devised a framework to structure the indicators so as to produce a coherent indicator set and better improvement suggestions. Evaluating the feasibility of benchmarking as a tool to improve hospital processes led to mixed results. Case study 1 resulted in general recommendations for the organizations involved. In case study 2, the combination of benchmarking and lean management led in one CDU to a 24% increase in bed utilization and a 12% increase in productivity. The three radiotherapy departments of case study 3 were considering implementing the recommendations. Additionally, success factors were identified: a well-defined and small project scope, partner selection based on clear criteria, stakeholder involvement, simple and well-structured indicators, analysis of both the process and its results, and adaptation of the identified better working methods to one's own setting.

  2. Kvalitative analyser ..

    DEFF Research Database (Denmark)

    Boolsen, Merete Watt

    The book explains the fundamental steps of the research process and applies them to selected qualitative analyses: content analysis, Grounded Theory, argumentation analysis, and discourse analysis.

  3. How are functionally similar code clones syntactically different? An empirical study and a benchmark

    Directory of Open Access Journals (Sweden)

    Stefan Wagner

    2016-03-01

    Background. Today, redundancy in source code, so-called "clones" caused by copy&paste, can be found reliably using clone detection tools. Redundancy can also arise independently, however, without copy&paste. At present, it is not clear how clones that are only functionally similar (FSCs) differ from clones created by copy&paste. Our aim is to understand and categorise the syntactical differences in FSCs that distinguish them from copy&paste clones in a way that helps clone detection research. Methods. We conducted an experiment using known functionally similar programs in Java and C from coding contests. We analysed syntactic similarity with traditional detection tools and explored whether concolic clone detection can go beyond syntax. We ran all tools on 2,800 programs and manually categorised the differences in a random sample of 70 program pairs. Results. We found no FSCs in which complete files were syntactically similar. We could detect syntactic similarity in a part of the files in fewer than 16% of the program pairs. Concolic detection found one of the FSCs. The differences between program pairs fell into the categories algorithm, data structure, OO design, I/O, and libraries. We selected 58 pairs for an openly accessible benchmark representing these categories. Discussion. The majority of differences between functionally similar clones are beyond the capabilities of current clone detection approaches. Yet, our benchmark can help to drive further clone detection research.
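
    To make the clone categories concrete, here is a hypothetical pair of functionally similar clones, written in Python rather than the study's Java/C: both functions return the unique elements of a sequence in first-seen order, yet they differ in algorithm and data structure, so a purely syntactic detector would not match them.

```python
def unique_loop(items):
    """Imperative variant: explicit loop with a set and a list accumulator."""
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def unique_dict(items):
    """Declarative variant: exploits insertion-ordered dict keys."""
    return list(dict.fromkeys(items))

# Functionally identical, syntactically disjoint:
assert unique_loop([3, 1, 3, 2, 1]) == unique_dict([3, 1, 3, 2, 1]) == [3, 1, 2]
```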

  4. Evaluating the Effect of Labeled Benchmarks on Children's Number Line Estimation Performance and Strategy Use.

    Science.gov (United States)

    Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen

    2017-01-01

    Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children's strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line positively affects third and fifth graders' NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values would have a positive effect on younger children's NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders' NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. These findings imply that children's benchmark-based strategy use can be stimulated by adding externally provided benchmarks on the number line.

  5. Clean Energy Manufacturing Analysis Center Benchmark Report: Framework and Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-05-23

    This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.

  6. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063, Storage Intensive Supercomputing, during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion-io parallel NAND Flash array. The image resampling benchmark compared the performance of software-only and GPU-accelerated implementations. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification; it will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40 GB Fusion-io parallel NAND Flash disk array.
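
    The core operation of the graph benchmark, level-set expansion, is breadth-first growth of a frontier from a seed set. The in-memory sketch below shows the access pattern with a hypothetical adjacency-dict API; the actual benchmark is out-of-core, streaming the adjacency data from storage, which is precisely what makes it a SISC workload.

```python
from collections import deque

def level_set_expansion(adj, seeds, depth):
    """Expand `depth` breadth-first levels outward from a seed set.
    adj: dict mapping vertex -> iterable of neighbours (in-memory stand-in
    for the benchmark's storage-resident adjacency data)."""
    visited = set(seeds)
    frontier = deque(seeds)
    for _ in range(depth):
        next_frontier = deque()
        while frontier:
            v = frontier.popleft()
            for w in adj.get(v, ()):
                if w not in visited:        # each vertex is visited once
                    visited.add(w)
                    next_frontier.append(w)
        frontier = next_frontier
    return visited

adj = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
print(level_set_expansion(adj, [0], depth=2))   # {0, 1, 2, 3}
```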

  7. Quality benchmarking methodology: Case study of finance and culture industries in Latvia

    Directory of Open Access Journals (Sweden)

    Ieva Zemīte

    2011-01-01

    Political, socio-economic, and cultural changes that have taken place in the world during recent years have influenced all spheres. Constant improvements are necessary to survive in competitive, shrinking markets, which sets high quality standards for the service industries. It is therefore important to compare quality criteria to ascertain which practices achieve superior performance levels. At present, companies in Latvia do not carry out mutual benchmarking and, as a result, do not know how they rank against their peers in terms of quality, nor do they see benefits in sharing information and in benchmarking. The purpose of this paper is to determine the criteria of qualitative benchmarking and to investigate the use of benchmarking quality in service industries, particularly the finance and culture sectors in Latvia, in order to determine the key driving factors of quality, to explore internal and foreign benchmarks, and to reveal the full potential of input reduction and efficiency growth for the aforementioned industries. Case studies and other tools are used to define the readiness of the company for benchmarking. Certain key factors are examined for their impact on quality criteria. The results are based on research conducted in professional associations in the defined fields (insurance and theatre). Originality/value: this is the first study that adopts benchmarking models for measuring quality criteria and readiness for mutual comparison in the insurance and theatre industries in Latvia.

  8. Reactor based plutonium disposition - physics and fuel behaviour benchmark studies of an OECD/NEA experts group

    Energy Technology Data Exchange (ETDEWEB)

    D' Hondt, P. [SCK.CEN, Mol (Belgium); Gehin, J. [ORNL, Oak Ridge, TN (United States); Na, B.C.; Sartori, E. [Organisation for Economic Co-Operation and Development, Nuclear Energy Agency, 92 - Issy les Moulineaux (France); Wiesenack, W. [Organisation for Economic Co-Operation and Development/HRP, Halden (Norway)

    2001-07-01

    One of the options envisaged for disposing of weapons-grade plutonium, declared surplus for national defence in the Russian Federation and the USA, is to burn it in nuclear power reactors. The scientific and technical know-how accumulated in the use of MOX as a fuel for electricity generation is of great relevance for the plutonium disposition programmes. An Expert Group of the OECD/NEA is carrying out a series of benchmarks with the aim of facilitating the use of this know-how for meeting this objective. This paper describes the background that led to establishing the Expert Group and the present status of results from these benchmarks. The benchmark studies cover a theoretical reactor physics benchmark on a VVER-1000 core loaded with MOX, two experimental benchmarks on MOX lattices, and a benchmark concerned with MOX fuel behaviour for both solid and hollow pellets. First conclusions are outlined, as well as future work. (author)

  9. Toxicological benchmarks for potential contaminants of concern for effects on soil and litter invertebrates and heterotrophic process

    Energy Technology Data Exchange (ETDEWEB)

    Will, M.E.; Suter, G.W. II

    1995-09-01

    An important step in ecological risk assessments is screening the chemicals occurring on a site for contaminants of potential concern. Screening may be accomplished by comparing reported ambient concentrations to a set of toxicological benchmarks. Multiple endpoints have been established for assessing the risks posed by soil-borne contaminants to the organisms directly impacted by them. This report presents benchmarks for soil invertebrates and microbial processes and addresses only chemicals found at United States Department of Energy (DOE) sites. No benchmarks for pesticides are presented. After discussing methods, this report presents the results of the literature review and benchmark derivation for toxicity to earthworms (Sect. 3), heterotrophic microbes and their processes (Sect. 4), and other invertebrates (Sect. 5). The final sections compare the benchmarks to other criteria and to background concentrations and draw conclusions concerning the utility of the benchmarks.

  10. Observer-based FDI for Gain Fault Detection in Ship Propulsion Benchmark:a Geometric Approach

    OpenAIRE

    Lootsma, T.F.; Izadi-Zamanabadi, Roozbeh; Nijmeijer, H.

    2001-01-01

    A geometric approach for input-affine nonlinear systems is briefly described and then applied to a ship propulsion benchmark. The obtained results are used to design a diagnostic nonlinear observer for successful FDI of the diesel engine gain fault.

  11. Test Nationally, Benchmark Locally: Using Local DIBELS Benchmarks to Predict Performance on the Pssa

    Science.gov (United States)

    Ferchalk, Matthew R.

    2013-01-01

    The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) benchmarks are frequently used to make important decisions regarding student performance. More information, however, is needed to understand whether the nationally derived benchmarks created by the DIBELS system provide the most accurate criterion for evaluating reading proficiency. The…

  12. The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example

    Science.gov (United States)

    Steyn, H. J.

    2015-01-01

    Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…

  13. Sensitivity Test for Benchmark Analysis of EBR-II SHRT-17 using MARS-LMR

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Chiwoong; Ha, Kwiseok [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    This study was conducted as a part of the IAEA Coordinated Research Project (CRP), 'Benchmark Analyses of an EBR-II Shutdown Heat Removal Test (SHRT)'. EBR-II SHRT-17 (loss of flow) was analyzed with MARS-LMR, the safety analysis code developed at KAERI for the Prototype Gen-IV Sodium-cooled Fast Reactor (PGSFR). The current stage of the CRP is comparing blind test results with the released experimental data. Several influential parameters were selected for the sensitivity test of the EBR-II SHRT-17. The major goal of this study is to understand the behavior of the physical parameters and to develop a modeling strategy for better estimation.

  14. Benchmarking of FLOWTRAN with Mark-22 mockup flow excursion test data from Babcock & Wilcox

    Energy Technology Data Exchange (ETDEWEB)

    Chen, K.F.

    1991-11-01

    Version 16.2 of the FLOWTRAN code, with a Savannah River Site (SRS) working criterion (St = 0.00455) for the onset of significant void (OSV), was benchmarked against power and flow excursion data derived from tests at the Babcock & Wilcox Alliance Research Center test facility. The analyses show that FLOWTRAN accurately predicts the mockup test assembly thermal-hydraulic behavior during steady-state and LOCA transient conditions, and that FLOWTRAN with the SRS working limits criterion (St = 0.00455) conservatively predicts the OFI power. Results for LOCA simulations, which include a power decay transient for a safety rod SCRAM, are summarized below. For all of these tests, the calculated test assembly initial power or operating power limit was at least 15% below the initial power level at which the test assembly went into flow instability. These calculations were made using the SRS LOCA FI limits methodology adapted to the test assembly.

  16. Criticality safety benchmark evaluation project: Recovering the past

    Energy Technology Data Exchange (ETDEWEB)

    Trumble, E.F.

    1997-06-01

    A very brief summary of the Criticality Safety Benchmark Evaluation Project of the Westinghouse Savannah River Company is provided in this paper. The purpose of the project is to provide a source of evaluated criticality safety experiments in an easily usable format. Another project goal is to search for any experiments that may have been lost or contain discrepancies, and to determine if they can be used. Results of evaluated experiments are being published as US DOE handbooks.

  17. Simulator for SUPO, a Benchmark Aqueous Homogeneous Reactor (AHR)

    Energy Technology Data Exchange (ETDEWEB)

    Klein, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Determan, John C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-10-14

    A simulator has been developed for SUPO (Super Power), an aqueous homogeneous reactor (AHR) that operated at Los Alamos National Laboratory (LANL) from 1951 to 1974. During that period SUPO accumulated approximately 600,000 kWh of operation. It is considered the benchmark for steady-state operation of an AHR. The SUPO simulator was developed using the process that resulted in a simulator for an accelerator-driven subcritical system, which has been previously reported.

  18. Portfolio selection and asset pricing under a benchmark approach

    Science.gov (United States)

    Platen, Eckhard

    2006-10-01

    The paper presents classical and new results on portfolio optimization, as well as the fair pricing concept for derivative pricing under the benchmark approach. The growth optimal portfolio is shown to be a central object in a market model. It links asset pricing and portfolio optimization. The paper argues that the market portfolio is a proxy of the growth optimal portfolio. By choosing the drift of the discounted growth optimal portfolio as parameter process, one obtains a realistic theoretical market dynamics.
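
    In the standard notation of this literature (assumed here; the abstract itself gives no formulas), fair pricing takes the growth optimal portfolio S^{δ*} as numeraire and expectations under the real-world measure P:

```latex
% Real-world ("fair") pricing of a payoff H_T under the benchmark approach;
% notation is the standard one of this literature, not taken from the
% abstract above.
V_t \;=\; S^{\delta_*}_t\,
\mathbb{E}_{P}\!\left[\,\frac{H_T}{S^{\delta_*}_T}\,\middle|\,\mathcal{F}_t\right]
% Equivalently: prices denominated in units of the growth optimal
% portfolio ("benchmarked" prices) form P-martingales.
```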

  19. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks, which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks consider only contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal contact is not considered in this report.

  20. Semi-Analytical Benchmarks for MCNP6

    Energy Technology Data Exchange (ETDEWEB)

    Grechanuk, Pavel Aleksandrovi [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-11-07

    Code verification is an extremely important process that involves proving or disproving the validity of code algorithms by comparing them against analytical results of the underlying physics or mathematical theory on which the code is based. Monte Carlo codes such as MCNP6 must undergo verification and testing upon every release to ensure that the codes are properly simulating nature. Specifically, MCNP6 has multiple sets of problems with known analytic solutions that are used for code verification. Monte Carlo codes primarily specify either current boundary sources or a volumetric fixed source, either of which can be a very complicated function of space, energy, direction, and time. Thus, most of the challenges with modeling analytic benchmark problems in Monte Carlo codes come from identifying the source definition that properly reproduces the intended boundary conditions. The problems included in this suite all deal with mono-energetic neutron transport without energy loss, in a homogeneous material. The variables that differ between the problems are source type (isotropic/beam), medium dimensionality (infinite/semi-infinite), etc.
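
    As a minimal example of checking a Monte Carlo result against an analytic transport solution, in the same spirit as the suite though not one of its problems: for a mono-energetic, mono-directional beam on a purely absorbing homogeneous slab, the uncollided transmission is exactly exp(-Σ_t d).

```python
import numpy as np

rng = np.random.default_rng(42)

sigma_t = 1.0        # total macroscopic cross section (1/cm), illustrative
d = 3.0              # slab thickness (cm), illustrative
n = 1_000_000

# Distance to first collision is exponentially distributed with mean
# free path 1/sigma_t; neutrons whose free flight exceeds d are
# transmitted uncollided.
path = rng.exponential(1.0 / sigma_t, n)
mc = (path > d).mean()
analytic = np.exp(-sigma_t * d)
print(f"MC {mc:.5f} vs analytic {analytic:.5f}")
```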

  1. Statistical benchmark for BosonSampling

    Science.gov (United States)

    Walschaers, Mattia; Kuipers, Jack; Urbina, Juan-Diego; Mayer, Klaus; Tichy, Malte Christopher; Richter, Klaus; Buchleitner, Andreas

    2016-03-01

    Boson samplers—set-ups that generate complex many-particle output states through the transmission of elementary many-particle input states across a multitude of mutually coupled modes—promise the efficient quantum simulation of a classically intractable computational task, and challenge the extended Church-Turing thesis, one of the fundamental dogmas of computer science. However, as in all experimental quantum simulations of truly complex systems, one crucial problem remains: how to certify that a given experimental measurement record unambiguously results from enforcing the claimed dynamics, on bosons, fermions or distinguishable particles? Here we offer a statistical solution to the certification problem, identifying an unambiguous statistical signature of many-body quantum interference upon transmission across a multimode, random scattering device. We show that statistical analysis of only partial information on the output state allows one to characterise the imparted dynamics through particle type-specific features of the emerging interference patterns. The relevant statistical quantifiers are classically computable, define a falsifiable benchmark for BosonSampling, and reveal distinctive features of many-particle quantum dynamics, which go much beyond mere bunching or anti-bunching effects.

  2. Observer-based FDI for Gain Fault Detection in Ship Propulsion Benchmark

    DEFF Research Database (Denmark)

    Lootsma, T.F.; Izadi-Zamanabadi, Roozbeh; Nijmeijer, H.

    2001-01-01

    A geometric approach for input-affine nonlinear systems is briefly described and then applied to a ship propulsion benchmark. The obtained results are used to design a diagnostic nonlinear observer for successful FDI of the diesel engine gain fault.

  4. Comprehensive immunohistochemical analysis of Her-2/neu oncoprotein overexpression in breast cancer: HercepTest (Dako) for manual testing and Her-2/neuTest 4B5 (Ventana) for Ventana BenchMark automatic staining system with correlation to results of fluorescence in situ hybridization (FISH).

    Science.gov (United States)

    Mayr, Doris; Heim, Sibylle; Werhan, Cedric; Zeindl-Eberhart, Evelyn; Kirchner, Thomas

    2009-03-01

    Overexpression of the Her-2/neu oncoprotein is used as a marker for Herceptin therapy. To investigate the sensitivity and specificity of automatic immunohistochemistry (BenchMark, Ventana), we compared the results to manual testing (Dako) in 130 breast carcinomas and validated the results by fluorescence in situ hybridization (FISH). Manual and automatic immunohistochemistry of the Her-2/neu oncoprotein using two different antibodies (HercepTest, Her-2/neuTest 4B5) was analyzed. FISH was performed in all cases with uncertain or strong overexpression in either immunohistochemical staining or with differing immunohistochemical results. Identical immunohistochemical results were seen in 73.8% of cases. Two cases with overexpression, detected with Her-2/neuTest 4B5 and confirmed by FISH, showed no overexpression using HercepTest. Of 21 cases scored 2+ by Her-2/neuTest 4B5, 15 cases had no gene amplification (two of them scored 3+ by HercepTest); three cases showed gene amplification (one of them with no overexpression by HercepTest); two other cases were polysomic; one could not be analyzed. In our study, Ventana immunohistochemistry appears to be as reliable as Dako, with slightly better concordance with FISH.

  5. Reviewing and Benchmarking Adventure Therapy Outcomes: Applications of Meta-Analysis.

    Science.gov (United States)

    Neill, James T.

    2003-01-01

    Findings from meta-analyses of outdoor education, psychotherapy, and educational innovations are presented to help determine the relative efficacy of adventure therapy programs. While adventure therapy effects are stronger than those of outdoor education, they are not nearly as strong as those of individual psychotherapy. Benchmarks are derived…

  6. Designing a Supply Chain Management Academic Curriculum Using QFD and Benchmarking

    Science.gov (United States)

    Gonzalez, Marvin E.; Quesada, Gioconda; Gourdin, Kent; Hartley, Mark

    2008-01-01

    Purpose: The purpose of this paper is to utilize quality function deployment (QFD), Benchmarking analyses and other innovative quality tools to develop a new customer-centered undergraduate curriculum in supply chain management (SCM). Design/methodology/approach: The researchers used potential employers as the source for data collection. Then,…

  8. A new, challenging benchmark for nonlinear system identification

    Science.gov (United States)

    Tiso, Paolo; Noël, Jean-Philippe

    2017-02-01

    The progress accomplished during the past decade in nonlinear system identification in structural dynamics is considerable. The objective of the present paper is to consolidate this progress by challenging the community through a new benchmark structure exhibiting complex nonlinear dynamics. The proposed structure consists of two offset cantilevered beams connected by a highly flexible element. For increasing forcing amplitudes, the system sequentially features linear behaviour, localised nonlinearity associated with the buckling of the connecting element, and distributed nonlinearity resulting from large elastic deformations across the structure. A finite element-based code with time integration capabilities is made available at https://sem.org/nonlinear-systems-imac-focus-group/. This code permits the numerical simulation of the benchmark dynamics in response to arbitrary excitation signals.
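
    To make "simulation of the benchmark dynamics in response to arbitrary excitation signals" concrete, the sketch below time-integrates a single-degree-of-freedom oscillator with a cubic hardening spring, a toy stand-in for the benchmark's finite element model; all parameter values are illustrative.

```python
import numpy as np

def simulate(force, dt=1e-4, m=1.0, c=2.0, k=1e4, k3=1e9):
    """Explicit velocity-Verlet-style integration of a 1-DOF oscillator
    with cubic stiffness under an arbitrary sampled force history.
    All parameter values are illustrative, not the benchmark's."""
    n = len(force)
    x = np.zeros(n)
    v = np.zeros(n)
    a = force[0] / m                      # starts from rest at x = 0
    for i in range(n - 1):
        v_half = v[i] + 0.5 * dt * a
        x[i + 1] = x[i] + dt * v_half
        f_int = k * x[i + 1] + k3 * x[i + 1] ** 3   # hardening spring
        a = (force[i + 1] - c * v_half - f_int) / m
        v[i + 1] = v_half + 0.5 * dt * a
    return x

t = np.arange(0.0, 1.0, 1e-4)
x = simulate(50.0 * np.sin(2 * np.pi * 30 * t))     # arbitrary excitation
```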

  9. Higgs Pair Production: Choosing Benchmarks With Cluster Analysis

    CERN Document Server

    Dall'Osso, Martino; Gottardo, Carlo A; Oliveira, Alexandra; Tosi, Mia; Goertz, Florian

    2015-01-01

    New physics theories often depend on a large number of free parameters. The precise values of those parameters in some cases drastically affect the resulting phenomenology of fundamental physics processes, while in others finite variations can leave it basically invariant at the level of detail experimentally accessible. When designing a strategy for the analysis of experimental data in the search for a signal predicted by a new physics model, it appears advantageous to categorize the parameter space describing the model according to the corresponding kinematical features of the final state. A multi-dimensional test statistic can be used to gauge the degree of similarity in the kinematics of different models; a clustering algorithm using that metric may then allow the division of the space into homogeneous regions, each of which can be successfully represented by a benchmark point. Searches targeting those benchmark points are then guaranteed to be sensitive to a large area of the parameter space. In this doc...
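
    A schematic of the clustering step might look as follows, with hypothetical stand-ins throughout: random histograms in place of the final-state kinematic distributions, a chi-squared-like distance in place of the paper's multi-dimensional test statistic, and an arbitrary cluster count.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
# One normalised kinematic histogram per parameter-space point (placeholder
# data; in practice these would be simulated final-state distributions).
hists = rng.dirichlet(np.ones(20), size=200)

def ts_distance(p, q, eps=1e-12):
    """Chi-squared-like measure of kinematic similarity between points."""
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

n = len(hists)
d = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d[i, j] = d[j, i] = ts_distance(hists[i], hists[j])

# Agglomerate into homogeneous regions; the medoid of each cluster serves
# as the benchmark point representing that region.
labels = fcluster(linkage(squareform(d), method="average"),
                  t=12, criterion="maxclust")
benchmarks = []
for k in range(1, labels.max() + 1):
    idx = np.flatnonzero(labels == k)
    sub = d[np.ix_(idx, idx)]
    benchmarks.append(int(idx[sub.sum(axis=1).argmin()]))
print(benchmarks)
```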

  10. RESRAD benchmarking against six radiation exposure pathway models

    Energy Technology Data Exchange (ETDEWEB)

    Faillace, E.R.; Cheng, J.J.; Yu, C.

    1994-10-01

    A series of benchmarking runs was conducted so that results obtained with the RESRAD code could be compared against those obtained with six pathway analysis models used to determine the radiation dose to an individual living on a radiologically contaminated site. The RESRAD computer code was benchmarked against five other computer codes (GENII-S, GENII, DECOM, PRESTO-EPA-CPG, and PATHRAE-EPA) and the uncodified methodology presented in the NUREG/CR-5512 report. Estimated doses for the external gamma pathway; the dust inhalation pathway; and the soil, food, and water ingestion pathways were calculated for each methodology by matching, to the extent possible, input parameters such as occupancy, shielding, and consumption factors.

  11. Benchmarking the Multi-dimensional Stellar Implicit Code MUSIC

    CERN Document Server

    Goffrey, T; Viallet, M; Baraffe, I; Popov, M V; Walder, R; Folini, D; Geroux, C; Constantino, T

    2016-01-01

    We present the results of a numerical benchmark study for the MUlti-dimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment that is dominated by radiative effects; in this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during the development of numerical methods and the complex flows of stellar interiors.
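
    The Jacobian-free Newton-Krylov idea can be sketched generically (this is not the MUSIC implementation): the Krylov solver needs only Jacobian-vector products, which are approximated by finite differences of the nonlinear residual, so the Jacobian is never formed; the physics-based preconditioner would attach to the linear solve.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_step(residual, u, eps=1e-7):
    """One Newton step with a Jacobian-free Krylov (GMRES) solve. The
    Jacobian-vector product J(u) v is approximated by a finite difference
    of the residual; a preconditioner would be passed to gmres via M=..."""
    F = residual(u)

    def jv(v):
        nv = np.linalg.norm(v)
        if nv == 0.0:
            return np.zeros_like(v)
        h = eps * max(np.linalg.norm(u), 1.0) / nv   # FD step scaling
        return (residual(u + h * v) - F) / h

    J = LinearOperator((u.size, u.size), matvec=jv)
    du, info = gmres(J, -F)
    assert info == 0, "GMRES did not converge"
    return u + du

# Example: solve the 2x2 nonlinear system x^2 + y^2 = 4, x*y = 1.
res = lambda u: np.array([u[0]**2 + u[1]**2 - 4.0, u[0] * u[1] - 1.0])
u = np.array([2.0, 0.5])
for _ in range(8):
    u = jfnk_step(res, u)
print(u, res(u))
```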

  12. Benchmarks for Dark Matter Searches at the LHC

    CERN Document Server

    de Simone, Andrea; Strumia, Alessandro

    2014-01-01

    We propose some scenarios to pursue dark matter searches at the LHC in a fairly model-independent way. The first benchmark case is dark matter co-annihilations with coloured particles (gluinos or squarks being special examples). We determine the masses that lead to the correct thermal relic density including, for the first time, strong Sommerfeld corrections taking into account colour decomposition. In the second benchmark case we consider dark matter that couples to SM particles via the Z or the Higgs. We determine the couplings allowed by present experiments and discuss future prospects. Finally we present the case of dark matter that freezes out via decays and apply our results to invisible Z and Higgs decays.

  13. A benchmark for fault tolerant flight control evaluation

    Science.gov (United States)

    Smaili, H.; Breeman, J.; Lombaerts, T.; Stroosma, O.

    2013-12-01

    A large transport aircraft simulation benchmark (REconfigurable COntrol for Vehicle Emergency Return - RECOVER) has been developed within the GARTEUR (Group for Aeronautical Research and Technology in Europe) Flight Mechanics Action Group 16 (FM-AG(16)) on Fault Tolerant Control (2004-2008) for the integrated evaluation of fault detection and identification (FDI) and reconfigurable flight control strategies. The benchmark includes a suitable set of assessment criteria and failure cases, based on reconstructed accident scenarios, to assess the potential of new adaptive control strategies to improve aircraft survivability. The application of reconstruction and modeling techniques, based on accident flight data, has resulted in high-fidelity nonlinear aircraft and fault models to evaluate new Fault Tolerant Flight Control (FTFC) concepts and their real-time performance to accommodate in-flight failures.

  14. Benchmarking of neutron production of heavy-ion transport codes

    Energy Technology Data Exchange (ETDEWEB)

    Remec, I. [Oak Ridge National Laboratory, Oak Ridge, TN 37831-6172 (United States); Ronningen, R. M. [Michigan State Univ., National Superconductiong Cyclotron Laboratory, East Lansing, MI 48824-1321 (United States); Heilbronn, L. [Univ. of Tennessee, 1004 Estabrook Rd., Knoxville, TN 37996-2300 (United States)

    2011-07-01

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in the design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required. (authors)

  15. Coral benchmarks in the center of biodiversity.

    Science.gov (United States)

    Licuanan, W Y; Robles, R; Dygico, M; Songco, A; van Woesik, R

    2017-01-30

    There is an urgent need to quantify coral reef benchmarks that assess changes and recovery rates through time and serve as goals for management. Yet few studies have identified benchmarks for hard coral cover and diversity in the center of marine diversity. In this study, we estimated coral cover and generic diversity benchmarks on the Tubbataha reefs, the largest and best-enforced no-take marine protected area in the Philippines. The shallow (2-6 m) reef slopes of Tubbataha were monitored annually from 2012 to 2015 using hierarchical sampling. Mean coral cover was 34% (σ = 1.7) and generic diversity was 18 (σ = 0.9) per 75 m by 25 m station. The southeastern leeward slopes supported on average 56% coral cover, whereas the northeastern windward slopes supported 30% and the western slopes 18%. Generic diversity was more spatially homogeneous than coral cover.

  16. The national hydrologic bench-mark network

    Science.gov (United States)

    Cobb, Ernest D.; Biesecker, J.E.

    1971-01-01

    The United States is undergoing a dramatic growth of population and demands on its natural resources. The effects are widespread and often produce significant alterations of the environment. The hydrologic bench-mark network was established to provide data on stream basins which are little affected by these changes. The network is made up of selected stream basins which are not expected to be significantly altered by man. Data obtained from these basins can be used to document natural changes in hydrologic characteristics with time, to provide a better understanding of the hydrologic structure of natural basins, and to provide a comparative base for studying the effects of man on the hydrologic environment. There are 57 bench-mark basins in 37 States. These basins are in areas having a wide variety of climate and topography. The bench-mark basins and the types of data collected in the basins are described.

  17. DWEB: A Data Warehouse Engineering Benchmark

    CERN Document Server

    Darmont, Jérôme; Boussaïd, Omar

    2005-01-01

    Data warehouse architectural choices and optimization techniques are critical to decision support query performance. To facilitate these choices, the performance of the designed data warehouse must be assessed. This is usually done with the help of benchmarks, which can either help system users compare the performance of different systems, or help system engineers test the effect of various design choices. While the TPC standard decision support benchmarks address the first point, they are not tuneable enough to address the second one and fail to model different data warehouse schemas. By contrast, our Data Warehouse Engineering Benchmark (DWEB) allows the generation of various ad hoc synthetic data warehouses and workloads. DWEB is fully parameterized to fulfill data warehouse design needs. However, two levels of parameterization keep it relatively easy to tune. Finally, DWEB is implemented as free Java software that can be interfaced with most existing relational database management systems. A sample usage…

  18. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    The purpose of this article is to benchmark different optimization solvers when applied to various finite element based structural topology optimization problems. An extensive and representative library of minimum compliance, minimum volume, and mechanism design problem instances of different… sizes is developed for this benchmarking. The problems are based on a material interpolation scheme combined with a density filter. Different optimization solvers, including Optimality Criteria (OC), the Method of Moving Asymptotes (MMA) and its globally convergent version GCMMA, and the interior point… profiles conclude that general solvers are as efficient and reliable as classical structural topology optimization solvers. Moreover, the use of exact Hessians in SAND formulations generally produces designs with better objective function values. However, with the benchmarked implementations solving…
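
    Of the benchmarked solver families, the Optimality Criteria update is compact enough to sketch. Below is the classic heuristic form, with a bisection on the volume-constraint multiplier (a schematic in the style of Sigmund's well-known 99-line code, not any of the benchmarked implementations).

```python
import numpy as np

def oc_update(x, dc, dv, vol_frac, move=0.2):
    """Optimality Criteria design update for minimum compliance.
    x: element densities in [0, 1]; dc: compliance sensitivities (<= 0);
    dv: volume sensitivities (> 0). The Lagrange multiplier for the
    volume constraint is found by bisection."""
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-4:
        lmid = 0.5 * (l1 + l2)
        x_new = np.clip(x * np.sqrt(-dc / (dv * lmid)),
                        np.maximum(0.0, x - move),     # move limits
                        np.minimum(1.0, x + move))
        if x_new.mean() > vol_frac:
            l1 = lmid          # too much material: raise the multiplier
        else:
            l2 = lmid
    return x_new
```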

  19. Energy benchmarking of South Australian WWTPs.

    Science.gov (United States)

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment.

  20. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stay confidential; the banks as clients learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks find the most efficient customers among a large and challenging group of agricultural customers with too much debt. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping…
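
    A non-secure sketch of the kind of linear-programming benchmarking model described: a standard input-oriented DEA efficiency score, here solved in the clear with scipy on hypothetical data; in the paper the analogous LP is evaluated inside the SPDZ multiparty protocol, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, k):
    """Input-oriented DEA efficiency of unit k under constant returns to
    scale: minimise theta such that some convex combination of all units
    uses at most theta times unit k's inputs while producing at least
    unit k's outputs. X: (n_units, n_inputs), Y: (n_units, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]            # variables: theta, lambda_1..n
    A_ub = np.block([
        [-X[k][:, None], X.T],             # X^T lam <= theta * x_k
        [np.zeros((s, 1)), -Y.T],          # Y^T lam >= y_k
    ])
    b_ub = np.r_[np.zeros(m), -Y[k]]
    bounds = [(None, None)] + [(0.0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun                         # theta in (0, 1]; 1 = efficient

# Hypothetical single-input, single-output customers:
X = np.array([[1.0], [2.0], [4.0]])        # resource used
Y = np.array([[1.0], [2.0], [2.0]])        # output produced
print([round(dea_efficiency(X, Y, k), 3) for k in range(3)])  # [1.0, 1.0, 0.5]
```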