WorldWideScience

Sample records for aer benchmark specification

  1. Solution of the AER6 benchmark problem by KIKO3D/ATHLET code system

    International Nuclear Information System (INIS)

    The realistic analysis of accident conditions requires the extension of thermohydraulic plant system codes with 3D neutronics models. Recently, many activities have been performed to develop, verify and validate such coupled codes. During the last years the thermohydraulic system code ATHLET, developed by GRS, was coupled to the 3D neutronics code KIKO3D, developed by KFKI AEKI, in order to simulate the time-dependent behavior of VVER NPPs. The 6-loop ATHLET input model worked out by AEKI ensures a more precise characterization of the primary system. As an example of application, results are presented for the AER6 Benchmark, which is a VVER-specific Main Steam Line Break (MSLB) transient. Emphasis is given to one of the basic problems of coupled codes, namely the effect of a slightly different nodalization in the core vessel. (author)

  2. Solution of the fifth AER benchmark with code package ATHLET/BIPR8KN

    International Nuclear Information System (INIS)

    The fifth three-dimensional hexagonal benchmark problem continues the series of international benchmark problems defined during 1992-1996 in the international VVER cooperation forum AER. The initiating event of the fifth AER benchmark is a symmetrical break in the middle part of the main steam header at the end of the first fuel cycle, under hot shutdown conditions with one stuck control rod group. The main difference from the previous benchmarks is that the system behaviour of the primary and secondary sides is considered in this benchmark. The main aim of the benchmark is the calculation of the transient after recriticality has been achieved. The solution of the fifth three-dimensional hexagonal dynamic AER benchmark problem obtained with the code package ATHLET/BIPR8KN is presented. The reactor scheme used is described, including the description of the core and of the primary and secondary sides. The amount of tuning necessary, and the tools used for tuning, to reach the reference values requested in the problem definition are considered. A comparative analysis of the results obtained with modelling schemes of different levels of detail is carried out. (author)

  3. Solution of the fifth AER benchmark with code package ATHLET/BIPR8KN

    International Nuclear Information System (INIS)

    The fifth three-dimensional hexagonal benchmark problem continues the series of international benchmark problems defined during 1992-1996 in the international WWER cooperation forum Atomic Energy Research (AER). The initiating event of the fifth AER benchmark is a symmetrical break in the middle part of the main steam header at the end of the first fuel cycle, under hot shutdown conditions with one stuck control rod group. The main difference from the previous benchmarks is that the system behaviour of the primary and secondary sides is considered in this benchmark. The main aim of the benchmark is the calculation of the transient after recriticality has been achieved. The solution of the fifth three-dimensional hexagonal dynamic AER benchmark problem obtained with the code package ATHLET/BIPR8KN is presented. The report contains a description of the reactor scheme used, including the core and the primary and secondary sides. The amount of tuning necessary, and the tools used for tuning, to reach the reference values requested in the problem definition are considered. A comparative analysis of the results obtained with modelling schemes of different levels of detail is carried out. (Authors)

  4. Analysis of the WWER-440 AER2 rod ejection benchmark by the SKETCH-N code

    International Nuclear Information System (INIS)

    The neutron kinetics code SKETCH-N has recently been extended to treat hexagonal geometry using a polynomial nodal method based on the conformal mapping of a hexagon onto a rectangle. Basic features of the code are outlined. Results of the steady-state benchmark calculations demonstrate the excellent accuracy of the nodal method. To test the neutron kinetics module for WWER applications, the second AER rod ejection benchmark is computed and the results are compared with those of the production WWER codes BIPR8, DYN3D, HEXTRAN and KIKO3D. The steady-state results show that the SKETCH-N code gives an ejected control rod worth close to that of BIPR8 and HEXTRAN. The assembly power distribution is compared with the DYN3D results. Maximum discrepancies of about 5% are found in the power of peripheral assemblies and assemblies with partially inserted control rods. (Authors)

  5. HEXTRAN-SMABRE calculation of the 6th AER Benchmark, main steam line break in a WWER-440 NPP

    International Nuclear Information System (INIS)

    The sixth AER benchmark is the second AER benchmark for couplings of thermal hydraulic codes and three-dimensional neutron kinetic core models. It concerns a double-ended break of one main steam line in a WWER-440 plant. The core is at the end of its first cycle in full power conditions. At VTT, HEXTRAN 2.9 is used for the core kinetics and dynamics and SMABRE 4.8 as the thermal hydraulic model for the primary and secondary loops. The plant model for SMABRE is based mainly on two input models, the Loviisa model and a standard WWER-440/213 plant model. The primary circuit includes six separate loops, the pressure vessel is divided into six parallel channels in SMABRE, and the whole-core calculation is performed with HEXTRAN. The horizontal steam generators are modelled with heat transfer tubes on five levels and vertically with two parts, riser and downcomer. With this kind of detailed modelling of the steam generators, strong flashing occurs after the break opens. As a consequence of the main steam line break at nominal power level, the reactor trip follows quite soon. The liquid temperature continues to decrease in one core inlet sector, which may lead to recriticality and a neutron power increase. The situation is very sensitive to small changes in the steam generator and break flow modelling, and therefore several sensitivity calculations have been done. Two stuck control rods have also been assumed. Due to the boric acid concentration in the high pressure safety injection, subcriticality is finally guaranteed during the transient. (Authors)

  6. Comparison of the results of the fifth dynamic AER benchmark-a benchmark for coupled thermohydraulic system/three-dimensional hexagonal kinetic core models

    International Nuclear Information System (INIS)

    The fifth dynamic benchmark was defined at the seventh AER Symposium, held in Hoernitz, Germany, in 1997. It is the first benchmark for coupled thermohydraulic system/three-dimensional hexagonal neutron kinetic core models. In this benchmark the interaction between the components of a WWER-440 NPP and the reactor core has been investigated. The initiating event is a symmetrical break of the main steam header at the end of the first fuel cycle under hot shutdown conditions with one stuck control rod group. This break causes an overcooling of the primary circuit. During this overcooling the scram reactivity is compensated and the scrammed reactor becomes recritical. The calculation was continued until the highly borated water from the high pressure injection system terminated the power excursion. Each participant used their own best-estimate nuclear cross section data. Only the initial subcriticality at the beginning of the transient was given. Solutions were received from Kurchatov Institute, Russia, with the code BIPR8/ATHLET, VTT Energy, Finland, with HEXTRAN/SMABRE, NRI Rez, Czech Republic, with DYN3D/ATHLET, KFKI Budapest, Hungary, with KIKO3D/ATHLET, and from FZR, Germany, with the code DYN3D/ATHLET. In this paper the results are compared. Beside the comparison of global results, the behaviour of several thermohydraulic and neutron kinetic parameters is presented to discuss the revealed differences between the solutions. (Authors)

  7. Final results of the sixth three-dimensional AER dynamic Benchmark problem calculation. Solution of problem with DYN3D and RELAP5-3D codes

    International Nuclear Information System (INIS)

    The paper gives a brief survey of the results of the 6th three-dimensional AER dynamic benchmark calculation obtained with the codes DYN3D and RELAP5-3D at NRI Rez. This benchmark was defined at the 10th AER Symposium. Its initiating event is a double-ended break in the steam line of steam generator No. 1 in a WWER-440/213 plant at the end of the first fuel cycle and in hot full power conditions. Stationary and burnup calculations as well as tuning of the initial state before the transient were performed with the code DYN3D. Transient calculations were made with the system code RELAP5-3D. The KASSETA library was used for the generation of the reactor core neutronic parameters. The detailed six-loop model of NPP Dukovany was adapted for the purposes of the 6th AER dynamic benchmark. The RELAP5-3D full-core neutronic model was connected to a 37-channel thermal-hydraulic model of the core; a 6-sector nodalization of the reactor downcomer, lower plenum and upper plenum was used. Mixing in the lower and upper plenum was simulated. The first part of the paper contains a brief characterization of the RELAP5-3D system code and a short description of the NPP input deck and reactor core model. The second part shows the time dependencies of important global and local parameters. (Authors)

  8. Comparison of the updated solutions of the 6th dynamic AER Benchmark - main steam line break in a NPP with WWER-440

    International Nuclear Information System (INIS)

    The 6th dynamic AER benchmark is used for the systematic validation of coupled 3D neutron kinetic/thermal hydraulic system codes. It was defined at the 10th AER Symposium. In this benchmark, a hypothetical double-ended break of one main steam line at full power in a WWER-440 plant is investigated. The main thermal hydraulic features are the consideration of incomplete coolant mixing in the lower and upper plenum of the reactor pressure vessel and an asymmetric operation of the feed water system. For the tuning of the different nuclear cross section data used by the participants, an isothermal re-criticality temperature was defined. The paper gives an overview of the behaviour of the main thermal hydraulic and neutron kinetic parameters in the provided solutions. The differences between the updated solutions and the previous ones are described. Improvements in the modelling of the transient led to a better agreement for part of the results, while for another part the deviations increased. The sensitivity of the core power behaviour to the secondary side modelling is discussed in detail. (Authors)

  9. Simple Benchmark Specifications for Space Radiation Protection

    Science.gov (United States)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. The specification starts with simple, monoenergetic, mono-directional particles on slabs and progresses to human models in spacecraft. The report specifies the models and sources needed for the benchmarks and what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  10. Mixing effect in the solution of the AER6 Benchmark problem by KIKO3D/ATHLET code system

    International Nuclear Information System (INIS)

    The former result calculated with the coupled KIKO3D/ATHLET code system for the sixth dynamic benchmark problem is presented and compared with a new one. The only difference between the two calculations is a slightly different nodalization in the core vessel. Although it is physically plausible that the lack of mixing in the upper plenum causes a considerable change in the results, a rough nodalization above the core is widely used because it makes the modelling easier. The effect of this simplification is investigated. (Authors)

  11. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

    The CASL neutronics simulator MPACT is under development for coupled neutronics and thermal-hydraulics simulation of pressurized water reactors. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. It is a challenge to validate the depletion capability because of the scarcity of measured data. One alternative is to perform code-to-code comparisons for benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.

  12. Benchmark Specification for HTGR Fuel Element Depletion

    International Nuclear Information System (INIS)

    explicitly represent the dynamics of neutron slowing down in a heterogeneous environment with randomised grain distributions, but traditional tracking simulations can be extremely slow, and the large number of grains in a fuel element may often represent an extreme burden on computational resources. A number of approximations or simplifying assumptions have been developed to simplify the computational process and reduce the effort. Multi-group (MG) methods, on the other hand, require special treatment of DH fuels in order to properly capture resonance effects, and generally cannot explicitly represent a random distribution of grains due to the excessive computational burden resulting from the spatial grain distribution. The effect of such approximations may be important and has the potential to misrepresent the spectrum within a fuel grain. Depletion methods utilised in lattice calculations typically rely on point depletion methods, based on the isotopic inventory of the depleted fuel, assuming a single localised neutron flux. This flux is generally determined using either a CE or MG transport solver. Hence, in application to DH fuels, the primary factor influencing the accuracy of a depletion calculation will be the accuracy of the local flux calculated within the transport solution and the cross-sections. The current lack of well-qualified experimental measurements for spent HTGR fuel elements limits the validation of advanced DH depletion methods. Because of this shortage of data, this benchmark has been developed as the first, simplest phase in a planned series of increasingly complex code-to-code benchmarks. The intent of this benchmark is to encourage submission of a wide range of computational results for depletion calculations in a set of basic fuel cell models. Comparison of results using independent methods and data should provide insight into potential limitations in various modelling approximations. The benchmark seeks to provide the simplest possible models, in

  13. AER image filtering

    Science.gov (United States)

    Gómez-Rodríguez, F.; Linares-Barranco, A.; Paz, R.; Miró-Amarante, L.; Jiménez, G.; Civit, A.

    2007-05-01

    Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity among huge numbers of neurons located on different chips.[1] By exploiting high speed digital communication circuits (with nanosecond timing), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Neurons generate "events" according to their activity levels. That is, more active neurons generate more events per unit time and access the interchip communication channel more frequently than neurons with low activity. In neuromorphic system development, AER brings some advantages for developing real-time image processing systems: (1) AER represents the information as a time-continuous stream, not as frames; (2) AER sends the most important information first (although this depends on the sender); (3) AER allows information to be processed as soon as it is received. When AER is used in the artificial vision field, each pixel is considered as a neuron, so the pixel's intensity is represented as a sequence of events; by modifying the number and the frequency of these events, it is possible to perform image filtering. In this paper we present four image filters using AER: (a) noise addition and suppression, (b) brightness modification, (c) single moving object tracking and (d) geometrical transformations (rotation, translation, reduction and magnification). For testing and debugging, we use the USB-AER board developed by the Robotic and Technology of Computers Applied to Rehabilitation (RTCAR) research group. This board is based on an FPGA devoted to managing the AER functionality. The board also includes a microcontroller for USB communication, 2 Mbytes of RAM and 2 AER ports (one for input and one for output).
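
    The rate-coded pixel-to-event mapping described above can be illustrated in a few lines of code. The sketch below is a minimal software model only (it is not the FPGA firmware of the USB-AER board); the image, the time window, the maximum event rate and the brightness gain are all assumptions chosen for illustration.

        import numpy as np

        def intensity_to_events(image, t_window=0.01, max_rate=10_000.0, gain=1.0, rng=None):
            """Rate-code an 8-bit grey image into an AER-like event list.

            Each pixel acts as a neuron: brighter pixels emit more events per unit
            time. A gain > 1 brightens the image, < 1 darkens it (a simple AER filter).
            Returns a list of (timestamp, x, y) tuples sorted by time.
            """
            rng = rng or np.random.default_rng(0)
            rates = np.clip(image / 255.0 * max_rate * gain, 0.0, max_rate)
            events = []
            for (y, x), rate in np.ndenumerate(rates):
                n = rng.poisson(rate * t_window)      # events emitted in the window
                events.extend((t, x, y) for t in rng.uniform(0.0, t_window, n))
            return sorted(events)

        # toy 4x4 image: one bright pixel dominates the event stream
        img = np.zeros((4, 4), dtype=np.uint8)
        img[1, 2] = 200
        print(len(intensity_to_events(img)), "events at gain 1;",
              len(intensity_to_events(img, gain=2.0)), "events at gain 2")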

  14. Definition of the seventh dynamic AER benchmark-WWER-440 pressure vessel coolant mixing by re-connection of an isolated loop

    International Nuclear Information System (INIS)

    The seventh dynamic benchmark is a continuation of the efforts to systematically validate codes for the estimation of the transient behavior of VVER type nuclear power plants. This benchmark is a continuation of the work in the sixth dynamic benchmark. The transient to be simulated is the re-connection of an isolated circulation loop with low temperature or low boron concentration in a VVER-440 plant. It is intended to expand the benchmark to other cases in which a different number of loops are in operation, leading to different symmetric and asymmetric core boundary conditions. The purposes of the proposed benchmark are: 1) best-estimate simulation of a transient with coolant flow mixing in the reactor pressure vessel of a WWER-440 plant caused by the re-connection of one coolant loop to the several loops in operation, 2) performing code-to-code comparisons. The core is at the end of its first cycle with a power of 1196.25 MWt. The basic additional difference of the seventh benchmark is the detailed description of the downcomer and the bottom part of the reactor vessel, which allows describing the effects of coolant mixing in the reactor pressure vessel without any additional conservative assumptions. The burnup and the power distributions at this reactor state have to be calculated by the participants. The thermohydraulic conditions of the core at the beginning of the transient are specified. Participants' self-generated best-estimate nuclear data are to be used. The main geometrical parameters of the plant and the characteristics of the control and safety systems are also specified. User-generated input data decks developed for a WWER-440 plant and for the applied codes should be used. The behaviour of the plant should be studied applying coupled system codes, which combine a three-dimensional neutron kinetics description of the core with a pseudo or real 3D thermohydraulics system code. (Authors)

  15. Ensemble approach to predict specificity determinants: benchmarking and validation

    OpenAIRE

    Panchenko Anna R; Chakrabarti Saikat

    2009-01-01

    Abstract Background It is extremely important and challenging to identify the sites that are responsible for functional specification or diversification in protein families. In this study, a rigorous comparative benchmarking protocol was employed to provide a reliable evaluation of methods which predict the specificity determining sites. Subsequently, three best performing methods were applied to identify new potential specificity determining sites through ensemble approach and common agreeme...

  16. Continuation of the VVER burnup credit benchmark. Evaluation of CB1 results, overview of CB2 results to date, and specification of CB3

    International Nuclear Information System (INIS)

    A calculational benchmark focused on VVER-440 burnup credit, similar to that of the OECD/NEA/NSC Burnup Credit Benchmark Working Group, was proposed at the 1996 AER Symposium. Its first part, CB1, was specified there, whereas the second part, CB2, was specified a year later, at the 1997 AER Symposium in Zittau. A final statistical evaluation of the CB1 results is presented and the CB2 results obtained to date are summarized. Further, the effect of the axial burnup profile of VVER-440 spent fuel on criticality ('end effect') is proposed to be studied in the CB3 benchmark problem of an infinite array of VVER-440 spent fuel rods. (author)

  17. Research Reactor Benchmarking Database: Facility Specification and Experimental Data

    International Nuclear Information System (INIS)

    This web publication contains the facility specifications, experiment descriptions, and corresponding experimental data for nine different research reactors covering a wide range of research reactor types, power levels and experimental configurations. Each data set was prepared in order to serve as a stand-alone resource of well documented experimental data, which can subsequently be used in benchmarking and validation of the neutronic and thermal-hydraulic computational methods and tools employed for improved utilization, operation and safety analysis of research reactors

  18. Ensemble approach to predict specificity determinants: benchmarking and validation

    Directory of Open Access Journals (Sweden)

    Panchenko Anna R

    2009-07-01

    Background: It is extremely important and challenging to identify the sites that are responsible for functional specification or diversification in protein families. In this study, a rigorous comparative benchmarking protocol was employed to provide a reliable evaluation of methods which predict the specificity determining sites. Subsequently, the three best performing methods were applied to identify new potential specificity determining sites through an ensemble approach based on the common agreement of their prediction results. Results: It was shown that the analysis of structural characteristics of predicted specificity determining sites might provide the means to validate their prediction accuracy. For example, we found that, for smaller distances, the more reliable the prediction method is, the closer the predicted specificity determining sites are to each other and to the ligand. Conclusion: We observed certain similarities of structural features between predicted and actual subsites which might point to their functional relevance. We speculate that the majority of the identified potential specificity determining sites might be indirectly involved in specific interactions and could be ideal targets for mutagenesis experiments.
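
    The "common agreement" step described above can be sketched as a simple vote among the individual predictors: a position is accepted as a potential specificity determining site only if a minimum number of methods predict it. The method names, residue positions and vote threshold below are invented placeholders, not the data or the exact protocol of the study.

        from collections import Counter

        def consensus_sites(predictions, min_votes=2):
            """Keep residue positions predicted as specificity determining
            by at least 'min_votes' of the supplied methods."""
            votes = Counter(pos for sites in predictions.values() for pos in set(sites))
            return sorted(pos for pos, n in votes.items() if n >= min_votes)

        # hypothetical predicted site positions (alignment columns) from three methods
        predictions = {
            "method_A": {12, 45, 78, 102},
            "method_B": {12, 45, 90, 102},
            "method_C": {45, 78, 102, 150},
        }
        print(consensus_sites(predictions))   # -> [12, 45, 78, 102]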

  19. AER working group E meeting in 2008

    International Nuclear Information System (INIS)

    The thirteenth meeting of the AER Working Group E 'Physical Problems on Spent Fuel, Radwaste and Decommissioning of Nuclear Power Plants', organized by NRI Rez, was held in Rez, Czech Republic, on 3-4 April 2008. The meeting was focused on the following topics: depletion and criticality calculations (ETE cask, CB5 benchmark, ...), experience with the SCALE 5.1 depletion sequence TRITON, spent fuel disposal, PIE (post-irradiation examinations), uncertainties in criticality safety, boron credit implementation, and burnup credit implementation. The total number of participants was 14, from 6 countries. (authors)

  20. Assessment of Usability Benchmarks: Combining Standardized Scales with Specific Questions

    Directory of Open Access Journals (Sweden)

    Stephanie Bettina Linek

    2011-12-01

    The usability of Web sites and online services is of rising importance. When creating a completely new Web site, qualitative data are adequate for identifying most of the usability problems. However, changes to an existing Web site should be evaluated by a quantitative benchmarking process. This paper describes the creation of a questionnaire that allows quantitative usability benchmarking, i.e. a direct comparison of different versions of a Web site and an orientation towards general usability standards. The questionnaire is also open to qualitative data. The methodology is explained using the digital library services of the ZBW.

  1. Continuation of the WWER burnup credit benchmark: evaluation of CB1 results, overview of CB2 results to date, and specification of CB3

    International Nuclear Information System (INIS)

    A calculational benchmark focused on WWER-440 burnup credit, similar to that of the OECD/NEA/NSC Burnup Credit Criticality Benchmark Working Group, was proposed at the 1996 AER Symposium. Its first part, CB1, was specified there, whereas the second part, CB2, was specified a year later, at the 1997 AER Symposium in Zittau. This paper presents a final statistical evaluation of the CB1 results and summarizes the CB2 results obtained to date. Further, the effect of the axial burnup profile of WWER-440 spent fuel on criticality ('end effect') is proposed to be studied in the CB3 benchmark problem of an infinite array of WWER-440 spent fuel rods, as specified in the paper. (Authors)

  2. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  3. Adverse Event Reporting System (AERS)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Adverse Event Reporting System (AERS) is a computerized information database designed to support the FDA's post-marketing safety surveillance program for all...

  4. Results of the isotopic concentrations of VVER calculational burnup credit benchmark No. 2(CB2)

    International Nuclear Information System (INIS)

    Results are presented of the nuclide concentration calculations for VVER Burnup Credit Benchmark No. 2 (CB2) that were performed at the Nuclear Technology Center of Cuba with the available codes and libraries. The CB2 benchmark specification, as the second phase of the VVER burnup credit benchmark, is summarized. The CB2 benchmark is focused on the VVER burnup credit study proposed at the 1997 AER Symposium. The obtained results are isotopic concentrations of spent fuel as a function of burnup and cooling time. The point-depletion code ORIGEN2 and other codes were used for the calculation of the spent fuel concentrations. (author)

  5. Specification of a benchmarking methodology for alignment techniques

    OpenAIRE

    Euzenat, Jérôme; García Castro, Raúl; Ehrig, Marc

    2004-01-01

    This document considers potential strategies for evaluating ontology alignment algorithms. It identifies various goals for such an evaluation. In the context of the Knowledge Web network of excellence, the most important objective is the improvement of existing methods. We examine general evaluation strategies as well as efforts that have already been undertaken in the specific field of ontology alignment. We then put forward some methodological and practical guidelines for running such an eva...

  6. Embedded Volttron specification - benchmarking small footprint compute device for Volttron

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Woodworth, Ken [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nutaro, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kuruganti, Teja [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-08-17

    An embedded system is a small footprint computing unit that typically serves a specific purpose closely associated with measurement and control of hardware devices. These units are designed for reasonable durability and operation in a wide range of operating conditions. Some embedded systems support real-time operation and can demonstrate high levels of reliability. Many have failsafe mechanisms built in to handle graceful shutdown of the device in exception conditions. The available memory, processing power, and network connectivity of these devices are limited due to the nature of their specific-purpose design and intended application. Industry practice is to carefully design the software for the available hardware capability to suit the desired deployment needs. Volttron is an open source agent development and deployment platform designed to enable researchers to interact with devices and appliances without having to write drivers themselves. Hosting Volttron on small footprint embeddable devices enables its demonstration for embedded use. This report details the steps required and the experience in setting up and running Volttron applications on three small footprint devices: the Intel Next Unit of Computing (NUC), the Raspberry Pi 2, and the BeagleBone Black. In addition, the report details a preliminary investigation of the execution performance of Volttron on these devices.

  7. AER working group D on WWER safety analysis. Report of the meeting in Garching, Germany, 6-7 April 2005

    International Nuclear Information System (INIS)

    AER working group D on WWER reactor safety analysis held its fourteenth meeting in the offices of GRS in Garching near Munich during the period 6-7 April 2005. The meeting followed the third workshop on the OECD/DOE/CEA WWER-1000 Coolant Transient Benchmark (V1000-CT) held at the same location on 4-5 April. Altogether 18 participants attended the Working Group D meeting, 12 from AER member organizations and 6 guests from non-member organizations. The coordinator of the working group, Mr. P. Siltanen (FNS), served as chairman. In addition to a general information exchange on recent activities in the participating organizations, the topics of the meeting included: a) Code development and benchmarking for reactor dynamics applications; b) Safety analysis methodology and results; c) Dynamic benchmarks and solutions for the AER Benchmark Book; d) Future activities. (Authors)

  8. Pan-specific MHC class I predictors: A benchmark of HLA class I pan-specific prediction methods

    DEFF Research Database (Denmark)

    Zhang, Hao; Lundegaard, Claus; Nielsen, Morten

    2009-01-01

    ... emerging pathogens. Methods have recently been published that are able to predict peptide binding to any human MHC class I molecule. In contrast to conventional allele-specific methods, these methods do allow for extrapolation to un-characterized MHC molecules. These pan-specific HLA predictors have not ... MHCpan methods. Conclusions: The benchmark demonstrated that pan-specific methods do provide accurate predictions also for previously uncharacterized MHC molecules. The NetMHCpan method trained to predict actual binding affinities was consistently top ranking both on quantitative (affinity) and binary (ligand) data. However, the KISS method trained to predict binary data was one of the best performing when benchmarked on binary data. Finally, a consensus method integrating predictions from the two best-performing methods was shown to improve the prediction accuracy.
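
    One common way to build such a consensus is to average rank-normalized scores from the individual predictors. The sketch below only illustrates that idea with made-up peptide scores; it is not the actual NetMHCpan/KISS combination evaluated in the benchmark.

        def rank_normalize(scores):
            """Map raw scores to percentile ranks in [0, 1]; higher means a stronger predicted binder."""
            order = sorted(scores, key=scores.get)
            return {pep: i / (len(order) - 1) for i, pep in enumerate(order)}

        def consensus(score_sets):
            """Average the rank-normalized scores of several predictors."""
            ranked = [rank_normalize(s) for s in score_sets]
            return {pep: sum(r[pep] for r in ranked) / len(ranked) for pep in ranked[0]}

        # hypothetical predicted binding scores from two pan-specific methods
        method_1 = {"SLYNTVATL": 0.9, "GILGFVFTL": 0.7, "AAAWYLWEV": 0.2}
        method_2 = {"SLYNTVATL": 0.8, "GILGFVFTL": 0.9, "AAAWYLWEV": 0.1}
        for pep, score in sorted(consensus([method_1, method_2]).items(), key=lambda kv: -kv[1]):
            print(pep, round(score, 2))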

  9. AerChemMIP: Quantifying the effects of chemistry and aerosols in CMIP6

    OpenAIRE

    Collins, William J; Lamarque, Jean-François; Schulz, Michael; Boucher, Olivier; Eyring, Veronika; Hegglin, Michaela I.; Maycock, Amanda; Myhre, Gunnar; Prather, Michael; Shindell, Drew; Smith, Steven J.

    2016-01-01

    The Aerosol Chemistry Model Intercomparison Project (AerChemMIP) is endorsed by the Coupled-Model Intercomparison Project 6 (CMIP6) and is designed to quantify the climate and air quality impacts of aerosols and chemically-reactive gases. These are specifically near-term climate forcers (NTCFs: tropospheric ozone and aerosols, and their precursors), methane, nitrous oxide and ozone-depleting halocarbons. The aim of AerChemMIP is to answer four scientific questions: 1. How have anthropogeni...

  10. AER Working Group D on VVER safety analysis - report of the 2009 meeting

    International Nuclear Information System (INIS)

    The AER Working Group D on VVER reactor safety analysis held its 18th meeting in Rez, Czech Republic, on 18-19 May 2009. The meeting was hosted by the Nuclear Research Institute Rez. Altogether 17 participants attended the meeting of Working Group D, 16 from AER member organizations and 1 guest from a non-member organization. The co-ordinator of the working group, S. Kliem, served as chairman of the meeting. The meeting started with a general information exchange about the recent activities in the participating organizations. The presentations given and the discussions can be attributed to the following topics: 1) Code validation and benchmarking; 2) Safety analysis and code developments; 3) Reactor pressure vessel thermal hydraulics; 4) Future activities, including a discussion of the participation in the OECD/NEA benchmark for the Kalinin-3 NPP.

  11. Results of the isotopic concentrations of VVER calculational burnup credit benchmark no. 2(cb2

    International Nuclear Information System (INIS)

    The characterization of irradiated fuel materials is becoming more important with the increasing use of nuclear energy in the world. The purpose of this document is to present the results of the nuclide concentrations calculated for VVER Burnup Credit Benchmark No. 2 (CB2). The calculations were performed at the Nuclear Technology Center of Cuba. The CB2 benchmark specification, as the second phase of the VVER burnup credit benchmark, is summarized in [1]. The CB2 benchmark is focused on the VVER burnup credit study proposed at the 1997 AER Symposium [2]. It should provide a comparison of the ability of various code systems and data libraries to predict VVER-440 spent fuel isotopes (isotopic concentrations) using depletion analysis. This phase of the benchmark calculations is still in progress. CB2 should be finished by summer 1999 and the evaluated results could be presented at the next AER Symposium. The obtained results are isotopic concentrations of spent fuel as a function of burnup and cooling time. The point-depletion code ORIGEN2 [3] was used for the calculation of the spent fuel concentrations. The depletion analysis was performed for VVER-440 irradiated fuel assemblies with an in-core irradiation time of 3 years, a burnup of 30,000 MWd/tU, and an after-discharge cooling time of 0 and 1 year. This work also comprises the results obtained by other codes [4].
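
    The dependence on cooling time noted above is governed mainly by radioactive decay of the discharged inventory, N(t) = N0 * exp(-ln 2 * t / T_half). The sketch below applies this relation to two illustrative fission products with approximate half-lives; the initial concentrations are arbitrary placeholders, not the CB2 values, and no daughter build-up is modelled, unlike a full ORIGEN2 calculation.

        import math

        HALF_LIFE_Y = {"Cs-137": 30.1, "Cs-134": 2.06}   # approximate half-lives in years

        def decay(n0, half_life_y, cooling_y):
            """Concentration after 'cooling_y' years of pure radioactive decay."""
            return n0 * math.exp(-math.log(2.0) * cooling_y / half_life_y)

        # placeholder discharge concentrations (arbitrary units)
        inventory = {"Cs-137": 1.0e-3, "Cs-134": 1.0e-4}
        for cooling in (0.0, 1.0):
            cooled = {iso: decay(n0, HALF_LIFE_Y[iso], cooling) for iso, n0 in inventory.items()}
            print(f"cooling {cooling} y:", {iso: f"{n:.3e}" for iso, n in cooled.items()})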

  12. Simplified benchmark based on 2670 ISTC WWER post-irradiation examinations - specification and preliminary results

    International Nuclear Information System (INIS)

    Experimental validation of depletion computer codes is an ongoing need in spent fuel management. In the WWER application area, the lack of well-documented experimental data concerning depleted fuel is serious, being an obstacle to the introduction of new, effective technologies and approaches in spent fuel management, e.g. burnup credit (BUC). In 2004, the final report of the ISTC 2670 project on post-irradiation examinations (PIE) of eight samples from Novovoronezh-4 NPP (specimens taken from one assembly covering a burnup range from 22 to 45 MWd/kgU) was released and published. The ISTC 2670 WWER-440 post-irradiation examination is the first publicly available measurement that also provides fission product concentrations for the 'BUC set' of isotopes. Although the documentation of the experiment was quite comprehensive, some important data needed for a precise depletion simulation were still missing. Therefore, in 2006, NRI, in collaboration with RIAR Dimitrovgrad, where the measurements were carried out, gathered the missing data and prepared a well-specified simplified benchmark based on this measurement. Its specification, as well as results of preliminary calculations using several depletion codes, are presented in this paper. A final evaluation of the results calculated by all benchmark participants is expected to be presented in 2008. (Authors)

  13. The PRISM Benchmark Suite

    OpenAIRE

    Kwiatkowsa, Marta; Norman, Gethin; Parker, David

    2012-01-01

    We present the PRISM benchmark suite: a collection of probabilistic models and property specifications, designed to facilitate testing, benchmarking and comparisons of probabilistic verification tools and implementations.

  14. AER Working Group B activities in 2001

    International Nuclear Information System (INIS)

    A review of the AER Working Group B meeting in Plzen, Czech Republic, is given. The regular meeting of the Core Design Group was organized by SKODA JS, Inc. and held at Plzen-Bolevec, Czech Republic, on May 21-22, 2001, together with Working Group A. (Authors)

  15. CSEWG shielding benchmark specifications: neutron attenuation measurements in a mockup of the FFTF radial shield. STD 9

    Energy Technology Data Exchange (ETDEWEB)

    Rose, P. F.; Alter, H.; Paschall, R. K.; Thiele, A. W.

    1973-01-15

    The experimental details and the calculational specifications for a CSEWG integral data test shielding experiment are presented. The shielding experiment described in the benchmark model is a combination of sodium and stainless steel that simulates the FFTF radial shield. The measurements in general include the use of foil activation techniques with resonance and threshold detectors and proton recoil neutron spectrometer measurements in the range 5 keV to 2 MeV. The benchmark model is a test of the neutron cross-section data for sodium and the material components of stainless steel.

  16. ''FULL-CORE'' VVER-440 calculation benchmark

    International Nuclear Information System (INIS)

    Because of the difficulties with experimental validation of the power distribution predicted by macro-codes on the pin-by-pin level, we decided to prepare a calculation benchmark named ''FULL-CORE'' VVER-440. This is a two-dimensional (2D) calculation benchmark based on the VVER-440 reactor core cold state geometry, taking into account the explicit geometry of the radial reflector. The main task of this benchmark is to test the pin-by-pin power distribution in fuel assemblies predicted by the macro-codes that are used for neutron-physics calculations, especially for VVER-440 reactors. The proposal of this benchmark was presented at the 21st Symposium of AER in 2011. The reference solution has been calculated with the MCNP code using the Monte Carlo method and the results have been published in the AER community. The results of the reference calculation were presented at the 22nd Symposium of AER in 2012. In this paper we compare the available macro-code results for this calculation benchmark.
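
    A comparison of macro-code pin-by-pin power maps against the Monte Carlo reference typically reports the maximum and RMS relative deviations after both maps are normalized to the same mean pin power. The sketch below shows only that bookkeeping for a made-up 3x3 excerpt; it is not the benchmark's official evaluation procedure or data.

        import numpy as np

        def compare_pin_powers(macro, reference):
            """Return (max, rms) relative deviation in percent between two pin power maps,
            each normalized so that its mean pin power equals 1."""
            macro = np.asarray(macro, dtype=float)
            reference = np.asarray(reference, dtype=float)
            macro /= macro.mean()
            reference /= reference.mean()
            rel = (macro - reference) / reference * 100.0
            return np.abs(rel).max(), np.sqrt(np.mean(rel ** 2))

        # made-up 3x3 excerpt of a pin power map (macro-code vs Monte Carlo reference)
        macro = [[1.02, 0.98, 1.05], [0.97, 1.00, 1.01], [1.04, 0.95, 0.99]]
        ref   = [[1.03, 0.97, 1.04], [0.98, 1.00, 1.02], [1.02, 0.96, 0.98]]
        dev_max, dev_rms = compare_pin_powers(macro, ref)
        print(f"max deviation {dev_max:.2f} %, rms deviation {dev_rms:.2f} %")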

  17. Neutron Reference Benchmark Field Specification: ACRR Free-Field Environment (ACRR-FF-CC-32-CL).

    Energy Technology Data Exchange (ETDEWEB)

    Vega, Richard Manuel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Parma, Edward J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Griffin, Patrick J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vehar, David W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity free-field reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.

  18. VLSI implementation of a 2.8 Gevent/s packet based AER interface with routing and event sorting functionality

    Directory of Open Access Journals (Sweden)

    Stefan Scholze

    2011-10-01

    State-of-the-art large scale neuromorphic systems require sophisticated spike event communication between units of the neural network. We present a high-speed communication infrastructure for a waferscale neuromorphic system, based on application-specific neuromorphic communication ICs in an FPGA-maintained environment. The ICs implement configurable axonal delays, as required for certain types of dynamic processing or for emulating spike based learning among distant cortical areas. Measurements are presented which show the efficacy of these delays in influencing the behaviour of neuromorphic benchmarks. The specialized, dedicated AER communication in most current systems requires separate, low-bandwidth configuration channels. In contrast, the configuration of the waferscale neuromorphic system is also handled by the digital packet-based pulse channel, which transmits configuration data at the full bandwidth otherwise used for pulse transmission. The overall so-called pulse communication subgroup (ICs and FPGA) delivers a factor of 25-50 higher event transmission rate than other current neuromorphic communication infrastructures.
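
    The configurable axonal delays mentioned above can be modelled in software as a time-ordered buffer: each incoming address-event is re-queued with a per-source delay and released in timestamp order. The sketch below is a plain-software model of that behaviour, not the VLSI implementation; the delay table and the events are arbitrary assumptions.

        import heapq

        class DelayRouter:
            """Buffer address-events and release them after a per-source axonal delay."""

            def __init__(self, delays_us):
                self.delays_us = delays_us      # source address -> delay in microseconds
                self._queue = []                # min-heap of (release_time, address)

            def push(self, timestamp_us, address):
                delay = self.delays_us.get(address, 0)
                heapq.heappush(self._queue, (timestamp_us + delay, address))

            def pop_until(self, now_us):
                """Yield all events whose delayed release time has already passed."""
                while self._queue and self._queue[0][0] <= now_us:
                    yield heapq.heappop(self._queue)

        # two source neurons with different configured delays (microseconds)
        router = DelayRouter({0x01: 50, 0x02: 200})
        router.push(0, 0x01)
        router.push(0, 0x02)
        print(list(router.pop_until(100)))   # -> [(50, 1)]; the 200 us event is still buffered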

  19. Estudio de impacto ambiental de un aeródromo

    Directory of Open Access Journals (Sweden)

    Gómez Orea, Domingo

    1996-04-01

    Airports and aerodromes are transport infrastructures which, apart from contributing to the mobility of people and goods, favor social development, since they promote new activities, stimulate local initiatives and reassess bordering areas. This article presents a synthesis of the Environmental Impact Study of a future aerodrome, whose design project is under way at present. The aerodrome is subject to the EIA administrative procedure due to the currently applicable specific legislation: R.D. 1302/86. The technical document submitted for study is the special plan of the aerodrome. One of the basic criteria in the conception of airports and private airfields is compatibility with the habitability of the environment as well as with the ecological and landscape conditions. These criteria should play a part in the orientation of the runways, the trajectory of the take-off and landing manoeuvres, and even in the location and design of the parking spaces, hangars and other facilities. This idea suggests that the design be conceived with environmental sensibility right from the initial stages, without leaving the responsibility for this issue to the environmental impact study. The present study has its own style of presentation, based on the idea that the reader will find the methodological aspects more useful than the technical data, which are of consequence only in the handling of the different project stages. Also considered important for the reader are the aspects which allowed the team of authors to form their criteria on the issues of expenses, environmental benefits of the design, its acceptability, etc. The methodology applied is a classical one, in accordance with the requirements of the EIA regulations.

  20. SMART- Small Motor AerRospace Technology

    Science.gov (United States)

    Balucani, M.; Crescenzi, R.; Ferrari, A.; Guarrea, G.; Pontetti, G.; Orsini, F.; Quattrino, L.; Viola, F.

    2004-11-01

    This paper presents the "SMART" (Small Motor AerRospace Technology) propulsion system, consisting of a microthruster array realised by semiconductor technology on silicon wafers. The SMART system is obtained by gluing together three main modules: combustion chambers, igniters and nozzles. The module was then filled with propellant and closed by gluing a piece of silicon wafer on the back side of the combustion chambers. The complete assembled module, composed of 25 micro-thrusters with a 3 x 5 nozzle, is presented. The measurements showed a thrust of 129 mN and an impulse of 56.8 mNs, burning about 70 mg of propellant, for the micro-thruster with nozzle, and a thrust of 21 mN and an impulse of 8.4 mNs for the micro-thruster without nozzle.

  1. Efeitos do estado e especificidade do treinamento aeróbio na relação %VO2max versus %FCmax durante o ciclismo Effects of the state and specificity of aerobic training on the %VO2max versus %HRmax ratio during cycling

    Directory of Open Access Journals (Sweden)

    Fabrizio Caputo

    2005-01-01

    OBJECTIVE: To determine the effects of the status and specificity of aerobic training on the relationship between the percentage of maximal oxygen uptake (%VO2max) and the percentage of maximal heart rate (%HRmax) during incremental exercise performed on a cycle ergometer. METHODS: Seven runners, 9 cyclists, 11 triathletes, and 12 sedentary individuals, all male and apparently healthy, underwent an incremental test to exhaustion on a cycle ergometer. Linear regressions between %VO2max and %HRmax were determined for each individual. Based on these regressions, the %HRmax corresponding to given %VO2max values (50, 60, 70, 80, and 90%) was calculated for each participant. RESULTS: No significant differences were found between the groups in %HRmax for any of the %VO2max values assessed. Analyzing the volunteers as a single group, the mean %HRmax values corresponding to 50, 60, 70, 80, and 90% VO2max were 67, 73, 80, 87, and 93%, respectively. CONCLUSION: In the groups analyzed, the relationship between %VO2max and %HRmax during incremental cycling exercise does not depend on the status or specificity of aerobic training.
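
    A minimal version of the analysis described above - fitting a linear %HRmax versus %VO2max relationship for each subject and then reading off %HRmax at fixed %VO2max levels - can be written as follows. The incremental-test data points below are invented for illustration and are not the measured values from this study.

        import numpy as np

        def hrmax_at(vo2_pct, hr_pct, targets=(50, 60, 70, 80, 90)):
            """Fit %HRmax = a * %VO2max + b for one subject and evaluate at the target %VO2max values."""
            a, b = np.polyfit(vo2_pct, hr_pct, 1)
            return {t: a * t + b for t in targets}

        # invented incremental-test data for one subject (%VO2max, %HRmax pairs)
        vo2 = [40, 55, 65, 75, 85, 100]
        hr  = [60, 70, 77, 84, 90, 100]
        for target, hr_pred in hrmax_at(vo2, hr).items():
            print(f"{target}% VO2max -> {hr_pred:.0f}% HRmax")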

  2. AER Working Group D on VVER safety analysis minutes of the meeting in Rez, Czech Republic 18-20 May 1998

    International Nuclear Information System (INIS)

    AER Working Group D on VVER reactor safety analysis held its seventh meeting in Hotel Vltava in Rez near Prague during the period 18-20 May 1998. There were altogether 11 participants from 8 member organisations. The coordinator for the working group, Mr. P. Siltanen (IVO) served as chairman. In addition to the general information exchange on recent activities, the topics of the meeting included: First review of solutions to the 3-dimensional AER Dynamic Benchmark Problem No. 5 on a steam line break accident. This benchmark involves a break of the main steam header. Safety analysis of reactivity events. Recent code development work and fuel behaviour. Coolant mixing calculations and experiments related to diluted slugs. A list of participants and a list of handouts distributed at the meeting are attached to the minutes. (author)

  3. Benchmarking Deep Networks for Predicting Residue-Specific Quality of Individual Protein Models in CASP11

    Science.gov (United States)

    Liu, Tong; Wang, Yiheng; Eickholt, Jesse; Wang, Zheng

    2016-01-01

    Quality assessment of a protein model aims to predict the absolute or relative quality of a protein model using computational methods before the native structure is available. Single-model methods only need one model as input and can predict the absolute residue-specific quality of an individual model. Here, we have developed four novel single-model methods (Wang_deep_1, Wang_deep_2, Wang_deep_3, and Wang_SVM) based on stacked denoising autoencoders (SdAs) and support vector machines (SVMs). We evaluated these four methods along with six other methods participating in CASP11 at the global and local levels using Pearson's correlation coefficients and ROC analysis. As for residue-specific quality assessment, our four methods achieved better performance than most of the six other CASP11 methods in distinguishing reliably modeled residues from unreliable ones, as measured by ROC analysis; our SdA-based method Wang_deep_1 achieved the highest accuracy, 0.77, compared to the SVM-based methods and our ensemble of an SVM and SdAs. However, we found that Wang_deep_2 and Wang_deep_3, both based on an ensemble of multiple SdAs and an SVM, performed slightly better than Wang_deep_1 in terms of ROC analysis, indicating that integrating an SVM with deep networks works well in terms of certain measurements.
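
    The residue-level ROC analysis mentioned above can be reproduced with a rank-based AUC computation, as sketched below; the per-residue quality scores and the reliable/unreliable labels are invented placeholders rather than CASP11 data.

        def roc_auc(scores, labels):
            """Rank-based ROC AUC: probability that a randomly chosen reliably modelled residue
            receives a higher predicted quality score than a randomly chosen unreliable one."""
            pos = [s for s, y in zip(scores, labels) if y == 1]
            neg = [s for s, y in zip(scores, labels) if y == 0]
            wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
            return wins / (len(pos) * len(neg))

        # invented per-residue quality scores and reliability labels (1 = reliably modelled)
        scores = [0.9, 0.8, 0.75, 0.6, 0.4, 0.3, 0.2]
        labels = [1, 1, 0, 1, 0, 0, 1]
        print(f"AUC = {roc_auc(scores, labels):.2f}")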

  5. NUPEC BWR Full-size Fine-mesh Bundle Test (BFBT) Benchmark. Volume II: uncertainty and sensitivity analyses of void distribution and critical power - Specification

    International Nuclear Information System (INIS)

    This report provides the specification for the uncertainty exercises of the international OECD/NEA, NRC and NUPEC BFBT benchmark problem, including the elemental task. The specification was prepared jointly by Pennsylvania State University (PSU), USA, and the Japan Nuclear Energy Safety (JNES) Organisation, in cooperation with the OECD/NEA and the Commissariat à l'énergie atomique (CEA Saclay, France). The work is sponsored by the US NRC, METI-Japan, the OECD/NEA and the Nuclear Engineering Program (NEP) of Pennsylvania State University. This uncertainty specification covers the fourth exercise of Phase I (Exercise I-4) and the third exercise of Phase II (Exercise II-3) as well as the elemental task. The OECD/NRC BFBT benchmark provides a very good opportunity to apply uncertainty analysis (UA) and sensitivity analysis (SA) techniques and to assess the accuracy of thermal-hydraulic models for two-phase flows in rod bundles. During previous OECD benchmarks, participants usually carried out sensitivity analyses on their models for the specification (initial conditions, boundary conditions, etc.) to identify the most sensitive models and/or to improve the computed results. The comprehensive BFBT experimental database (NEA, 2006) leads us one step further in investigating modelling capabilities by taking uncertainty analysis into account in the benchmark. The uncertainties in the input data (boundary conditions) and geometry (provided in the benchmark specification) as well as the uncertainties in code models can be accounted for to produce results with calculational uncertainties and compare them with the measurement uncertainties. Therefore, uncertainty analysis exercises were defined for the void distribution and critical power phases of the BFBT benchmark. This specification is intended to provide definitions related to UA/SA methods, sensitivity/uncertainty parameters, suggested probability distribution functions (PDF) of sensitivity parameters, and selected
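
    The kind of input-uncertainty propagation defined in these exercises can be illustrated with a plain Monte Carlo loop: boundary conditions are sampled from their assumed PDFs, pushed through the model, and the spread of the output is reported. In the sketch below the "model" is a crude stand-in correlation (not a real sub-channel or system code), and the nominal values and standard deviations are placeholders, not the BFBT specification values.

        import numpy as np

        rng = np.random.default_rng(42)
        N = 10_000

        # placeholder boundary conditions sampled from normal PDFs (nominal, 1-sigma)
        pressure_mpa = rng.normal(7.2, 0.05, N)     # outlet pressure
        mass_flux    = rng.normal(1500, 15.0, N)    # kg/m2/s
        power_mw     = rng.normal(4.5, 0.045, N)    # bundle power

        def void_fraction(p, g, q):
            """Crude stand-in correlation for exit void fraction (illustration only)."""
            return np.clip(0.05 + 0.12 * q - 0.02 * (p - 7.0) - 1.0e-5 * (g - 1500), 0.0, 1.0)

        alpha = void_fraction(pressure_mpa, mass_flux, power_mw)
        print(f"exit void fraction: mean = {alpha.mean():.3f}, 1-sigma = {alpha.std():.3f}")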

  6. PSA methods for technical specifications: insight gained from the reliability benchmark exercises and from the development of computerised support systems

    International Nuclear Information System (INIS)

    This paper describes the philosophy, the objectives and the lessons learned from the Reliability Benchmark Exercises (RBE), organized by the Joint Research Center (JRC) Ispra of the Commission of the European Communities and carried out over several years within a worldwide community of users and developers of Probabilistic Safety Assessment (PSA) methods and applications. The causes of uncertainties and the importance of the modelling uncertainties, revealed by the exercises, lead to a variety of observations also on the use of reliability methods for the definition of the technical specifications, including the limiting conditions for operation, the requirements of surveillance testing, the safety system set point limits and the administrative controls. In particular, it is argued that the use of PSA techniques as a source of information for safe operability of the plant requires validated system models, which might be better achieved by means of computerised analysis tools. These are helpful both in the design phase and during operation, when the operator or the surveyor has to define, case by case, the boundary conditions for the case at hand. In this sense, computerised analysis tools are being studied and developed within JRC Ispra with the objective of improving and further exploiting the application of appropriate reliability analyses of plants. The results obtained so far are presented and, finally, the perspectives of this work are discussed in terms of advantages, needs and characteristics of the information system for the optimization of plant management and control.

  7. Benchmarking HRD.

    Science.gov (United States)

    Ford, Donald J.

    1993-01-01

    Discusses benchmarking, the continuous process of measuring one's products, services, and practices against those recognized as leaders in that field to identify areas for improvement. Examines ways in which benchmarking can benefit human resources functions. (JOW)

  8. Multidimensional benchmarking

    OpenAIRE

    Campbell, Akiko

    2016-01-01

    Benchmarking is a process of comparison between performance characteristics of separate, often competing organizations intended to enable each participant to improve its own performance in the marketplace (Kay, 2007). Benchmarking sets organizations’ performance standards based on what “others” are achieving. Most widely adopted approaches are quantitative and reveal numerical performance gaps where organizations lag behind benchmarks; however, quantitative benchmarking on its own rarely yi...

  9. Corrections and additions to the proposal of a benchmark for core burnup calculations for a WWER-1000 reactor

    International Nuclear Information System (INIS)

    At the nineteenth AER Symposium a benchmark on core burnup calculations for WWER-1000 reactors was proposed for further validation and verification of reactor physics code systems. The work was continued in the framework of a project supported by the German BMU. During the preparation of the calculation results, corrections, refinements and additions to the benchmark specification were made. The benchmark includes two stages: the first stage comprises the data library preparation for all fuel assembly types used in the core loadings; the second stage consists of the 3D core burnup calculation together with calculations of critical states for hot zero power conditions. The benchmark specification contains the description of the fuel assemblies (FA) for the few-group data preparation, the core loading patterns and the load follow, as well as a set of reference data such as the boric acid concentration in the coolant, cycle length, measured reactivity coefficients and power density distributions for successive cycles of a WWER-1000 reactor core. Different reactor physics codes were used to produce solutions. FA burnup codes such as NESSEL, CASMO or HELIOS were used for data preparation. The core calculations were performed using codes such as DYN3D and TRAPEZ, as well as several data libraries. The results of the calculations made by different organisations (IBBS, FZD, SSTC) are presented and discussed. The data needed to produce solutions, as well as most of the calculated data, are attached in the appendices of the paper. (Authors)

  10. Benchmarking and regulation

    OpenAIRE

    Agrell, Per Joakim; Bogetoft, Peter

    2013-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publication...
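
    DEA efficiency scores of the kind used for regulatory benchmarking come from solving one small linear program per unit. The sketch below is a generic input-oriented CCR envelopment model with invented data for three hypothetical operators; it illustrates only the underlying computation, not the data validation, model specification or outlier detection steps discussed above.

        # Minimal input-oriented CCR DEA sketch (hypothetical data, not a regulatory model).
        import numpy as np
        from scipy.optimize import linprog

        # rows = DMUs (e.g. distribution system operators); columns = inputs / outputs
        X = np.array([[20.0, 300.0],   # inputs:  staff, network length
                      [30.0, 200.0],
                      [40.0, 500.0]])
        Y = np.array([[100.0],         # outputs: energy delivered
                      [ 80.0],
                      [150.0]])

        def ccr_efficiency(o, X, Y):
            """Efficiency score of DMU `o` under constant returns to scale."""
            n, m = X.shape                       # n DMUs, m inputs
            s = Y.shape[1]                       # s outputs
            c = np.zeros(1 + n)
            c[0] = 1.0                           # minimise theta
            A_ub, b_ub = [], []
            for i in range(m):                   # sum_j lam_j * x_ji <= theta * x_oi
                A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
                b_ub.append(0.0)
            for r in range(s):                   # sum_j lam_j * y_jr >= y_or
                A_ub.append(np.concatenate(([0.0], -Y[:, r])))
                b_ub.append(-Y[o, r])
            bounds = [(0, None)] * (1 + n)       # theta >= 0, lambda >= 0
            res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
            return res.x[0]

        for o in range(X.shape[0]):
            print(f"DMU {o}: efficiency = {ccr_efficiency(o, X, Y):.3f}")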

  11. Financial benchmarking

    OpenAIRE

    Boldyreva, Anna

    2014-01-01

    This bachelor's thesis is focused on financial benchmarking of TULIPA PRAHA s.r.o. The aim of this work is to evaluate the financial situation of the company, identify its strengths and weaknesses, and find out how efficiently the company performs in comparison with top companies within the same field, using the INFA benchmarking diagnostic system of financial indicators. The theoretical part includes the characteristics of financial analysis, which financial benchmarking is based on a...

  12. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added in...... order to obtain a unique selection...

  13. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and to explore alternative improvement strategies by selecting and searching the different frontiers using directional...... suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency....

  14. Precious benchmarking

    International Nuclear Information System (INIS)

    Recently, a new word has been added to our vocabulary - benchmarking. Because of benchmarking, our colleagues travel to power plants all around the world and guests from European power plants visit us. We asked Marek Niznansky from the Nuclear Safety Department of the Jaslovske Bohunice NPP to explain this term to us. (author)

  15. A benchmark-problem specification and calculation using SENSIBL, a one- and two-dimensional sensitivity and uncertainty analysis code of the AARE system

    International Nuclear Information System (INIS)

    The lack of suitable benchmark problems makes it difficult to test sensitivity codes with a covariance library. A benchmark problem has therefore been defined for one- and two-dimensional sensitivity and uncertainty analysis codes and code systems. The problem, representative of a fusion reactor blanket, has a simple, three-zone r-z geometry containing a D-T fusion neutron source distributed in a central void region surrounded by a thick 6LiH annulus. The response of interest is the 6Li tritium production per source neutron, T6. The calculation has been performed with SENSIBL using other codes from the AARE code system as a test of both SENSIBL and the linked, modular system. The calculation was performed using the code system in the standard manner with a covariance data library in the COVFILS-2 format but modified to contain specifically tailored covariance data for H and 6Li (Path A). The calculation was also performed by a second method which uses specially perturbed H and Li cross sections (Path B). This method bypasses SENSIBL and allows a hand calculation of the benchmark T6 uncertainties. The results of Path A and Path B were total uncertainties in T6 of 0.21% and 0.19%, respectively. The closeness of the results for this challenging test gives confidence that SENSIBL and the AARE system will perform well for realistic sensitivity and uncertainty analyses
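
    The total uncertainties quoted above come from folding sensitivity profiles with covariance data. As a generic illustration of that step (not the SENSIBL/AARE implementation, and with invented numbers rather than the benchmark's H and 6Li data), the "sandwich rule" variance S^T C S can be evaluated directly:

        # Illustrative "sandwich rule": relative response variance = S^T C S.
        # Sensitivities and covariances below are made up for demonstration.
        import numpy as np

        # relative sensitivities of the response to two cross sections
        S = np.array([0.9, -0.15])

        # relative covariance matrix of those cross sections (variances on the diagonal)
        C = np.array([[4.0e-6, 1.0e-6],
                      [1.0e-6, 9.0e-6]])

        rel_var = S @ C @ S
        print(f"relative uncertainty in the response = {np.sqrt(rel_var) * 100:.3f} %")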

  16. CB2 result evaluation (VVER-440 burnup credit benchmark)

    International Nuclear Information System (INIS)

    The second portion of the four-piece international calculational benchmark on the VVER burnup credit (CB2), prepared in collaboration with the OECD/NEA/NSC Burnup Credit Criticality Benchmarks Working Group and proposed to the AER research community, has been evaluated. The evaluated results of calculations performed by analysts from Cuba, the Czech Republic, Finland, Germany, Russia, Slovakia and the United Kingdom are presented. The goal of this study is to compare isotopic concentrations calculated by the participants using various codes and libraries for depletion of the VVER-440 fuel pin cell. No measured values were available for the comparison. (author)

  17. Information about AER WG a on improvement, extension and validation of parametrized few-group libraries for VVER 440 and VVER 1000

    International Nuclear Information System (INIS)

    The eighteenth joint meeting of AER Working Group A on 'Improvement, extension and validation of parameterized few-group libraries for VVER-440 and VVER-1000' and AER Working Group B on 'Core design' was hosted by Skoda JS a.s. in Plzen (Czech Republic) from 4 to 6 May 2009. Altogether 16 participants from 6 member organizations were present and 13 presentations were given. The objectives of the WG A meeting are issues connected with spectral calculations and few-group library preparation, their accuracy and validation. Presentations were devoted to some aspects of few-group library preparation and to the benchmark dealing with VVER-440 follower modelling in calculations. Gy. Hegyi gave new information about the NURESIM-NURISP EU project (ZR-6), R. Zajac spoke about the development of data libraries for the codes BIPR-7 and PERMAK, P. Darilek compared FAs with Gd during the burnup process, and Yu. Bilodid described further development of plutonium-based burnup history modelling in DYN3D burnup calculations. G. Hordosy presented results of the control rod follower induced local power peaking computational benchmark, and J. Svarny described Monte Carlo VVER-440 control rod follower benchmark computations. Future activities are also briefly described at the end of the paper. (author)

  18. Information about AER WG A on improvement, extension and validation of parametrized few-group libraries for VVER 440 and VVER 1000

    International Nuclear Information System (INIS)

    The nineteenth joint meeting of AER Working Group A on 'Improvement, extension and validation of parameterized few-group libraries for WWER-440 and WWER-1000' and AER Working Group B on 'Core design' was hosted by VUJE a.s. in Modra - Harmonia (Slovakia) from 20 to 22 April 2010. Altogether 12 participants from 8 member organizations were present and 9 papers were presented (8 of them in written form). The objectives of the WG A meeting are issues connected with spectral calculations and few-group library preparation, their accuracy and validation. Presentations were devoted to some aspects of transport and diffusion calculations and to the benchmark dealing with the WWER-1000 core periphery power tilt. Tamas Parko (co-authors Istvan Pos and Sandor Patai Szabo) presented 'Application of Discontinuity factors in C-PORCA 7 code', Radoslav Zajac (co-authors Petr Darilek and Vladimir Necas) spoke about 'Fast Reactor Nodalisation in HELIOS Code', Gabriel Farkas presented 'Calculation of Spatial Weighting Functions of Ex-Core Neutron Detectors for WWER-440 Using Monte Carlo Approach', and Daniel Sprinzl (co-authors Vaclav Krysl, Pavel Mikolas and Jiri Svarny) provided the definition of a benchmark in ''MIDICORE' WWER-1000 core periphery power tilt benchmark proposal'. (Author)

  19. Benchmarks for Uncertainty Analysis in Modelling (UAM) for the Design, Operation and Safety Analysis of LWRs - Volume I: Specification and Support Data for Neutronics Cases (Phase I)

    International Nuclear Information System (INIS)

    The objective of the OECD LWR UAM activity is to establish an internationally accepted benchmark framework to compare, assess and further develop different uncertainty analysis methods associated with the design, operation and safety of LWRs. As a result, the LWR UAM benchmark will help to address current nuclear power generation industry and regulation needs and issues related to the practical implementation of risk-informed regulation. The realistic evaluation of consequences must be made with best-estimate coupled codes, but to be meaningful, such results should be supplemented by an uncertainty analysis. The use of coupled codes allows us to avoid unnecessary penalties due to incoherent approximations in the traditional decoupled calculations, and to obtain a more accurate evaluation of margins regarding licensing limits. This becomes important for licensing power upgrades, improved fuel assembly and control rod designs, higher burn-up and other issues related to operating LWRs, as well as to the new Generation 3+ designs being licensed now (ESBWR, AP-1000, EPR-1600, etc.). Establishing an internationally accepted LWR UAM benchmark framework offers the possibility to accelerate the licensing process when using best-estimate methods. The proposed technical approach is to establish a benchmark for uncertainty analysis in best-estimate modelling and coupled multi-physics and multi-scale LWR analysis, using as a basis a series of well-defined problems with complete sets of input specifications and reference experimental data. The objective is to determine the uncertainty in LWR system calculations at all stages of coupled reactor physics/thermal hydraulics calculations. The full chain of uncertainty propagation from basic data, engineering uncertainties, across different scales (multi-scale), and physics phenomena (multi-physics) will be tested on a number of benchmark exercises for which experimental data are available and for which the power plant details have been

  20. WLUP benchmarks

    International Nuclear Information System (INIS)

    The IAEA-WIMS Library Update Project (WLUP) is in its final stage. The final library will be released in 2002. It is the result of research and development by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. The organization, naming conventions, contents and documentation of the WLUP benchmarks are presented, together with an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE for analysis and plotting of results is described and some examples are given. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)

  1. Neutron Reference Benchmark Field Specifications: ACRR Polyethylene-Lead-Graphite (PLG) Bucket Environment (ACRR-PLG-CC-32-CL).

    Energy Technology Data Exchange (ETDEWEB)

    Vega, Richard Manuel [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Parma, Edward J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Griffin, Patrick J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Vehar, David W. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the Polyethylene-Lead-Graphite (PLG) bucket, reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 37 integral dosimetry measurements in the neutron field are reported.

  2. Benchmarking in the Semantic Web

    OpenAIRE

    García-Castro, Raúl; Gómez-Pérez, A.

    2009-01-01

    The Semantic Web technology needs to be thoroughly evaluated for providing objective results and obtaining massive improvement in its quality; thus, the transfer of this technology from research to industry will speed up. This chapter presents software benchmarking, a process that aims to improve the Semantic Web technology and to find the best practices. The chapter also describes a specific software benchmarking methodology and shows how this methodology has been used to benchmark the inter...

  3. Efeitos do estado e especificidade do treinamento aeróbio na relação %VO2max versus %FCmax durante o ciclismo Effects of the state and specificity of aerobic training on the %VO2max versus %HRmax ratio during cycling

    OpenAIRE

    Fabrizio Caputo; Camila Coelho Greco; Benedito Sérgio Denadai

    2005-01-01

    OBJECTIVE: To determine the effects of aerobic training status and specificity on the relationship between the percentage of maximal oxygen uptake (%VO2max) and the percentage of maximal heart rate (%HRmax) during incremental exercise performed on a cycle ergometer. METHODS: Seven runners, 9 cyclists, 11 triathletes and 12 sedentary subjects, all male and apparently healthy, performed an incremental test to exhaustion on the cycle ergometer. Linear regressions between %VO2max ...

  4. AER working group D on WWER safety analysis - report of the 2008 meeting

    International Nuclear Information System (INIS)

    The AER Working Group D on WWER reactor safety analysis held its seventeenth meeting in Garching, Germany during the period 31 March-01 April 2008. The meeting was hosted by GRS Garching. Altogether 19 participants attended the meeting of Working Group D, 16 from AER member organizations and 3 guests from non-member organizations. (Author)

  5. Signal detection in FDA AERS database using Dirichlet process.

    Science.gov (United States)

    Hu, Na; Huang, Lan; Tiwari, Ram C

    2015-08-30

    In the last two decades, data mining methods for signal detection have been developed for drug safety surveillance, using large post-market safety data. Several of these methods assume that the number of reports for each drug-adverse event combination is a Poisson random variable with mean proportional to the unknown reporting rate of the drug-adverse event pair. Here, a Bayesian method based on the Poisson-Dirichlet process (DP) model is proposed for signal detection from large databases, such as the Food and Drug Administration's Adverse Event Reporting System (AERS) database. Instead of using a parametric distribution as a common prior for the reporting rates, as is the case with existing Bayesian or empirical Bayesian methods, a nonparametric prior, namely the DP, is used. The precision parameter and the baseline distribution of the DP, which characterize the process, are modeled hierarchically. The performance of the Poisson-DP model is compared with some other models through an intensive simulation study using Bayesian model selection and frequentist performance characteristics such as type-I error, false discovery rate, sensitivity, and power. For illustration, the proposed model and its extension to address a large amount of zero counts are used to analyze statin drugs for signals using the 2006-2011 AERS data. PMID:25924820
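
    The generative side of such a Poisson-Dirichlet-process model can be sketched compactly. The code below is an illustrative simulation only: the DP precision parameter and baseline distribution are fixed rather than modelled hierarchically as in the paper, and the drug-adverse-event data are synthetic, not AERS counts.

        # Generative sketch of a Poisson-DP model for drug-adverse-event counts:
        # reporting rates are drawn from a Dirichlet process (stick-breaking
        # construction) and observed counts are Poisson with mean E_i * lambda_i.
        import numpy as np

        rng = np.random.default_rng(0)

        def stick_breaking(alpha, base_draw, n_atoms=200):
            """Truncated stick-breaking draw from DP(alpha, G0)."""
            betas = rng.beta(1.0, alpha, size=n_atoms)
            remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
            weights = betas * remaining
            return weights, base_draw(n_atoms)

        alpha = 2.0                                                # DP precision parameter
        base = lambda n: rng.gamma(shape=1.0, scale=1.0, size=n)   # baseline distribution G0
        weights, atoms = stick_breaking(alpha, base)

        n_pairs = 1000                                   # drug-adverse-event combinations
        E = rng.uniform(0.5, 20.0, size=n_pairs)         # expected counts under no signal
        lam = rng.choice(atoms, size=n_pairs, p=weights / weights.sum())
        counts = rng.poisson(E * lam)                    # observed report counts

        # pairs with lambda well above 1 are the "signals" a fitted model should flag
        print("simulated signal fraction:", np.mean(lam > 2.0))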

  6. Benchmark exercise

    International Nuclear Information System (INIS)

    The motivation to conduct this benchmark exercise, a summary of the results, and a discussion of and conclusions from the intercomparison are given in Section 5.2. This section contains further details of the results of the calculations and intercomparisons, illustrated by tables and figures, but avoiding repetition of Section 5.2 as far as possible. (author)

  7. Vitamin B12 regulates photosystem gene expression via the CrtJ antirepressor AerR in Rhodobacter capsulatus

    OpenAIRE

    Cheng, Zhuo; Li, Keran; Hammad, Loubna A.; Karty, Jonathan A.; Bauer, Carl E.

    2014-01-01

    The tetrapyrroles heme, bacteriochlorophyll and cobalamin (B12) exhibit a complex interrelationship regarding their synthesis. In this study, we demonstrate that AerR functions as an antirepressor of the tetrapyrrole regulator CrtJ. We show that purified AerR contains B12 that is bound to a conserved histidine (His145) in AerR. The interaction of AerR to CrtJ was further demonstrated in vitro by pull down experiments using AerR as bait and quantified using microscale thermophoresis. DNase I D...

  8. Discussion forum on electron beam instruments AERE Harwell

    International Nuclear Information System (INIS)

    The purpose of this catalogue is to provide a source of information on the equipment available at AERE Harwell to the nuclear and non-nuclear scientist. The original aim, to provide data on electron/proton beam instruments, has been revised to include optical devices and ancillary preparatory equipment. The intention is to enable prospective users to have a contact who can provide further detailed information, although it must be recognised that work on certain projects completely fills the time available. This publication, first issued in January 1975, has been updated to August 1980, and it is intended that it should form part of a similar publication incorporating details of similar equipment available throughout the UKAEA. (author)

  9. CFD Simulation of Thermal-Hydraulic Benchmark V1000CT-2 Using ANSYS CFX

    OpenAIRE

    Thomas Höhne

    2009-01-01

    Plant measured data from VVER-1000 coolant mixing experiments were used within the OECD/NEA and AER coupled code benchmarks for light water reactors to test and validate computational fluid dynamic (CFD) codes. The task is to compare the various calculations with measured data, using specified boundary conditions and core power distributions. The experiments, which are provided for CFD validation, include single loop cooling down or heating-up by disturbing the heat transfer in the steam gene...

  10. SU-E-I-32: Benchmarking Head CT Doses: A Pooled Vs. Protocol Specific Analysis of Radiation Doses in Adult Head CT Examinations

    International Nuclear Information System (INIS)

    Purpose: The aim of this study was to collect CT dose index data from adult head exams to establish benchmarks based on either: (a) values pooled from all head exams or (b) values for specific protocols. One part of this was to investigate differences in scan frequency and CT dose index data for inpatients versus outpatients. Methods: We collected CT dose index data (CTDIvol) from adult head CT examinations performed at our medical facilities from Jan 1st to Dec 31st, 2014. Four of these scanners were used for inpatients; the other five were used for outpatients. All scanners used Tube Current Modulation. We used X-ray dose management software to mine dose index data and evaluate CTDIvol for 15807 inpatients and 4263 outpatients undergoing Routine Brain, Sinus, Facial/Mandible, Temporal Bone, CTA Brain and CTA Brain-Neck protocols, and combined across all protocols. Results: For inpatients, Routine Brain series represented 84% of total scans performed. For outpatients, Sinus scans represented the largest fraction (36%). The CTDIvol (mean ± SD) across all head protocols was 39 ± 30 mGy (min-max: 3.3–540 mGy). The CTDIvol for Routine Brain was 51 ± 6.2 mGy (min-max: 36–84 mGy). The values for Sinus were 24 ± 3.2 mGy (min-max: 13–44 mGy) and for Facial/Mandible were 22 ± 4.3 mGy (min-max: 14–46 mGy). The mean CTDIvol for inpatients and outpatients was similar across protocols with one exception (CTA Brain-Neck). Conclusion: There is substantial dose variation when results from all protocols are pooled together; this is primarily a function of the differences in technical factors of the protocols themselves. When protocols are analyzed separately, there is much less variability. While analyzing pooled data affords some utility, reviewing protocols segregated by clinical indication provides greater opportunity for optimization and establishing useful benchmarks
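
    The pooled versus protocol-specific comparison amounts to computing the same dose-index summary statistics over two different groupings of the exam records. A minimal sketch with invented CTDIvol values (not the study data) is:

        # Pooled vs. protocol-specific CTDIvol benchmarks from per-exam records.
        import statistics
        from collections import defaultdict

        exams = [                      # (protocol, CTDIvol in mGy) - placeholder records
            ("Routine Brain", 52.1), ("Routine Brain", 49.8), ("Routine Brain", 55.0),
            ("Sinus", 23.5), ("Sinus", 26.0),
            ("Facial/Mandible", 21.2), ("Facial/Mandible", 24.1),
        ]

        pooled = [dose for _, dose in exams]
        print(f"Pooled: mean {statistics.mean(pooled):.1f} mGy, SD {statistics.stdev(pooled):.1f} mGy")

        by_protocol = defaultdict(list)
        for protocol, dose in exams:
            by_protocol[protocol].append(dose)

        for protocol, doses in by_protocol.items():
            spread = statistics.stdev(doses) if len(doses) > 1 else 0.0
            print(f"{protocol}: mean {statistics.mean(doses):.1f} mGy, SD {spread:.1f} mGy (n={len(doses)})")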

  11. SU-E-I-32: Benchmarking Head CT Doses: A Pooled Vs. Protocol Specific Analysis of Radiation Doses in Adult Head CT Examinations

    Energy Technology Data Exchange (ETDEWEB)

    Fujii, K [Graduate School of Medicine, Nagoya University, Nagoya, JP (Japan); UCLA School of Medicine, Los Angeles, CA (United States); Bostani, M; Cagnon, C; McNitt-Gray, M [UCLA School of Medicine, Los Angeles, CA (United States)

    2015-06-15

    Purpose: The aim of this study was to collect CT dose index data from adult head exams to establish benchmarks based on either: (a) values pooled from all head exams or (b) values for specific protocols. One part of this was to investigate differences in scan frequency and CT dose index data for inpatients versus outpatients. Methods: We collected CT dose index data (CTDIvol) from adult head CT examinations performed at our medical facilities from Jan 1st to Dec 31st, 2014. Four of these scanners were used for inpatients; the other five were used for outpatients. All scanners used Tube Current Modulation. We used X-ray dose management software to mine dose index data and evaluate CTDIvol for 15807 inpatients and 4263 outpatients undergoing Routine Brain, Sinus, Facial/Mandible, Temporal Bone, CTA Brain and CTA Brain-Neck protocols, and combined across all protocols. Results: For inpatients, Routine Brain series represented 84% of total scans performed. For outpatients, Sinus scans represented the largest fraction (36%). The CTDIvol (mean ± SD) across all head protocols was 39 ± 30 mGy (min-max: 3.3–540 mGy). The CTDIvol for Routine Brain was 51 ± 6.2 mGy (min-max: 36–84 mGy). The values for Sinus were 24 ± 3.2 mGy (min-max: 13–44 mGy) and for Facial/Mandible were 22 ± 4.3 mGy (min-max: 14–46 mGy). The mean CTDIvol for inpatients and outpatients was similar across protocols with one exception (CTA Brain-Neck). Conclusion: There is substantial dose variation when results from all protocols are pooled together; this is primarily a function of the differences in technical factors of the protocols themselves. When protocols are analyzed separately, there is much less variability. While analyzing pooled data affords some utility, reviewing protocols segregated by clinical indication provides greater opportunity for optimization and establishing useful benchmarks.

  12. An ecological and economic assessment of absorption-enhanced-reforming (AER) biomass gasification

    International Nuclear Information System (INIS)

    Highlights: • Analysis of biomass gasification with the new absorption enhanced reforming technology. • Energy and mass balances for three different process configurations to produce heat, SNG and/or hydrogen. • Ecological (based on LCA) and economic (based on production costs) assessment of the technology. • Comparison of results with existing operational plants producing similar products. - Abstract: Biomass gasification with absorption enhanced reforming (AER) is a promising technology to produce a hydrogen-rich product gas that can be used to generate electricity, heat, substitute natural gas (SNG) and hydrogen (purity grade 5.0). To evaluate the production of the four products from an ecological and economic point of view, three different process configurations are considered. The plant setup involves two coupled fluidized beds: the steam gasifier and the regenerator. Subsequently the product gas can be used to operate a CHP plant (configuration one), be methanised (configuration two) or be used to produce high-quality hydrogen (configuration three). Regarding ecological criteria, the global warming potential, the acidification potential and the cumulative energy demand of the processes are calculated, based on a life-cycle assessment approach. The economic analysis is based on the levelized costs of energy generation (LCOE). The AER-based processes are compared to conventional and renewable reference processes, which they might stand to substitute. The results show that the AER processes are beneficial from an ecological point of view as they are less carbon intensive (mitigating up to 800 g CO2-eq. per kWh of electricity), require less fossil energy input (only about 0.5 kWh of fossil energy per kWh of electricity) and have an acidification potential (300–900 mg SO2-eq. per kWh of electricity) comparable to most reference processes. But the results depend heavily on the extent to which excess heat can be used to replace conventional heating processes, and hence on the exact location of the plant. The economic results
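
    The levelized cost of energy used in the economic comparison is the ratio of discounted lifetime costs to discounted lifetime energy output. A minimal sketch with placeholder plant figures (not values from the study) is:

        # Levelized cost of energy (LCOE): discounted costs / discounted energy.
        def lcoe(capex, annual_opex, annual_energy_kwh, lifetime_years, discount_rate):
            """LCOE in cost units per kWh."""
            disc_costs = capex
            disc_energy = 0.0
            for t in range(1, lifetime_years + 1):
                disc_costs += annual_opex / (1.0 + discount_rate) ** t
                disc_energy += annual_energy_kwh / (1.0 + discount_rate) ** t
            return disc_costs / disc_energy

        # 50 M capex, 2 M/yr opex, 60 GWh/yr over 20 years at a 7 % discount rate
        print(f"LCOE = {lcoe(50e6, 2e6, 60e6, 20, 0.07):.3f} per kWh")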

  13. Kvantitativ benchmark - Produktionsvirksomheder

    DEFF Research Database (Denmark)

    Sørensen, Ole H.; Andersen, Vibeke

    Report presenting the results of the quantitative benchmark of the production companies in the VIPS project.

  14. Benchmarking in Student Affairs.

    Science.gov (United States)

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  15. Status of the international criticality safety benchmark evaluation project (ICSBEP)

    International Nuclear Information System (INIS)

    Since ICNC'99, four new editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments have been published. The number of benchmark specifications in the Handbook has grown from 2157 in 1999 to 3073 in 2003, an increase of nearly 1000 specifications. These benchmarks are used to validate neutronics codes and nuclear cross-section data. Twenty evaluations representing 192 benchmark specifications were added to the Handbook in 2003. The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) is provided in this paper along with a summary of the newly added benchmark specifications that appear in the 2003 Edition of the Handbook. (author)

  16. Fusion Welding of AerMet 100 Alloy

    Energy Technology Data Exchange (ETDEWEB)

    ENGLEHART, DAVID A.; MICHAEL, JOSEPH R.; NOVOTNY, PAUL M.; ROBINO, CHARLES V.

    1999-08-01

    A database of mechanical properties for weldment fusion and heat-affected zones was established for AerMet® 100 alloy, and a study of the welding metallurgy of the alloy was conducted. The properties database was developed for a matrix of weld processes (electron beam and gas-tungsten arc), welding parameters (heat inputs) and post-weld heat treatment (PWHT) conditions. In order to ensure commercial utility and acceptance, the matrix was commensurate with commercial welding technology and practice. Second, the mechanical properties were correlated with a fundamental understanding of microstructure and microstructural evolution in this alloy. Finally, assessments of optimal weld process/PWHT combinations for efficient application of the alloy in probable service conditions were made. The database of weldment mechanical properties demonstrated that a wide range of properties can be obtained in welds in this alloy. In addition, it was demonstrated that acceptable welds, some with near base metal properties, could be produced from several different initial heat treatments. This capability provides a means for defining process parameters and PWHTs to achieve appropriate properties for different applications, and provides useful flexibility in design and manufacturing. The database also indicated that an important region in welds is the softened region which develops in the heat-affected zone (HAZ), and analysis within the welding metallurgy studies indicated that the development of this region is governed by a complex interaction of precipitate overaging and austenite formation. Models and experimental data were therefore developed to describe overaging and austenite formation during thermal cycling. These models and experimental data can be applied to essentially any thermal cycle, and provide a basis for predicting the evolution of microstructure and properties during thermal processing.

  17. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    Order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable...... tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly ‘sustainable transport......’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly policies are not directly comparable across space and context. For these reasons attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which...

  18. Benchmarking and Performance Measurement.

    Science.gov (United States)

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  19. An introduction to benchmarking in healthcare.

    Science.gov (United States)

    Benson, H R

    1994-01-01

    Benchmarking--the process of establishing a standard of excellence and comparing a business function or activity, a product, or an enterprise as a whole with that standard--will be used increasingly by healthcare institutions to reduce expenses and simultaneously improve product and service quality. As a component of total quality management, benchmarking is a continuous process by which an organization can measure and compare its own processes with those of organizations that are leaders in a particular area. Benchmarking should be viewed as a part of quality management programs, not as a replacement. There are four kinds of benchmarking: internal, competitive, functional and generic. With internal benchmarking, functions within an organization are compared with each other. Competitive benchmarking partners do business in the same market and provide a direct comparison of products or services. Functional and generic benchmarking are performed with organizations which may have a specific similar function, such as payroll or purchasing, but which otherwise are in a different business. Benchmarking must be a team process because the outcome will involve changing current practices, with effects felt throughout the organization. The team should include members who have subject knowledge; communications and computer proficiency; skills as facilitators and outside contacts; and sponsorship of senior management. Benchmarking requires quantitative measurement of the subject. The process or activity that you are attempting to benchmark will determine the types of measurements used. Benchmarking metrics usually can be classified in one of four categories: productivity, quality, time and cost-related. PMID:10139084

  20. Hydrogen transport and embrittlement in 300 M and AerMet100 ultra high strength steels

    International Nuclear Information System (INIS)

    This paper describes how hydrogen transport affects the severity of hydrogen embrittlement in 300 M and AerMet100 ultra high strength steels. Slow strain rate tests were carried out on specimens coated with electrodeposited cadmium and aluminium-based SermeTel 1140/962. Hydrogen diffusivities were measured using two-cell permeation and galvanostatic charging methods and values of 8.0 × 10⁻⁸ and 1.0 × 10⁻⁹ cm² s⁻¹ were obtained for 300 M and AerMet100, respectively. A two-dimensional diffusion model was used to predict the hydrogen distributions in the SSR specimens at the time of failure. The superior embrittlement resistance of AerMet100 was attributed to reverted austenite forming around martensite laths during tempering.
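
    The hydrogen distributions referred to above come from solving Fick's second law over the specimen cross-section. The sketch below is a generic explicit finite-difference solution, not the authors' two-dimensional model: geometry, charging time and boundary concentration are assumed for illustration, with the diffusivity set to the order of the 300 M value quoted.

        # Explicit finite-difference solution of Fick's second law in 2-D
        # (illustrative geometry and boundary conditions).
        import numpy as np

        D = 8.0e-8                         # cm^2/s, order of the 300 M diffusivity above
        nx = ny = 41
        h = 0.01                           # cm grid spacing (0.4 cm x 0.4 cm cross-section)
        dt = 0.2 * h**2 / (4 * D)          # stable explicit time step
        c = np.zeros((ny, nx))             # normalised hydrogen concentration
        c_surface = 1.0                    # fixed concentration on the charged surfaces
        c[0, :] = c[-1, :] = c[:, 0] = c[:, -1] = c_surface

        steps = int(24 * 3600.0 / dt)      # simulate 24 h of charging
        for _ in range(steps):
            lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
                   np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / h**2
            c = c + D * dt * lap
            c[0, :] = c[-1, :] = c[:, 0] = c[:, -1] = c_surface   # re-impose boundaries

        print("centre concentration after 24 h:", round(float(c[ny // 2, nx // 2]), 4))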

  1. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.

  2. Dynamic Yield Strength and Spall Strength Determination for AerMet 100 Steels

    International Nuclear Information System (INIS)

    Well-controlled impact studies have been conducted on ''as-received'' and heat-treated AerMet 100 steel alloy samples to determine their dynamic material properties. In particular, a gas gun and time-resolved laser interferometry have been used to measure the fine structure in the particle velocity profiles resulting from symmetric plate impact. Impact velocities ranged from 0.40 km/s to 0.90 km/s. These experiments have allowed us to estimate the dynamic yield and spall strengths of the ''as-received'' and heat-treated AerMet 100 steel

  3. Hydrogen transport and embrittlement in 300 M and AerMet100 ultra high strength steels

    OpenAIRE

    Figueroa-Gordon, Douglas J.; Robinson, M. J.

    2010-01-01

    This paper describes how hydrogen transport affects the severity of hydrogen embrittlement in 300 M and AerMet100 ultra high strength steels. Slow strain rate tests were carried out on specimens coated with electrodeposited cadmium and aluminium-based SermeTel 1140/962. Hydrogen diffusivities were measured using two-cell permeation and galvanostatic charging methods and values of 8.0 × 10⁻⁸ and 1.0 × 10⁻⁹ cm² s⁻¹ were obtained for 300 M and AerMet100, respectively. A two-dim...

  4. A massa gorda de risco afeta a capacidade aeróbia de jovens adolescentes

    Directory of Open Access Journals (Sweden)

    Luís Massuça

    2013-12-01

    Full Text Available OBJECTIVE: To study sex differences and the effects of age and fat mass on the aerobic capacity of young adolescents. METHODS: The 621 secondary school students participating in the study (14 to 17 years; female: n = 329, age 15.84 ± 0.92 years; male: n = 292, age 15.82 ± 0.87 years) were assessed in two categories: morphology (height, weight and % fat mass - %FM) and physical fitness (aerobic capacity). Anthropometric measurements were taken according to the protocol described by Marfell-Jones, and %FM was calculated by bioimpedance. Aerobic capacity was assessed with the PACER aerobic shuttle-run test, and relative VO2max was calculated using the Léger equation. The assessment results were classified according to the normative values of the FITNESSGRAM® reference tables. The statistical techniques used were: 1) calculation of frequencies; 2) Student's t-test for independent samples; and 3) two-way ANOVA followed by the Bonferroni HSD post-hoc test. RESULTS: 1) there are significant differences between sexes regarding %FM and VO2max; 2) during adolescence, VO2max stabilizes in boys and declines in girls; 3) regardless of sex, %FM class and chronological age have a significant effect on aerobic capacity; and 4) in young adolescents with at-risk %FM, reducing %FM to healthy levels appears to improve aerobic capacity. CONCLUSION: The impact of %FM on aerobic capacity reinforces the importance of school physical education in promoting cardiovascular health.

  5. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in...

  6. Closed-Loop Neuromorphic Benchmarks

    Science.gov (United States)

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820

  7. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other.The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  8. Research Reactor Benchmarks

    International Nuclear Information System (INIS)

    A criticality benchmark experiment performed at the Jozef Stefan Institute TRIGA Mark II research reactor is described. This experiment and its evaluation are given as examples of benchmark experiments at research reactors. For this reason the differences and possible problems compared to other benchmark experiments are particularly emphasized. General guidelines for performing criticality benchmarks in research reactors are given. The criticality benchmark experiment was performed in a normal operating reactor core using commercially available fresh 20% enriched fuel elements containing 12 wt% uranium in uranium-zirconium hydride fuel material. Experimental conditions to minimize experimental errors and to enhance computer modeling accuracy are described. Uncertainties in multiplication factor due to fuel composition and geometry data are analyzed by sensitivity analysis. The simplifications in the benchmark model compared to the actual geometry are evaluated. Sample benchmark calculations with the MCNP and KENO Monte Carlo codes are given

  9. 42 CFR 422.258 - Calculation of benchmarks.

    Science.gov (United States)

    2010-10-01

    42 Public Health 3 (2010-10-01): Calculation of benchmarks, § 422.258 ... and Plan Approval § 422.258 Calculation of benchmarks. (a) The term “MA area-specific non-drug monthly... the plan bids. (c) Calculation of MA regional non-drug benchmark amount. CMS calculates the...

  10. Neutron Reference Benchmark Field Specification: ACRR 44 Inch Lead-Boron (LB44) Bucket Environment (ACRR-LB44-CC-32-CL).

    Energy Technology Data Exchange (ETDEWEB)

    Vega, Richard Manuel [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Parma, Edward J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Griffin, Patrick J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Vehar, David W. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the 44 inch Lead-Boron (LB44) bucket, reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.

  11. Benchmark calculation with improved VVER-440/213 RPV CFD model

    International Nuclear Information System (INIS)

    A detailed RPV model of WWER-440/213 type reactors was developed at BME NTI in recent years. The model contains the main structural elements such as the inlet and outlet nozzles, guide baffles of the hydro-accumulators, alignment drifts, perforated plates, the brake- and guide-tube chamber and a simplified core. ANSYS software (ICEM 12.0 and CFX 12.0) was used for the meshing and simulations. With the new vessel model a series of parameter studies was performed considering turbulence models, discretization schemes and modeling methods. The main steady-state results were presented at the last AER Symposium in Varna. The model is suitable for different transient calculations as well. The purpose of the suggested new benchmark (the seventh dynamic AER benchmark) is to investigate the reactor dynamic effects of coolant mixing in the WWER-440/213 reactor vessel and to compare the different codes. The task of this benchmark is to investigate the start-up of the sixth main coolant pump. The computation for this transient was carried out with the ATHLET/BIPRVVER code at the Kurchatov Institute and was repeated with ANSYS CFX 12.0 at our Institute. (Authors)

  12. Articulated Entity Relationship (AER) Diagram for Complete Automation of Relational Database Normalization

    OpenAIRE

    P. S. Dhabe; M. S. Patwardhan; Asavari A. Deshpande; M.L. Dhore; B.V. Barbadekar; H. K. Abhyankar

    2010-01-01

    In this paper an Articulated Entity Relationship (AER) diagram is proposed, which is an extension of the Entity Relationship (ER) diagram to accommodate the Functional Dependency (FD) information as its integral part for complete automation of normalization. In current relational databases (RDBMS) automation of normalization by the top-down approach is possible using an ER diagram as an input, provided the FD information is available independently, through user interaction. Such automation we cal...

  13. Articulated Entity Relationship (AER) Diagram for Complete Automation of Relational Database Normalization

    Directory of Open Access Journals (Sweden)

    P. S. Dhabe

    2010-05-01

    Full Text Available In this paper an Articulated Entity Relationship (AER) diagram is proposed, which is an extension of the Entity Relationship (ER) diagram that accommodates the Functional Dependency (FD) information as its integral part for complete automation of normalization. In current relational databases (RDBMS), automation of normalization by the top-down approach is possible using an ER diagram as input, provided the FD information is made available independently, through user interaction. Such automation we call partial and conditional automation. To avoid this user interaction, there is a strong need to accommodate FD information as an element of the ER diagram itself. Moreover, ER diagrams are not designed by taking into account the requirements of normalization; however, for better automation of normalization it must be an integral part of the conceptual design (ER diagram). The prime motivation behind this paper is to design a system that needs only the proposed AER diagram as its sole input and normalizes the database up to a given normal form in one go. This would allow a greater amount of automation than the current approach. Such automation we call total and unconditional automation, which is better and complete in the true sense. As the proposed AER diagram is designed by taking into account the normalization process, normalization up to Boyce-Codd Normal Form (BCNF) becomes an integral part of the conceptual design. An additional advantage of the AER diagram is that any modifications (addition, deletion or updating of attributes) made to the AER diagram will automatically be reflected in its FD information. Thus the description of the schema and the FD information are guaranteed to be consistent. This cannot be assured in the current approach using ER diagrams, as the schema and FD information are provided to the system at two different times, separately.
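
    The normalization step that such a tool automates once the FDs travel with the diagram can be illustrated with the standard attribute-closure test for BCNF. The relation and dependencies below are a made-up textbook example, not taken from the paper.

        # Attribute closure and BCNF violation check over a set of functional dependencies.
        def closure(attrs, fds):
            """Closure of a set of attributes under the given FDs."""
            result = set(attrs)
            changed = True
            while changed:
                changed = False
                for lhs, rhs in fds:
                    if lhs <= result and not rhs <= result:
                        result |= rhs
                        changed = True
            return result

        def bcnf_violations(relation, fds):
            """Non-trivial FDs X -> Y whose left side X is not a superkey."""
            return [(lhs, rhs) for lhs, rhs in fds
                    if closure(lhs, fds) != relation and not rhs <= lhs]

        R = {"student", "course", "instructor"}
        fds = [({"student", "course"}, {"instructor"}),
               ({"instructor"}, {"course"})]

        print(bcnf_violations(R, fds))     # the instructor -> course FD violates BCNF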

  14. On real-time AER 2-D convolutions hardware for neuromorphic spike-based cortical processing

    OpenAIRE

    Serrano-Gotarredona, Rafael; Serrano-Gotarredona, Teresa; Acosta, Antonio José; Serrano-Gotarredona, Clara; Perez-Carrasco, J. A.; Linares-Barranco, Bernabé; Linares-Barranco, Alejandro; Jimenez-Moreno, Gabriel; Civit-Balcells, Antón

    2008-01-01

    In this paper, a chip that performs real-time image convolutions with programmable kernels of arbitrary shape is presented. The chip is a first experimental prototype of reduced size to validate the implemented circuits and system level techniques. The convolution processing is based on the address-event-representation (AER) technique, which is a spike-based biologically inspired image and video representation technique that favors communication bandwidth for pixels with more information. As ...
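
    The event-driven convolution principle can be sketched in software: each incoming address event adds the kernel around its address to an array of integrators, and any integrator crossing threshold emits an output event and resets. This is only an illustration of the principle with assumed sizes and values, not the chip's mixed-signal implementation.

        # Event-driven (address-event) 2-D convolution sketch.
        import numpy as np

        H, W = 32, 32
        kernel = np.array([[0.0, 0.5, 0.0],
                           [0.5, 1.0, 0.5],
                           [0.0, 0.5, 0.0]])
        k = kernel.shape[0] // 2
        state = np.zeros((H, W))           # integrator ("membrane") array
        threshold = 2.0

        def process_event(x, y):
            """Integrate one input spike and return the output spikes it triggers."""
            x0, x1 = max(0, x - k), min(H, x + k + 1)
            y0, y1 = max(0, y - k), min(W, y + k + 1)
            state[x0:x1, y0:y1] += kernel[k - (x - x0): k + (x1 - x),
                                          k - (y - y0): k + (y1 - y)]
            fired = np.argwhere(state >= threshold)
            state[state >= threshold] = 0.0            # reset integrators that fired
            return [(int(i), int(j)) for i, j in fired]

        out = []
        for event in [(5, 5), (5, 6), (5, 5), (6, 5)]:  # a short input spike train
            out.extend(process_event(*event))
        print("output spikes:", out)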

  15. The Conic Benchmark Format

    DEFF Research Database (Denmark)

    Friberg, Henrik A.

    This document constitutes the technical reference manual of the Conic Benchmark Format with file extension .cbf or .CBF. It unifies linear, second-order cone (also known as conic quadratic) and semidefinite optimization with mixed-integer variables. The format has been designed with benchmark libraries...... in mind, and therefore focuses on compact and easily parsable representations. The problem structure is separated from the problem data, and the format moreover facilitates benchmarking of hot-start capability through sequences of changes....

  16. Bayesian Benchmark Dose Analysis

    OpenAIRE

    Fang, Qijun; Piegorsch, Walter W.; Barnes, Katherine Y.

    2014-01-01

    An important objective in environmental risk assessment is estimation of minimum exposure levels, called Benchmark Doses (BMDs) that induce a pre-specified Benchmark Response (BMR) in a target population. Established inferential approaches for BMD analysis typically involve one-sided, frequentist confidence limits, leading in practice to what are called Benchmark Dose Lower Limits (BMDLs). Appeal to Bayesian modeling and credible limits for building BMDLs is far less developed, however. Indee...
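
    The benchmark-dose idea itself is easy to state in code: given a fitted dose-response model, the BMD is the dose whose extra risk over background equals the pre-specified BMR. The sketch below uses an invented logistic model with fixed parameters, not the Bayesian analysis discussed in the paper.

        # Benchmark dose from a dose-response model via the extra-risk definition.
        import math
        from scipy.optimize import brentq

        def p_response(dose, a=-3.0, b=0.8):
            """Logistic dose-response model P(response | dose); a, b are assumed."""
            return 1.0 / (1.0 + math.exp(-(a + b * dose)))

        def extra_risk(dose):
            p0 = p_response(0.0)
            return (p_response(dose) - p0) / (1.0 - p0)

        BMR = 0.10                                     # 10 % extra risk
        bmd = brentq(lambda d: extra_risk(d) - BMR, 1e-9, 100.0)
        print(f"BMD at BMR = {BMR:.0%}: {bmd:.2f} dose units")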

  17. Risk Management with Benchmarking

    OpenAIRE

    Suleyman Basak; Alex Shapiro; Lucie Teplá

    2005-01-01

    Portfolio theory must address the fact that, in reality, portfolio managers are evaluated relative to a benchmark, and therefore adopt risk management practices to account for the benchmark performance. We capture this risk management consideration by allowing a prespecified shortfall from a target benchmark-linked return, consistent with growing interest in such practice. In a dynamic setting, we demonstrate how a risk-averse portfolio manager optimally under- or overperforms a target benchm...

  18. Aeroelastic Benchmark Experiments Project

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to conduct canonical aeroelastic benchmark experiments. These experiments will augment existing sources for aeroelastic data in the...

  19. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems

  20. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as...... important; (2) will, that activists and issue entrepreneurs will carry the message forward; and (3) expertise, that benchmarks created can be defended as accurate representations of what is happening on the issue of concern. We contrast two types of benchmarking cycles where salience, will, and expertise...

  1. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    Energy Technology Data Exchange (ETDEWEB)

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  2. ICSBEP Benchmarks For Nuclear Data Applications

    Science.gov (United States)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  3. Benchmark af erhvervsuddannelserne

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    In this working paper we discuss how the Danish vocational schools can be benchmarked, and we present the results of a number of calculation models. Benchmarking the vocational schools is conceptually complicated. The schools offer a wide range of different programmes, which makes it difficult to...

  4. Thermal Performance Benchmarking (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  5. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  6. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  7. The General Concept of Benchmarking and Its Application in Higher Education in Europe

    Science.gov (United States)

    Nazarko, Joanicjusz; Kuzmicz, Katarzyna Anna; Szubzda-Prutis, Elzbieta; Urban, Joanna

    2009-01-01

    The purposes of this paper are twofold: a presentation of the theoretical basis of benchmarking and a discussion on practical benchmarking applications. Benchmarking is also analyzed as a productivity accelerator. The authors study benchmarking usage in the private and public sectors with due consideration of the specificities of the two areas.…

  8. A performance geodynamo benchmark

    Science.gov (United States)

    Matsui, H.; Heien, E. M.

    2014-12-01

    In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. However, to approach the parameter regime of the Earth's outer core, we need a massively parallel computational environment for extremely large spatial resolutions. Local methods are expected to be more suitable for massively parallel computation because they need less data communication than the spherical harmonics expansion, but only a few groups have reported results of the dynamo benchmark using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty of treating magnetic boundary conditions with local methods. On the other hand, some numerical dynamo models using the spherical harmonics expansion have performed successfully with thousands of processes. We perform benchmark tests to assess various numerical methods for the next generation of geodynamo simulations. The purpose of the present benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare as many numerical methods as possible, we consider the model with the insulated magnetic boundary by Christensen et al. (2001) and with the pseudo vacuum magnetic boundary, because the pseudo vacuum boundaries are easier to implement with local methods than the magnetic insulated boundaries. We consider two kinds of benchmarks, the so-called accuracy benchmark and performance benchmark, and report here the results of the performance benchmark. We run the participating dynamo models under the same computational environment (XSEDE TACC Stampede) and investigate their computational performance. To simplify the problem, we choose the same model and parameter regime as the accuracy benchmark test, but perform the simulations with much finer spatial resolutions to investigate computational capability.

  9. Uncertainties in modelling Mt. Pinatubo eruption with 2-D AER model and CCM SOCOL

    Science.gov (United States)

    Kenzelmann, P.; Weisenstein, D.; Peter, T.; Luo, B. P.; Rozanov, E.; Fueglistaler, S.; Thomason, L. W.

    2009-04-01

    Large volcanic eruptions may introduce a strong forcing on climate and challenge the skills of climate models. In addition to the short-term attenuation of solar light by ashes, the formation of stratospheric sulphate aerosols, due to volcanic sulphur dioxide injection into the lower stratosphere, may lead to a significant enhancement of the global albedo. The sulphate aerosols have a residence time of about 2 years. As a consequence of the enhanced sulphate aerosol concentration, both the stratospheric chemistry and dynamics are strongly affected. Due to absorption of longwave and near-infrared radiation, the temperature in the lower stratosphere increases. So far, chemistry climate models overestimate this warming [Eyring et al. 2006]. We present an extensive validation of extinction measurements and model runs of the eruption of Mt. Pinatubo in 1991. Even though the Mt. Pinatubo eruption has been the best quantified volcanic eruption of this magnitude, the measurements show considerable uncertainties. For instance, the total amount of sulphur emitted to the stratosphere ranges from 5-12 Mt sulphur [e.g. Guo et al. 2004, McCormick, 1992]. The largest uncertainties are in the specification of the main aerosol cloud. SAGE II, for instance, could not measure the peak of the aerosol extinction for about 1.5 years, because optical termination was reached. The gap-filling of the SAGE II data [Thomason and Peter, 2006] using lidar measurements underestimates the total extinctions in the tropics for the first half year after the eruption by 30% compared to AVHRR [Russell et al. 1992]. The same applies to the optical dataset described by Stenchikov et al. [1998]. We compare these extinction data derived from measurements with extinctions derived from AER 2D aerosol model calculations [Weisenstein et al., 2007]. Full microphysical calculations with injections of 14, 17, 20 and 26 Mt SO2 in the lower stratosphere were performed. The optical aerosol properties derived from SAGE II...

  10. Benchmarking expert system tools

    Science.gov (United States)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  11. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red...
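
    A minimal sketch of the first-tier screening logic described above is given below: measured media concentrations are compared against NOAEL-based benchmarks, and any analyte exceeding its benchmark is retained as a COPC. The analytes and benchmark values are placeholders, not values taken from the report.

        # Hypothetical NOAEL-based benchmarks and measured concentrations, both in mg/L.
        noael_benchmarks = {"cadmium": 0.005, "zinc": 0.8, "ddt": 0.0002}
        measured = {"cadmium": 0.012, "zinc": 0.3, "ddt": 0.0009}

        # Retain any analyte whose measured concentration exceeds its benchmark.
        copcs = [analyte for analyte, conc in measured.items()
                 if conc > noael_benchmarks[analyte]]
        print("Retained as contaminants of potential concern:", copcs)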

  12. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Shielding benchmark problems prepared by Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by Shielding Laboratory in Japan Atomic Energy Research Institute. Fourteen shielding benchmark problems are presented newly in addition to twenty-one problems proposed already, for evaluating the calculational algorithm and accuracy of computer codes based on discrete ordinates method and Monte Carlo method and for evaluating the nuclear data used in codes. The present benchmark problems are principally for investigating the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  13. A polishing hybrid AER/UF membrane process for the treatment of a high DOC content surface water.

    Science.gov (United States)

    Humbert, H; Gallard, H; Croué, J-P

    2012-03-15

    The efficacy of a combined AER/UF (Anion Exchange Resin/Ultrafiltration) process for the polishing treatment of a high DOC (Dissolved Organic Carbon) content (>8 mgC/L) surface water was investigated at lab-scale using a strong base AER. Both resin dose and bead size had a significant impact on the kinetic removal of DOC for short contact times. These treatment conditions were applied in combination with UF membrane filtration on water previously treated by coagulation-flocculation (i.e. 3 mgC/L). A more severe fouling was observed for each filtration run in the presence of AER. This fouling was shown to be mainly reversible and caused by the progressive attrition of the AER through the centrifugal pump, leading to the production of resin particles below 50 μm in diameter. More importantly, the presence of AER significantly lowered the irreversible fouling (loss of permeability recorded after backwash) and reduced the DOC content of the clarified water to 1.8 mgC/L (40% removal rate), a concentration that remained almost constant throughout the experiment. PMID:22200260

  14. Microbiota aeróbia conjuntival nas conjuntivites adenovirais Ocular flora in adenoviral conjunctivitis

    OpenAIRE

    Eliane Mayumi Nakano; Denise de Freitas; Maria Cecília Zorat Yu; Lênio Souza Alvarenga; Ana Luisa Hofling- Lima

    2002-01-01

    Objectives: To study the aerobic conjunctival microbiota in patients with a clinical picture of acute viral conjunctivitis. Method: Thirty patients aged between 18 and 40 years with adenoviral conjunctivitis and 30 patients without the disease underwent collection of conjunctival material for culture. The patients with adenoviral conjunctivitis were examined within 3 days of the onset of symptoms. Cultures were performed on blood agar and chocolate agar media. Patients...

  15. Sistema bio - inspirado basado en AER aplicado a automoción

    OpenAIRE

    González Blanco, Manuel

    2012-01-01

    VULCANO (Ref: TEC2009-10639-C04-04) In this project we worked on the design and implementation of a bio-inspired, event-based system and its adaptation to commercial cameras, avoiding one of the main problems of this type of system. The system was then applied to an automotive environment through the use of a highly immersive simulator. In this way the performance and suitability of AER-based systems for measuring...

  16. AERE contracts with DoE on the treatment and disposal of intermediate level wastes

    International Nuclear Information System (INIS)

    This document reports work carried out in 1983/84 under 10 contracts between DoE and AERE on the treatment and disposal of intermediate level wastes. Individual summaries are provided for each contract report within the document, under the headings: comparative evaluation of α and βγ irradiated medium level waste forms; modelling and characterisation of intermediate level waste forms based on polymers; optimisation of processing parameters for polymer and bitumen modified cements; ceramic waste forms; radionuclide release during leaching; ion exchange processes; electrical processes for the treatment of medium active liquid wastes; fast reactor fuel element cladding; dissolver residues; flowsheeting/systems study. (U.K.)

  17. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  18. GeodeticBenchmark_GEOMON

    Data.gov (United States)

    Vermont Center for Geographic Information — The GeodeticBenchmark_GEOMON data layer consists of geodetic control monuments (points) that have a known position or spatial reference. The locations of these...

  19. DOE Commercial Building Benchmark Models: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Torcelini, P.; Deru, M.; Griffith, B.; Benne, K.; Halverson, M.; Winiarski, D.; Crawley, D. B.

    2008-07-01

    To provide a consistent baseline of comparison and save time conducting such simulations, the U.S. Department of Energy (DOE) has developed a set of standard benchmark building models. This paper will provide an executive summary overview of these benchmark buildings, and how they can save building analysts valuable time. Fully documented and implemented to use with the EnergyPlus energy simulation program, the benchmark models are publicly available and new versions will be created to maintain compatibility with new releases of EnergyPlus. The benchmark buildings will form the basis for research on specific building technologies, energy code development, appliance standards, and measurement of progress toward DOE energy goals. Having a common starting point allows us to better share and compare research results and move forward to make more energy efficient buildings.

  20. Benchmarking in University Toolbox

    OpenAIRE

    Katarzyna Kuźmicz

    2015-01-01

    In the face of global competition and rising challenges that higher education institutions (HEIs) meet, it is imperative to increase innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess institution’s competitive position and learn from the best in order to improve. The primary purpose of the paper is to present in-depth analysis of benchmarking application in HEIs worldwide. The study involves indica...

  1. Accelerator shielding benchmark problems

    International Nuclear Information System (INIS)

    Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author)

  2. Benchmarking conflict resolution algorithms

    OpenAIRE

    Vanaret, Charlie; Gianazza, David; Durand, Nicolas; Gotteland, Jean-Baptiste

    2012-01-01

    Applying a benchmarking approach to conflict resolution problems is a hard task, as the analytical form of the constraints is not simple. This is especially the case when using realistic dynamics and models, considering accelerating aircraft that may follow flight paths that are not direct. Currently, there is a lack of common problems and data that would allow researchers to compare the performances of several conflict resolution algorithms. The present paper introduces a benchmarking approa...

  3. Accelerator shielding benchmark problems

    Energy Technology Data Exchange (ETDEWEB)

    Hirayama, H.; Ban, S.; Nakamura, T. [and others]

    1993-01-01

    Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author).

  4. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  5. Capacidad aeróbica, felicidad y satisfacción con la vida en adolescentes españoles

    OpenAIRE

    Juan A. Jiménez-Moral; María L. Zagalaz Sánchez; David Molero; Manuel Pulido-Martos; Ruiz, Jonatan R.

    2013-01-01

    Objectives: To analyse the association between aerobic capacity, subjective happiness and life satisfaction in adolescents. Method: 388 adolescents (207 girls) aged 12-18 years participated. Aerobic capacity was assessed with the 20-metre shuttle run test. Subjective happiness and life satisfaction were assessed with the Subjective Happiness Scale and the Satisfaction With Life Scale, respectively. The adolescents' weight and height were measured and...

  6. Dose assessment for CEGB users of the Kodak type 2 film used in the NRPB/AERE holder

    International Nuclear Information System (INIS)

    Some work, complementary to that of the National Radiological Protection Board (NRPB) and the Atomic Energy Research Establishment (AERE), has been done at Berkeley Nuclear Laboratories (BNL) on the response of the Kodak Type 2 film in the NRPB/AERE holder. Initial results indicate that the combination forms a satisfactory dosemeter. Comparison between the BNL and NRPB results shows differences which appear to be due to the fact that the angle of incidence was 90° for the former and 35° for the latter. Some conclusions are drawn on dosimetry but in general, for CEGB users, no substantial changes from existing procedures are required. (author)

  7. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  8. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy, in other words its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs: prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth. Throughout...

  9. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    More than 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The benchmark calculations reported here are part of an ongoing multiyear, multiperson effort to benchmark version 4 of the MCNP code. MCNP is a general-purpose, three-dimensional, continuous-energy Monte Carlo neutron, photon, and electron transport code. It is used around the world for many applications including aerospace, oil-well logging, physics experiments, criticality safety, reactor analysis, medical imaging, defense applications, accelerator design, radiation hardening, radiation shielding, health physics, fusion research, and education. The first phase of the benchmark project consisted of analytic and photon problems. The second phase consists of the ENDF/B-V neutron problems reported in this paper and in more detail in the comprehensive report. A cooperative program being carried out at General Electric, San Jose, consists of light water reactor benchmark problems. A subsequent phase focusing on electron problems is planned.

  10. Shielding Benchmark Computational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-09-17

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for the development of more accurate cross-section libraries, computer code development of radiation transport modeling, and building accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper the benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC).

  11. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...

  12. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area...

  13. Remote Sensing Segmentation Benchmark

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal; Scarpa, G.

    Piscataway, NJ : IEEE Press, 2012, s. 1-4. ISBN 978-1-4673-4960-4. [IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS). Tsukuba Science City (JP), 11.11.2012] R&D Projects: GA ČR GAP103/11/0335; GA ČR GA102/08/0593 Grant ostatní: CESNET(CZ) 409/2011 Keywords : remote sensing * segmentation * benchmark Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2013/RO/mikes-remote sensing segmentation benchmark.pdf

  14. Benchmark calculations of power distribution within assemblies

    International Nuclear Information System (INIS)

    The main objective of this Benchmark is to compare different techniques for fine flux prediction based upon coarse mesh diffusion or transport calculations. We proposed 5 "core" configurations including different assembly types (17 x 17 pins, "uranium", "absorber" or "MOX" assemblies), with different boundary conditions. The specification required results in terms of reactivity, pin-by-pin fluxes and production rate distributions. The proposal for these Benchmark calculations was made by J.C. LEFEBVRE, J. MONDOT, J.P. WEST and the specification (with nuclear data, assembly types, core configurations for 2D geometry and results presentation) was distributed to correspondents of the OECD Nuclear Energy Agency. 11 countries and 19 companies answered the exercise proposed by this Benchmark. Heterogeneous calculations and homogeneous calculations were made. Various methods were used to produce the results: diffusion (finite differences, nodal...), transport (Pij, Sn, Monte Carlo). This report presents an analysis and intercomparisons of all the results received.

  15. Standard Procedure for Dose Assessment using the film holder NRPB/AERE and the film AGFA Monitoring 2/10

    International Nuclear Information System (INIS)

    This paper describes the calculation method to assess dose and energy using the film holder from NRPB/AERE and the film Agfa Monitoring 2/10. It also includes all the steps, from preparing the standard curve and fitting the calibration curve to dose assessment, and describes the filtration of the film holder and the form of the calibration curve.

  16. Hypersensitivity reactions to anticancer agents: Data mining of the public version of the FDA adverse event reporting system, AERS

    Directory of Open Access Journals (Sweden)

    Sakaeda Toshiyuki

    2011-10-01

    Background: Previously, adverse event reports (AERs) submitted to the US Food and Drug Administration (FDA) database were reviewed to confirm platinum agent-associated hypersensitivity reactions. The present study was performed to confirm whether the database could suggest the hypersensitivity reactions caused by the anticancer agents paclitaxel, docetaxel, procarbazine, asparaginase, teniposide, and etoposide. Methods: After a revision of arbitrary drug names and the deletion of duplicated submissions, AERs involving the candidate agents were analyzed. The National Cancer Institute Common Terminology Criteria for Adverse Events version 4.0 was applied to evaluate the susceptibility to hypersensitivity reactions, and standardized official pharmacovigilance tools were used for quantitative detection of signals, i.e., drug-associated adverse events, including the proportional reporting ratio, the reporting odds ratio, the information component given by a Bayesian confidence propagation neural network, and the empirical Bayes geometric mean. Results: Based on 1,644,220 AERs from 2004 to 2009, signals were detected for paclitaxel-associated mild, severe, and lethal hypersensitivity reactions, and for docetaxel-associated lethal reactions. However, the total number of adverse events occurring with procarbazine, asparaginase, teniposide, or etoposide was not large enough to detect signals. Conclusions: The FDA's adverse event reporting system, AERS, and the data mining methods used herein are useful for confirming drug-associated adverse events, but the number of co-occurrences is an important factor in signal detection.
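
    Two of the disproportionality measures named above, the proportional reporting ratio (PRR) and the reporting odds ratio (ROR), can be sketched from a 2x2 contingency table of report counts, as below. The counts are invented for illustration and are not taken from the AERS analysis.

        import math

        a = 120      # reports with the drug and the event of interest
        b = 4880     # reports with the drug, other events
        c = 900      # reports with other drugs and the event
        d = 994100   # remaining reports

        prr = (a / (a + b)) / (c / (c + d))
        ror = (a * d) / (b * c)

        # Approximate 95% confidence interval for the ROR on the log scale.
        se = math.sqrt(1/a + 1/b + 1/c + 1/d)
        ci = (math.exp(math.log(ror) - 1.96 * se), math.exp(math.log(ror) + 1.96 * se))

        print(f"PRR = {prr:.1f}, ROR = {ror:.1f} (95% CI {ci[0]:.1f}-{ci[1]:.1f})")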

  17. Benchmarking the World's Best

    Science.gov (United States)

    Tucker, Marc S.

    2012-01-01

    A century ago, the United States was a world leader in industrial benchmarking. However, after World War II, once no one could compete with the U.S., it became complacent. Many industrialized countries now have higher student achievement and more equitable and efficient education systems. A higher proportion of young people in their workforces…

  18. Benchmarks: WICHE Region 2012

    Science.gov (United States)

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  19. Benchmark problem proposal

    International Nuclear Information System (INIS)

    The meeting of the Radiation Energy Spectra Unfolding Workshop organized by the Radiation Shielding Information Center is discussed. The plans of the unfolding code benchmarking effort to establish methods of standardization for both the few-channel neutron and many-channel gamma-ray and neutron spectroscopy problems are presented.

  20. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means revealed performance (how well the firm performs in the actual market environment) given the basic characteristics of the firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality and work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for management to continuously improve their firm's efficiency and effectiveness, and the need for managers to know which are the success factors and the competitiveness determinants, determine what performance measures are most critical in assessing their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent due to operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.

  1. CCF benchmark test

    International Nuclear Information System (INIS)

    A benchmark test on common cause failures (CCF) was performed, giving interested institutions in Germany the opportunity of demonstrating and justifying their interpretations of events and their methods and models for analyzing CCF. The participants of this benchmark test belonged to expert and consultant organisations and to industrial institutions. The task for the benchmark test was to analyze two typical groups of motor-operated valves in German nuclear power plants. The benchmark test was carried out in two steps. In the first step the participants were to assess in a qualitative way some 200 event reports on isolation valves. They then were to establish, quantitatively, the reliability parameters for the CCF in the two groups of motor-operated valves using their own methods and their own calculation models. In a second step the reliability parameters were to be recalculated on the basis of a common reference of well-defined events, chosen from all given events, in order to analyze the influence of the calculation models on the reliability parameters. (orig.)

  2. Benchmarking Public Procurement 2016

    OpenAIRE

    World Bank Group

    2015-01-01

    Benchmarking Public Procurement 2016 Report aims to develop actionable indicators which will help countries identify and monitor policies and regulations that impact how private sector companies do business with the government. The project builds on the Doing Business methodology and was initiated at the request of the G20 Anti-Corruption Working Group.

  3. NAS Parallel Benchmarks Results

    Science.gov (United States)

    Subhash, Saini; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a pencil-and-paper fashion, i.e., the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: Cray C90, Cray T90 and Fujitsu VPP500; (b) Highly Parallel Processors: Cray T3D, IBM SP2 and IBM SP-TN2 (Thin Nodes 2); (c) Symmetric Multiprocessing Processors: Convex Exemplar SPP1000, Cray J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL. We also present sustained performance per dollar for the Class B LU, SP and BT benchmarks, and mention NAS's future plans for the NPB.

  4. Sp6 and Sp8 transcription factors control AER formation and dorsal-ventral patterning in limb development.

    Directory of Open Access Journals (Sweden)

    Endika Haro

    2014-08-01

    The formation and maintenance of the apical ectodermal ridge (AER) is critical for the outgrowth and patterning of the vertebrate limb. The induction of the AER is a complex process that relies on integrated interactions among the Fgf, Wnt, and Bmp signaling pathways that operate within the ectoderm and between the ectoderm and the mesoderm of the early limb bud. The transcription factors Sp6 and Sp8 are expressed in the limb ectoderm and AER during limb development. Sp6 mutant mice display a mild syndactyly phenotype while Sp8 mutants exhibit severe limb truncations. Both mutants show defects in AER maturation and in dorsal-ventral patterning. To gain further insights into the role Sp6 and Sp8 play in limb development, we have produced mice lacking both Sp6 and Sp8 activity in the limb ectoderm. Remarkably, the elimination or significant reduction in Sp6;Sp8 gene dosage leads to tetra-amelia; initial budding occurs, but neither Fgf8 nor En1 is activated. Mutants bearing a single functional allele of Sp8 (Sp6-/-;Sp8+/-) exhibit a split-hand/foot malformation phenotype with double dorsal digit tips, probably due to an irregular and immature AER that is not maintained in the center of the bud and to the abnormal expansion of Wnt7a expression to the ventral ectoderm. Our data are compatible with Sp6 and Sp8 working together and in a dose-dependent manner as indispensable mediators of Wnt/β-catenin and Bmp signaling in the limb ectoderm. We suggest that the function of these factors links proximal-distal and dorsal-ventral patterning.

  5. A protein–DNA docking benchmark

    NARCIS (Netherlands)

    van Dijk, M.; Bonvin, A.M.J.J.

    2008-01-01

    We present a protein–DNA docking benchmark containing 47 unbound–unbound test cases of which 13 are classified as easy, 22 as intermediate and 12 as difficult cases. The latter shows considerable structural rearrangement upon complex formation. DNA-specific modifications such as flipped out bases an

  6. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, taking four different applications of benchmarking as a starting point. The regulation of utility companies will be discussed, after which...

  7. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Test platform: SuperMicro X7DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with NVIDIA graphics card (see Chapter 5 for full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of the time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed order of magnitude speedup over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.
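
    The I/O-versus-CPU split reported above can be estimated for a simple streaming workload with a sketch like the following, which separates wall-clock time spent reading from time spent computing. The file path, chunk size and the hash used as a stand-in for CPU work are placeholders, not part of the study.

        import time, hashlib

        def io_fraction(path, chunk_bytes=64 * 1024 * 1024):
            read_s = compute_s = 0.0
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                while True:
                    t0 = time.perf_counter()
                    chunk = f.read(chunk_bytes)
                    read_s += time.perf_counter() - t0
                    if not chunk:
                        break
                    t0 = time.perf_counter()
                    digest.update(chunk)            # stand-in for the CPU-side work
                    compute_s += time.perf_counter() - t0
            total = read_s + compute_s
            return read_s / total if total else 0.0

        # Example use: print(f"I/O fraction: {io_fraction('/data/benchmark.bin'):.0%}")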

  8. Benchmarks for multicomponent diffusion and electrochemical migration

    DEFF Research Database (Denmark)

    Rasouli, Pejman; Steefel, Carl I.; Mayer, K. Ulrich; Rolle, Massimo

    2015-01-01

    Though often not considered in solute transport problems, electromigration can strongly affect mass transport processes. The number of reactive transport models that consider electromigration has been growing in recent years, but a direct model intercomparison that specifically focuses on the role of electromigration has not been published to date. This contribution provides a set of three benchmark problems that demonstrate the effect of electric coupling during multicomponent diffusion and electrochemical migration and at the same time facilitate the intercomparison of solutions from existing reactive transport codes. The first benchmark focuses on the 1D transient diffusion of HNO3 (pH = 4) in a NaCl solution into a fixed concentration reservoir, also containing NaCl but with lower HNO3 concentrations (pH = 6). The second benchmark describes the 1D steady-state migration of the sodium isotope 22Na triggered by...
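
    The electric coupling referred to above can be sketched with the Nernst-Planck flux law and a zero-current constraint: the diffusion potential that keeps the solution charge-balanced retards the fast ions and accelerates the slow ones. The species, diffusivities, concentrations and gradients below are illustrative values only, not part of the benchmark specification.

        import numpy as np

        F, R, T = 96485.0, 8.314, 298.15                 # C/mol, J/(mol K), K
        z = np.array([1, -1, 1, -1])                     # charges: Na+, Cl-, H+, NO3-
        D = np.array([1.33, 2.03, 9.31, 1.90]) * 1e-9    # free-solution diffusivities, m^2/s
        c = np.array([10.0, 10.1, 0.1, 0.0])             # local concentrations, mol/m^3
        dcdx = np.array([-1.0, 0.0, -1.0, 0.0])          # concentration gradients, mol/m^4

        # Diffusion-potential gradient from the null-current condition sum_i z_i * J_i = 0.
        dphidx = -np.sum(z * D * dcdx) / ((F / (R * T)) * np.sum(z**2 * D * c))

        # Nernst-Planck fluxes: Fickian term plus electromigration term.
        J = -D * dcdx - (z * D * c * F / (R * T)) * dphidx
        print("potential gradient [V/m]:", dphidx)
        print("fluxes [mol m^-2 s^-1]:", J)
        print("net current [A/m^2] (should be ~0):", F * np.sum(z * J))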

  9. SP2Bench: A SPARQL Performance Benchmark

    CERN Document Server

    Schmidt, Michael; Lausen, Georg; Pinkel, Christoph

    2008-01-01

    Recently, the SPARQL query language for RDF has reached the W3C recommendation status. In response to this emerging standard, the database community is currently exploring efficient storage techniques for RDF data and evaluation strategies for SPARQL queries. A meaningful analysis and comparison of these approaches necessitates a comprehensive and universal benchmark platform. To this end, we have developed SP2Bench, a publicly available, language-specific SPARQL performance benchmark. SP2Bench is settled in the DBLP scenario and comprises both a data generator for creating arbitrarily large DBLP-like documents and a set of carefully designed benchmark queries. The generated documents mirror key characteristics and social-world distributions encountered in the original DBLP data set, while the queries implement meaningful requests on top of this data, covering a variety of SPARQL operator constellations and RDF access patterns. As a proof of concept, we apply SP2Bench to existing engines and discuss ...

  10. Texture Segmentation Benchmark

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Mikeš, Stanislav

    Los Alamitos : IEEE Press, 2008, s. 2933-2936. ISBN 978-1-4244-2174-9. [19th International Conference on Pattern Recognition. Tampa (US), 07.12.2008-11.12.2008] R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572; GA ČR GA102/07/1594; GA ČR GA102/08/0593 Grant ostatní: GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords : texture segmentation * image segmentation * benchmark Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2008/RO/haindl-texture segmentation benchmark.pdf

  11. Radiography benchmark 2014

    International Nuclear Information System (INIS)

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogenous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed

  12. Introduction to 'International Handbook of Criticality Safety Benchmark Experiments'

    International Nuclear Information System (INIS)

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) is now an official activity of the Organization for Economic Cooperation and Development-Nuclear Energy Agency (OECD-NEA). 'International Handbook of Criticality Safety Benchmark Experiments' was prepared and is updated year by year by the working group of the project. This handbook contains criticality safety benchmark specifications that have been derived from experiments that were performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculation techniques used. The author briefly introduces the informative handbook and would like to encourage Japanese engineers who are in charge of nuclear criticality safety to use the handbook. (author)

  13. Standard Guide for Benchmark Testing of Light Water Reactor Calculations

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...

  14. Benchmarking of LSTM Networks

    OpenAIRE

    Breuel, Thomas M.

    2015-01-01

    LSTM (Long Short-Term Memory) recurrent neural networks have been highly successful in a number of application areas. This technical report describes the use of the MNIST and UW3 databases for benchmarking LSTM networks and explores the effect of different architectural and hyperparameter choices on performance. Significant findings include: (1) LSTM performance depends smoothly on learning rates, (2) batching and momentum has no significant effect on performance, (3) softmax training outperfor...

  15. Texture Fidelity Benchmark

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Kudělka, Miloš

    Los Alamitos, USA: IEEE Computer Society CPS, 2014. ISBN 978-1-4799-7971-4. [International Workshop on Computational Intelligence for Multimedia Understanding 2014 (IWCIM). Paris (FR), 01.11.2014-02.11.2014] R&D Projects: GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords : Benchmark testing * fidelity criteria * texture Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2014/RO/haindl-0439654.pdf

  16. Cloud benchmarking for performance

    OpenAIRE

    Varghese, Blesson; Akgun, Ozgur; Miguel, Ian; Thai, Long; Barker, Adam

    2014-01-01

    How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. The above question is addressed by proposing a six-step benchmarking methodology in which a user provides a set of four weights that indicate how important each of the following groups (memory, processor, computation and storage) is to the application...
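
    The four-weight idea described above can be sketched as a weighted ranking of normalised benchmark scores per VM type, as below. The VM names, scores and weights are invented for illustration and are not results from the paper.

        weights = {"memory": 0.4, "processor": 0.3, "computation": 0.2, "storage": 0.1}

        # Normalised group scores in [0, 1]; higher is better.
        vm_scores = {
            "vm.small":  {"memory": 0.35, "processor": 0.40, "computation": 0.30, "storage": 0.70},
            "vm.medium": {"memory": 0.60, "processor": 0.65, "computation": 0.55, "storage": 0.60},
            "vm.large":  {"memory": 0.90, "processor": 0.85, "computation": 0.80, "storage": 0.50},
        }

        def weighted_score(scores):
            return sum(weights[group] * scores[group] for group in weights)

        # Rank candidate VMs by the user-weighted combination of their group scores.
        for name, scores in sorted(vm_scores.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
            print(f"{name}: {weighted_score(scores):.2f}")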

  17. Specifications

    International Nuclear Information System (INIS)

    As part of the Danish RERTR Program, three fuel elements with LEU U3O8-Al fuel and three fuel elements with LEU U3Si2-Al fuel were manufactured by NUKEM for irradiation testing in the DR-3 reactor at the Risoe National Laboratory in Denmark. The specifications for the elements with U3O8-Al fuel are presented here as an illustration only. Specifications for the elements with U3Si2-Al fuel were very similar. In this example, materials, material numbers, documents numbers, and drawing numbers specific to a single fabricator have been deleted. (author)

  18. The NAS Parallel Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental...

  19. Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Ganguly, Srirupa; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in data centers. This guide is primarily intended for personnel who have responsibility for managing energy use in existing data centers - including facilities managers, energy managers, and their engineering consultants. Additionally, data center designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior data center benchmarking studies supported by the California Energy Commission. Much of the benchmarking data are drawn from the LBNL data center benchmarking database that was developed from these studies. Additional benchmark data were obtained from engineering experts including facility designers and energy managers. This guide also builds on recent research supported by the U.S. Department of Energy's Save Energy Now program.
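
    As an example of the kind of whole-facility metric the guide covers, the widely used power usage effectiveness (PUE) ratio can be computed from two annual energy totals, as sketched below. The figures are invented for illustration, and the metric is used here only as a representative example rather than as the guide's specific prescription.

        annual_it_energy_kwh = 4_200_000       # servers, storage, network equipment
        annual_total_energy_kwh = 7_100_000    # IT load plus cooling, UPS losses, lighting

        pue = annual_total_energy_kwh / annual_it_energy_kwh
        print(f"PUE = {pue:.2f}")              # lower is better; 1.0 would mean zero overhead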

  20. Final evaluation of the CB3+burnup credit benchmark addition

    International Nuclear Information System (INIS)

    In 1996 a series of benchmarks focused on the application of burnup credit in WWER spent fuel management systems was launched by L. Markova (1). The four phases of the proposed benchmark series corresponded to the phases of the Burnup Credit Criticality Benchmark organised by the OECD/NEA. These phases, referred to as the CB1, CB2, CB3 and CB4 benchmarks, were designed to investigate the main features of burnup credit in WWER spent fuel management systems. In the CB1 step, the multiplication factor of an infinite array of spent fuel rods was calculated taking the burnup, cooling time and different groups of nuclides as parameters. The fuel composition was given in the benchmark specification (Authors)

  1. Entropy-based benchmarking methods

    OpenAIRE

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth preservation method of Causey and Trager (1981) may violate this principle, while its requirements are explicitly taken into account in the proposed entropy-based benchmarking methods. Our illustrati...
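
    The movement-preservation idea discussed above can be sketched with an additive, first-difference (Denton-type) benchmarking step: a quarterly indicator is adjusted so that its annual sums hit the benchmarks while its period-to-period movements change as little as possible. The series and benchmark values are invented for illustration; the entropy-based methods proposed in the paper are not reproduced here.

        import numpy as np

        p = np.array([97.0, 99.0, 102.0, 101.0, 103.0, 106.0, 108.0, 107.0])  # quarterly indicator
        b = np.array([404.0, 430.0])                                          # annual benchmarks

        T, m = len(p), len(b)
        A = np.zeros((m, T))
        A[0, :4] = 1.0     # year 1 = quarters 1-4
        A[1, 4:] = 1.0     # year 2 = quarters 5-8

        D = np.diff(np.eye(T), axis=0)      # first-difference operator
        Q = 2.0 * D.T @ D                   # objective: minimise ||D(x - p)||^2

        # Equality-constrained least squares solved through its KKT system.
        kkt = np.block([[Q, A.T], [A, np.zeros((m, m))]])
        rhs = np.concatenate([Q @ p, b])
        x = np.linalg.solve(kkt, rhs)[:T]

        print("benchmarked series:", np.round(x, 2))
        print("annual sums:", A @ x)        # matches b by construction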

  2. Selecting benchmarks for reactor calculations

    OpenAIRE

    Alhassan, Erwin; Sjöstrand, Henrik; Duan, Junfeng; Helgesson, Petter; Pomp, Stephan; Österlund, Michael; Rochman, Dimitri; Koning, Arjan J.

    2014-01-01

    Criticality, reactor physics, fusion and shielding benchmarks are expected to play important roles in GENIV design, safety analysis and in the validation of analytical tools used to design these reactors. For existing reactor technology, benchmarks are used to validate computer codes and test nuclear data libraries. However, the selection of these benchmarks is usually done by visual inspection, which is dependent on the expertise and the experience of the user, thereby resulting in a user...

  3. A Bio-Inspired AER Temporal Tri-Color Differentiator Pixel Array.

    Science.gov (United States)

    Farian, Łukasz; Leñero-Bardallo, Juan Antonio; Häfliger, Philipp

    2015-10-01

    This article investigates the potential of a bio-inspired vision sensor with pixels that detect transients between three primary colors. The in-pixel color processing is inspired by the retinal color opponency found in mammalian retinas. Color transitions in a pixel are represented by voltage spikes, which are akin to a neuron's action potential. These spikes are conveyed off-chip by the Address Event Representation (AER) protocol. To achieve sensitivity to three different color spectra within the visual spectrum, each pixel has three stacked photodiodes at different depths in the silicon substrate. The sensor has been fabricated in the standard TSMC 90 nm CMOS technology. A post-processing method to decode events into color transitions has been proposed and implemented as a custom interface to display real-time color changes in the visual scene. Experimental results are provided. Color transitions can be detected at high speed (up to 2.7 kHz). The sensor has a dynamic range of 58 dB and a power consumption of 22.5 mW. This type of sensor can be of use in industrial, robotics, automotive and other applications where essential information is contained in transient emission shifts within the visual spectrum. PMID:26540694
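
    A post-processing step of the kind described above can be sketched by pairing consecutive address events at the same pixel, as below. The event tuple format and the channel-to-color mapping are assumptions made for illustration and do not reflect the chip's actual protocol.

        from collections import defaultdict

        CHANNELS = {0: "red", 1: "green", 2: "blue"}   # stacked-photodiode index -> notional color

        def decode_transitions(events):
            # events: iterable of (t_us, x, y, channel); returns (t_us, x, y, "from->to") tuples.
            last = defaultdict(lambda: None)
            transitions = []
            for t, x, y, ch in sorted(events):
                prev = last[(x, y)]
                if prev is not None and prev != ch:
                    transitions.append((t, x, y, f"{CHANNELS[prev]}->{CHANNELS[ch]}"))
                last[(x, y)] = ch
            return transitions

        stream = [(10, 3, 7, 0), (55, 3, 7, 2), (60, 1, 1, 1), (90, 3, 7, 0)]
        print(decode_transitions(stream))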

  4. HPC Benchmark Suite NMx Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc. (IAI) and the University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  5. Results of the fifth three-dimensional dynamic atomic energy research benchmark problem calculation

    International Nuclear Information System (INIS)

    The paper gives a brief survey of the fifth three-dimensional dynamic atomic energy research benchmark calculation results obtained with the code DYN3D/ATHLET at NRI Rez. This benchmark was defined at the seventh AER Symposium. Its initiating event is a symmetrical break of the main steam header at the end of the first fuel cycle and hot shutdown conditions with one stuck-out control rod group. The calculations were performed with the externally coupled codes ATHLET Mod.1.1 Cycle C and DYN3DH1.1/M3. The Kasseta library was used for the generation of reactor core neutronic parameters. The standard WWER-440/213 input deck of the ATHLET code was adapted for benchmark purposes and for coupling with the code DYN3D. The first part of the paper contains a brief characterization of the NPP input deck and the reactor core model. The second part shows the time dependencies of important global, fuel assembly and loop parameters.(Author)

  6. CAVIAR: A 45k neuron, 5M synapse, 12G connects/s AER hardware sensory-processing-learning-actuating system for high-speed visual object recognition and tracking

    OpenAIRE

    Linares-Barranco, Alejandro; Paz-Vicente, R.; Camuñas-Mesa, L.; Delbruck, Tobi; Jimenez-Moreno, Gabriel; Civit-Balcells, Antón; Serrano-Gotarredona, Teresa; Acosta, Antonio José; Linares-Barranco, Bernabé

    2009-01-01

    This paper describes CAVIAR, a massively parallel hardware implementation of a spike-based sensing-processing-learning-actuating system inspired by the physiology of the nervous system. CAVIAR uses the asynchronous address-event representation (AER) communication framework and was developed in the context of a European Union funded project. It has four custom mixed-signal AER chips, five custom digital AER interface components, 45k neurons (spiking cells), up to 5M synapses, performs 12G synap...

  7. Regional Competitive Intelligence: Benchmarking and Policymaking

    OpenAIRE

    Huggins, Robert

    2010-01-01

    Benchmarking exercises are enjoying growing popularity in the field of regional policy. This paper analyses the concept of regional benchmarking and its links with regional policymaking processes. I develop a typology of regional benchmarking exercises and benchmarkers, and subject the literature to a critical review. I argue that the critics of regional benchmarking fail to recognise the variety and evolu...

  8. Shielding benchmark test

    International Nuclear Information System (INIS)

    Iron data in JENDL-2 have been tested by analyzing shielding benchmark experiments for neutron transmission through an iron block, performed at KFK using a Cf-252 neutron source and at ORNL using a collimated neutron beam from a reactor. The analyses are made by a shielding analysis code system, RADHEAT-V4, developed at JAERI. The calculated results are compared with the measured data. As for the KFK experiments, the C/E values are about 1.1. For the ORNL experiments, the calculated values agree with the measured data within an accuracy of 33% for the off-center geometry. The d-t neutron transmission measurements through a carbon sphere made at LLNL are also analyzed preliminarily by using the revised JENDL data for fusion neutronics calculations. (author)

  9. Benchmarking monthly homogenization algorithms

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2011-08-01

    Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
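
    As a concrete picture of the benchmark-generation and scoring steps described above, here is a minimal Python sketch that inserts break-type inhomogeneities (break occurrence as a Poisson process, break sizes drawn from a normal distribution) into a synthetic monthly temperature series and evaluates the centered root mean square error against the homogeneous truth. All names, rates and amplitudes are illustrative assumptions, not the HOME benchmark settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def add_breaks(truth, breaks_per_year=0.2, break_sd=0.8, months_per_year=12):
    """Insert break-type inhomogeneities: break positions follow a Poisson
    process in time, break sizes are normally distributed and applied as
    step shifts to all months after each break."""
    n = len(truth)
    rate = breaks_per_year / months_per_year          # expected breaks per month
    n_breaks = rng.poisson(rate * n)
    positions = rng.integers(1, n, size=n_breaks)
    sizes = rng.normal(0.0, break_sd, size=n_breaks)
    series = truth.copy()
    for pos, size in zip(positions, sizes):
        series[pos:] += size
    return series

def centred_rmse(homogenized, truth):
    """Centred RMSE: anomalies from each series' own mean are compared,
    so a constant offset does not count as an error."""
    a = homogenized - homogenized.mean()
    b = truth - truth.mean()
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Hypothetical 50-year monthly temperature truth: seasonal cycle plus noise.
months = np.arange(50 * 12)
truth = 10 + 8 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 1.0, months.size)
inhomogeneous = add_breaks(truth)
print("CRMSE of the raw, inhomogeneous series:", round(centred_rmse(inhomogeneous, truth), 3))
```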

  10. SSI and structural benchmarks

    International Nuclear Information System (INIS)

    This paper presents the latest results of the ongoing program entitled, Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focussed on three tasks, namely, (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it has been concluded that the pore water can influence significantly the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was further extended to include cases associated with variable water table depths. In this paper, results related to cut-off depths beyond which the pore water effects can be ignored in seismic calculations, are addressed. Comprehensive numerical data are given for soil configurations typical to those encountered in nuclear plant sites. These data were generated by using a modified version of the SLAM code which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054) which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on the structural benchmarks are described

  11. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  12. Benchmark experiments for nuclear data

    International Nuclear Information System (INIS)

    Benchmark experiments offer the most direct method for validation of nuclear data. Benchmark experiments for several areas of application of nuclear data were specified by CSEWG. These experiments are surveyed and tests of recent versions of ENDF/B are presented. (U.S.)

  13. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  14. Quantum benchmarks for Gaussian states

    CERN Document Server

    Chiribella, Giulio

    2014-01-01

    Teleportation and storage of continuous variable states of light and atoms are essential building blocks for the realization of large scale quantum networks. Rigorous validation of these implementations requires identifying, and surpassing, benchmarks set by the most effective strategies attainable without the use of quantum resources. Such benchmarks have been established for special families of input states, like coherent states and particular subclasses of squeezed states. Here we solve the longstanding problem of defining quantum benchmarks for general pure Gaussian states with arbitrary phase, displacement, and squeezing, randomly sampled according to a realistic prior distribution. As a special case, we show that the fidelity benchmark for teleporting squeezed states with totally random phase and squeezing degree is 1/2, equal to the corresponding one for coherent states. We discuss the use of entangled resources to beat the benchmarks in experiments.

  15. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    was its software assurance practices, which seemed to rate well in comparison to the other organizational groups and also seemed to include a larger scope of activities. An unexpected benefit of the software benchmarking study was the identification of many opportunities for collaboration in areas including metrics, training, sharing of CMMI experiences and resources such as instructors and CMMI Lead Appraisers, and even sharing of assets such as documented processes. A further unexpected benefit of the study was the feedback on NASA practices that was received from some of the organizations interviewed. From that feedback, other potential areas where NASA could improve were highlighted, such as accuracy of software cost estimation and budgetary practices. The detailed report contains discussion of the practices noted in each of the topic areas, as well as a summary of observations and recommendations from each of the topic areas. The resulting 24 recommendations from the topic areas were then consolidated to eliminate duplication and culled into a set of 14 suggested actionable recommendations. This final set of actionable recommendations, listed below, are items that can be implemented to improve NASA's software engineering practices and to help address many of the items that were listed in the NASA top software engineering issues. 1. Develop and implement standard contract language for software procurements. 2. Advance accurate and trusted software cost estimates for both procured and in-house software and improve the capture of actual cost data to facilitate further improvements. 3. Establish a consistent set of objectives and expectations, specifically types of metrics at the Agency level, so key trends and models can be identified and used to continuously improve software processes and each software development effort. 4. Maintain the CMMI Maturity Level requirement for critical NASA projects and use CMMI to measure organizations developing software for NASA. 5

  16. Artificial Emotion Engine Benchmark Problem Based on Psychological Test Paradigm

    OpenAIRE

    Wang Yi; Wang Zhi-liang

    2013-01-01

    Most testing and evaluation of emotion models in the field of affective computing is self-evaluation aimed at an application-specific background, while research on benchmark problems for emotion models is scarce. This paper first proposes the feasibility of making psychological test paradigms part of an artificial benchmark engine, taking versatility and effectiveness as the evaluation factors for judging the engine by means of psychological test paradigms. In addition, ...

  17. Implementing the Verified Software Initiative Benchmarks using Perfect Developer

    OpenAIRE

    Xu, Yan; Monahan, Rosemary

    2012-01-01

    This paper describes research on the Perfect Developer tool and its associated programming language, Perfect. We focus on verification benchmarks that have been presented as part of the Verified Software Initiative (VSI), proposing their specification, implementation and verification in the Perfect language and the Perfect Developer tools. To the best of our knowledge this is the first attempt to meet these benchmarks using the Perfect Developer tools. Our aim is...

  18. Implementing the Verified Software Initiative Benchmarks using Perfect Developer

    OpenAIRE

    Xu, Yan

    2010-01-01

    This paper describes research on the Perfect Developer tool and its associated programming language, Perfect. We focus on seven verification benchmarks that have been presented as part of the Verified Software Initiative (VSI), proposing their specification, implementation and verification in the Perfect language and the Perfect Developer tools. To the best of our knowledge this is the first attempt to meet these benchmarks using the Perfect Developer tools and the first ful...

  19. Adverse Event Profiles of 5-Fluorouracil and Capecitabine: Data Mining of the Public Version of the FDA Adverse Event Reporting System, AERS, and Reproducibility of Clinical Observations

    Directory of Open Access Journals (Sweden)

    Kaori Kadoyama, Ikuya Miki, Takao Tamura, JB Brown, Toshiyuki Sakaeda, Yasushi Okuno

    2012-01-01

    Full Text Available Objective: The safety profiles of oral fluoropyrimidines were compared with 5-fluorouracil (5-FU) using adverse event reports (AERs) submitted to the Adverse Event Reporting System, AERS, of the US Food and Drug Administration (FDA). Methods: After a revision of arbitrary drug names and the deletion of duplicated submissions, AERs involving 5-FU and oral fluoropyrimidines were analyzed. Standardized official pharmacovigilance tools were used for the quantitative detection of signals, i.e., drug-associated adverse events, including the proportional reporting ratio, the reporting odds ratio, the information component given by a Bayesian confidence propagation neural network, and the empirical Bayes geometric mean. Results: Based on 22,017,956 co-occurrences, i.e., drug-adverse event pairs, found in 1,644,220 AERs from 2004 to 2009, it was suggested that leukopenia, neutropenia, and thrombocytopenia were more frequently accompanied by the use of 5-FU than capecitabine, whereas diarrhea, nausea, vomiting, and hand-foot syndrome were more frequently associated with capecitabine. The total number of co-occurrences was not large enough to compare tegafur, tegafur-uracil (UFT), tegafur-gimeracil-oteracil potassium (S-1), or doxifluridine to 5-FU. Conclusion: The results obtained herein were consistent with clinical observations, suggesting the usefulness of the FDA's AERS database and data mining methods used, but the number of co-occurrences is an important factor in signal detection.
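
    To make the named disproportionality measures concrete, the following minimal Python sketch computes the proportional reporting ratio (PRR) and the reporting odds ratio (ROR) from a 2x2 table of report counts. The counts are invented for illustration only and are not taken from the AERS data analyzed in the study.

```python
def prr_and_ror(a, b, c, d):
    """Disproportionality measures from a 2x2 table of report counts:
        a: reports with the drug and with the event
        b: reports with the drug, without the event
        c: reports without the drug, with the event
        d: reports without the drug, without the event
    """
    prr = (a / (a + b)) / (c / (c + d))   # proportional reporting ratio
    ror = (a * d) / (b * c)               # reporting odds ratio
    return prr, ror

# Made-up counts for one drug-event pair, for illustration only.
prr, ror = prr_and_ror(a=120, b=9880, c=450, d=98550)
print(f"PRR = {prr:.2f}, ROR = {ror:.2f}")
```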

  20. BENCHMARKING ON-LINE SERVICES INDUSTRIES

    Institute of Scientific and Technical Information of China (English)

    John HAMILTON

    2006-01-01

    The Web Quality Analyser (WQA) is a new benchmarking tool for industry. It has been extensively tested across services industries. Forty-five critical success features are presented as measures that capture the user's perception of services industry websites. This tool differs from previous tools in that it captures the information technology (IT) related driver sectors of website performance, along with the marketing-services related driver sectors. These driver sectors capture relevant structure, function and performance components. An 'on-off' switch measurement approach determines each component. Relevant component measures scale into a relative presence of the applicable feature, with a feature block delivering one of the sector drivers. Although it houses both measurable and a few subjective components, the WQA offers a proven and useful means to compare relevant websites. The WQA defines website strengths and weaknesses, thereby allowing for corrections to the website structure of the specific business. WQA benchmarking against services related business competitors delivers a position on the WQA index, facilitates specific website driver rating comparisons, and demonstrates where key competitive advantage may reside. This paper reports on the marketing-services driver sectors of this new benchmarking WQA tool.
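
    A schematic Python sketch of the 'on-off switch' scoring idea described above: binary component checks are rolled up into the relative presence of features per driver sector and then into an overall index. The feature and sector names are invented placeholders, not the actual 45 WQA features.

```python
# Schematic on-off (binary) website audit rolled up into sector scores,
# in the spirit of the WQA description above. Names are illustrative only.
site_audit = {
    "IT drivers": {
        "page loads under 3 s": True,
        "search function present": True,
        "mobile layout": False,
    },
    "marketing-services drivers": {
        "online booking": True,
        "customer feedback channel": False,
        "personalised content": False,
    },
}

def sector_scores(audit):
    """Each component is 0/1; a sector's score is the relative presence of
    its features (the fraction of switches that are 'on')."""
    return {sector: sum(flags.values()) / len(flags)
            for sector, flags in audit.items()}

scores = sector_scores(site_audit)
overall_index = sum(scores.values()) / len(scores)
print(scores, "overall index:", round(overall_index, 2))
```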

  1. Selecting benchmarks for reactor calculations

    International Nuclear Information System (INIS)

    Criticality, reactor physics, fusion and shielding benchmarks are expected to play important roles in GEN-IV design, safety analysis and in the validation of analytical tools used to design these reactors. For existing reactor technology, benchmarks are used to validate computer codes and test nuclear data libraries. However, the selection of these benchmarks is usually done by visual inspection, which depends on the expertise and experience of the user, thereby resulting in a user bias in the process. In this paper we present a method for the selection of these benchmarks for reactor applications and uncertainty reduction based on the Total Monte Carlo (TMC) method. Similarities between an application case and one or several benchmarks are quantified using the correlation coefficient. Based on the method, we also propose two approaches for reducing nuclear data uncertainty using integral benchmark experiments as an additional constraint in the TMC method: a binary accept/reject method and a method of uncertainty reduction using weights. Finally, the methods were applied to a full Lead Fast Reactor core and a set of criticality benchmarks. (author)
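
    The similarity measure described in this abstract can be illustrated with a short Python sketch: the correlation coefficient between k-eff results obtained for an application case and for a candidate benchmark with the same set of random nuclear-data files, as in the Total Monte Carlo approach. The sampled values below are synthetic stand-ins, not actual TMC output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical k-eff results from the same 500 random nuclear-data files,
# once for the application case and once for a candidate benchmark.
common = rng.normal(0.0, 0.004, 500)            # shared nuclear-data effect
keff_application = 1.00000 + common + rng.normal(0.0, 0.001, 500)
keff_benchmark   = 0.99850 + common + rng.normal(0.0, 0.002, 500)

# Similarity measure used for benchmark selection: the correlation coefficient
# between the two sets of sampled results (close to 1 -> similar sensitivity
# to the varied nuclear data, so the benchmark is relevant to the application).
r = np.corrcoef(keff_application, keff_benchmark)[0, 1]
print(f"correlation between application and benchmark: {r:.3f}")
```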

  2. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption [Translated from the Dutch] Greenpeace Netherlands asked CE Delft to design a sustainability benchmark for transport biofuels and to score the various biofuels against it. For comparison, electric vehicles, hydrogen vehicles and petrol or diesel vehicles were also included. Research and growing insight increasingly show that biomass-based transport fuels sometimes cause as many or even more greenhouse gas emissions than fossil fuels such as petrol and diesel. For Greenpeace Netherlands, CE Delft has compiled the current insights into the sustainability of fossil fuels, biofuels and electric vehicles. The effects of the fuels were assessed against three sustainability criteria, with greenhouse gas emissions weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient use.

  3. Cleanroom energy benchmarking results

    Energy Technology Data Exchange (ETDEWEB)

    Tschudi, William; Xu, Tengfang

    2001-09-01

    A utility market transformation project studied energy use and identified energy efficiency opportunities in cleanroom HVAC design and operation for fourteen cleanrooms. This paper presents the results of this work and relevant observations. Cleanroom owners and operators know that cleanrooms are energy intensive but have little information to compare their cleanroom's performance over time, or to others. Direct comparison of energy performance by traditional means, such as watts/ft², is not a good indicator with the wide range of industrial processes and cleanliness levels occurring in cleanrooms. In this project, metrics allow direct comparison of the efficiency of HVAC systems and components. Energy and flow measurements were taken to determine actual HVAC system energy efficiency. The results confirm a wide variation in operating efficiency and they identify other non-energy operating problems. Improvement opportunities were identified at each of the benchmarked facilities. Analysis of the best performing systems and components is summarized, as are areas for additional investigation.
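
    As a sketch of the kind of system-level metric the project used instead of watts/ft², the following Python snippet compares two recirculation-air systems by airflow delivered per unit of electrical power (cfm/kW). The facility names and measured values are hypothetical.

```python
# Hypothetical measurements for recirculation air systems at two cleanrooms.
# Airflow in cfm, electrical input in kW; values are illustrative only.
systems = {
    "Fab A recirculation fans": {"airflow_cfm": 400_000, "power_kw": 220.0},
    "Fab B recirculation fans": {"airflow_cfm": 350_000, "power_kw": 310.0},
}

# A flow-per-power metric (cfm/kW) lets two HVAC systems be compared directly,
# unlike W/ft2, which mixes in cleanliness class and process loads.
for name, m in systems.items():
    print(name, round(m["airflow_cfm"] / m["power_kw"], 1), "cfm/kW")
```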

  4. Extraction of pure thermal neutron beam for the proposed PGNAA facility at the TRIGA research reactor of AERE, Savar, Bangladesh

    Energy Technology Data Exchange (ETDEWEB)

    Alam, S. (Physics Dept., Jahangirnagar Univ., Savar, Dhaka (Bangladesh)); Zaman, M.A. (Physics Dept., Jahangirnagar Univ., Savar, Dhaka (Bangladesh)); Islam, S.M.A. (Physics Dept., Jahangirnagar Univ., Savar, Dhaka (Bangladesh)); Ahsan, M.H. (Inst. of Nuclear Science and Technology (INST), AERE, Savar, Dhaka (Bangladesh))

    1993-10-01

    A study on collimators and filters for the design of a spectrometer for prompt gamma neutron activation analysis (PGNAA) at one of the radial beamports of the TRIGA Mark II reactor at AERE, Savar has been carried out. On the basis of this study a collimator and a filter have been designed for the proposed PGNAA facility. Calculations have been done for measuring neutron flux at various positions of the core of the reactor using the computer code TRIGAP. Gamma dose in the core of the reactor has also been measured experimentally using TLD technique in the present work. (orig.)

  5. Extraction of pure thermal neutron beam for the proposed PGNAA facility at the TRIGA research reactor of AERE, Savar, Bangladesh

    International Nuclear Information System (INIS)

    A study on collimators and filters for the design of a spectrometer for prompt gamma neutron activation analysis (PGNAA) at one of the radial beamports of the TRIGA Mark II reactor at AERE, Savar has been carried out. On the basis of this study a collimator and a filter have been designed for the proposed PGNAA facility. Calculations have been done for measuring neutron flux at various positions of the core of the reactor using the computer code TRIGAP. Gamma dose in the core of the reactor has also been measured experimentally using TLD technique in the present work. (orig.)

  6. Extraction of pure thermal neutron beam for the proposed PGNAA facility at the TRIGA research reactor of AERE, Savar, Bangladesh

    Science.gov (United States)

    Alam, Sabina; Zaman, M. A.; Islam, S. M. A.; Ahsan, M. H.

    1993-10-01

    A study on collimators and filters for the design of a spectrometer for prompt gamma neutron activation analysis (PGNAA) at one of the radial beamports of the TRIGA Mark II reactor at AERE, Savar has been carried out. On the basis of this study a collimator and a filter have been designed for the proposed PGNAA facility. Calculations have been done for measuring neutron flux at various positions of the core of the reactor using the computer code TRIGAP. Gamma dose in the core of the reactor has also been measured experimentally using TLD technique in the present work.

  7. Summary on the activity of AERs Working Group on core monitoring (flux reconstruction, in-core measurements)

    International Nuclear Information System (INIS)

    Working Group C had a joint meeting with Group G in Balatonfüred, Hungary, 31 May - 1 June 2010. At the joint meeting 21 people participated from 10 AER member organisations of 4 countries: Russia, the Czech Republic, Slovakia and Hungary. In the 2 days of the programme 15 papers were presented, 10 of which related to the topic of Working Group C. The titles of the papers and the list of participants are attached. At the meeting the following topics were discussed: (1) Gd fuel introduction and experiences; (2) reactor physics measurement and evaluation problems; (3) code development and testing; (4) in-core surveillance system developments. (Author)

  8. Efeitos do treinamento aeróbio sobre o perfil lipídico de ratos com hipertireoidismo

    OpenAIRE

    Renata Valle Pedroso; Alexandre Konig Garcia Prado; Luiza Hermínia Gallo; Marcelo Costa Junior; Natália Oliveira Betolini; Rodrigo Augusto Dalia; Maria Alice Rostom de Mello; Eliete Luciano

    2012-01-01

    There are few studies analysing the important relationship between acute and chronic physical exercise and the metabolic alterations arising from hyperthyroidism. The aim of the present study was to analyse the effect of four weeks of aerobic training on the lipid profile of rats with experimental hyperthyroidism. Forty-five Wistar rats were used, randomly divided into four groups: Sedentary Control (CS), given saline during the experimental period and not exercis...

  9. Aves y aeropuertos: control no letal de chimangos (Milvago chimango) en un aeródromo militar de Argentina

    OpenAIRE

    MARATEO GERMÁN; GRILLI PABLO G.; SOAVE GUILLERMO E.; FERRETTI VANINA; BOUZAS NANCY M.; ALMAGRO RAMIRO

    2012-01-01

    The concentration of birds near runways has increased the risk of aircraft accidents. Several countries run bird monitoring and control programmes at airports. In Argentina there is only scattered prior work on this topic, although some incidents have been recorded. Two of them occurred at the Campo de Mayo aerodrome, where this study was carried out. There, the potentially most hazardous species is the Chimango Caracara (Milvago chimango). Our objective was to evaluate the effec...

  10. Issues in Benchmark Metric Selection

    Science.gov (United States)

    Crolotte, Alain

    It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
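
    A toy Python example of the point being made: the geometric mean damps a single outlier, so the two metrics can rank systems differently. The query times below are invented for illustration and are not TPC-D results.

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Made-up single-stream query times (seconds) for two systems.
system_a = [1.0, 1.0, 1.0, 100.0]   # one very slow query
system_b = [8.0, 8.0, 8.0, 8.0]     # uniformly mediocre

for name, times in [("A", system_a), ("B", system_b)]:
    print(name, "arithmetic:", round(arithmetic_mean(times), 2),
          "geometric:", round(geometric_mean(times), 2))
# The arithmetic mean ranks B ahead of A (8.0 vs 25.75), while the geometric
# mean ranks A ahead of B (3.16 vs 8.0) because it damps the single outlier --
# the kind of effect the TPC-D single-stream metric debate was about.
```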

  11. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One... way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent...

  12. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.;

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to... and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing...

  13. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the
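
    A minimal Python sketch of whole-building benchmarking in the spirit described above: a building's energy use intensity is placed within the distribution of a peer group. The peer data here are synthetic, not CEUS or Cal-Arch data, and the function name is an assumption for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical peer group: annual electricity use intensity (kWh/ft2/yr) for
# offices of a similar size; values are synthetic, not survey data.
peer_eui = rng.lognormal(mean=2.6, sigma=0.35, size=300)

def percentile_rank(value, peers):
    """Fraction of peer buildings that use less energy per square foot."""
    return float(np.mean(np.asarray(peers) < value))

my_building_eui = 18.0
print(f"{percentile_rank(my_building_eui, peer_eui):.0%} of peers use less energy per ft2")
```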

  14. A Global Vision over Benchmarking Process: Benchmarking Based Enterprises

    OpenAIRE

    Catalina SITNIKOV; Giurca Vasilescu, Laura

    2008-01-01

    Benchmarking uses the knowledge and the experience of others to improve the enterprise. Starting from an analysis of performance that underlines the strengths and weaknesses of the enterprise, it should be assessed what must be done in order to improve its activity. Using benchmarking techniques, an enterprise looks at how processes in the value chain are performed. The approach based on the vision “from the whole towards the parts” (a fragmented image of the enterprise’s value chain) redu...

  15. Benchmarking and energy management schemes in SMEs

    Energy Technology Data Exchange (ETDEWEB)

    Huenges Wajer, Boudewijn [SenterNovem (Netherlands); Helgerud, Hans Even [New Energy Performance AS (Norway); Lackner, Petra [Austrian Energy Agency (Austria)

    2007-07-01

    Many companies are reluctant to focus on energy management or to invest in energy efficiency measures. Nevertheless, there are many good examples proving that the right approach to implementing energy efficiency can very well be combined with the business-priorities of most companies. SMEs in particular can benefit from a facilitated European approach because they normally have a lack of resources and time to invest in energy efficiency. In the EU supported pilot project BESS, 60 SMEs from 11 European countries of the food and drink industries successfully tested a package of interactive instruments which offers such a facilitated approach. A number of pilot companies show a profit increase of 3 up to 10 %. The package includes a user-friendly and web based E-learning scheme for implementing energy management as well as a benchmarking module for company specific comparison of energy performance indicators. Moreover, it has several practical and tested tools to support the cycle of continuous improvement of energy efficiency in the company such as checklists, sector specific measure lists, templates for auditing and energy conservation plans. An important feature and also a key trigger for companies is the possibility for SMEs to benchmark anonymously their energy situation against others of the same sector. SMEs can participate in a unique web based benchmarking system to interactively benchmark in a way which fully guarantees confidentiality and safety of company data. Furthermore, the available data can contribute to a bottom-up approach to support the objectives of (national) monitoring and targeting and thereby also contributing to the EU Energy Efficiency and Energy Services Directive. A follow up project to expand the number of participating SMEs of various sectors is currently being developed.

  16. International Criticality Safety Benchmark Evaluation Project (ICSBEP) - ICSBEP 2015 Handbook

    International Nuclear Information System (INIS)

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy (DOE). The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Nuclear Energy Agency (NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculation techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirements and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross-section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span approximately 69000 pages and contain 567 evaluations with benchmark specifications for 4874 critical, near-critical or subcritical configurations, 31 criticality alarm placement/shielding configurations with multiple dose points for each, and 207 configurations that have been categorised as fundamental physics measurements that are relevant to criticality safety applications. New to the handbook are benchmark specifications for neutron activation foil and thermoluminescent dosimeter measurements performed at the SILENE critical assembly in Valduc, France as part of a joint venture in 2010 between the US DOE and the French Alternative Energies and Atomic Energy Commission (CEA). A photograph of this experiment is shown on the front cover. Experiments that are found unacceptable for use as criticality safety benchmark experiments are discussed in these

  17. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking as a... market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advances. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards... the conditions upon which the market mechanism is performing within organizations. This paper aims to contribute to research by providing more insight to the conditions for the use of external benchmarking as an element in performance management in organizations. Our study explores a particular type

  18. Benchmarking Developing Asia's Manufacturing Sector

    OpenAIRE

    Felipe, Jesus; Gemma ESTRADA

    2007-01-01

    This paper documents the transformation of developing Asia's manufacturing sector during the last three decades and benchmarks its share in GDP with respect to the international regression line by estimating a logistic regression.

  19. Water Level Superseded Benchmark Sheets

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) tidal benchmarks which have been superseded by new markers or locations....

  20. The International Criticality Safety Benchmark Evaluation Project (ICSBEP)

    International Nuclear Information System (INIS)

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organisation for Economic Cooperation and Development (OECD) - Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Yugoslavia, Kazakhstan, Israel, Spain, and Brazil are now participating. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled 'International Handbook of Evaluated Criticality Safety Benchmark Experiments.' The 2003 Edition of the Handbook contains benchmark model specifications for 3070 critical or subcritical configurations that are intended for validating computer codes that calculate effective neutron multiplication and for testing basic nuclear data. (author)

  1. Benchmarking hypercube hardware and software

    Science.gov (United States)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.

  2. Strategic Behaviour under Regulation Benchmarking

    OpenAIRE

    Jamasb, Tooraj; Nillesen, Paul; Michael G. Pollitt

    2003-01-01

    Liberalisation of generation and supply activities in the electricity sectors is often followed by regulatory reform of distribution networks. In order to improve the efficiency of distribution utilities, some regulators have adopted incentive regulation schemes that rely on performance benchmarking. Although regulation benchmarking can influence the 'regulation game', the subject has received limited attention. This paper discusses how strategic behaviour can result in inefficient behav...

  3. Implementation of Benchmarking Transportation Logistics Practices and Future Benchmarking Organizations

    International Nuclear Information System (INIS)

    The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large scale shippers and have vast experience in

  4. Benchmark calculations of sodium fast critical experiments

    International Nuclear Information System (INIS)

    The high expectations placed on fast critical experiments impose additional requirements on the reliability of the final reconstructed values obtained in experiments at critical facilities. Benchmark calculations of critical experiments are characterized by the impossibility of a complete reconstruction of the experiment and by large amounts of input data (dependent and independent) of very different reliability. They should also take into account the different sensitivity of the measured and corresponding calculated characteristics to identical changes of geometry parameters, temperature, and isotopic composition of individual materials. The calculations of critical facility experiments are performed for benchmark models generated by specific reconstruction codes, each with its own features when adjusting model parameters, and using a nuclear data library. A generated benchmark model that provides agreement between the calculated and experimental values for one or more neutronic characteristics can lead to considerable differences for other key characteristics. The sensitivity of key neutronic characteristics to the extra steel allocation in the core and to the ENDF/B nuclear data sources is examined using several calculational models of the BFS-62-3A and BFS1-97 critical assemblies. The comparative analysis of the calculated effective multiplication factor, spectral indices, sodium void reactivity, and radial fission-rate distributions leads to quite different models providing the best agreement between the calculated and experimental neutronic characteristics. This fact should be considered during the refinement of computational models and for code-verification purposes. (author)

  5. Uncertainty analysis of benchmark experiments using MCBEND

    International Nuclear Information System (INIS)

    Differences between measurement and calculation for shielding benchmark experiments can arise from uncertainties in a number of areas including nuclear data, radiation transport modelling, source specification, geometry modelling, measurement, and calculation statistics. In order to understand the significance of these differences, detailed sensitivity analysis of these various uncertainties is required. This is of particular importance when considering the requirements for nuclear data improvements aimed at providing better agreement between calculation and measurement. As part of a programme of validation activity associated with the international JEFF data project, the Monte Carlo code MCBEND has been used to analyse a range of benchmark experiments using JEF-2.2 based nuclear data together with modern dosimetry data. This paper describes detailed uncertainty analyses that have been performed for the following Winfrith material benchmark experiments: graphite, water, iron, graphite/steel and steel/water. Conclusions are reported and compared with calculations using other nuclear data libraries. In addition, the effect that nuclear data uncertainties have on the calculated results is discussed by making use of the data adjustment code DATAK. Requirements for further nuclear data evaluation arising from this work are identified. (author)

  6. Proposed Post-LEP benchmarks for supersymmetry

    International Nuclear Information System (INIS)

    We propose a new set of supersymmetric benchmark scenarios, taking into account the constraints from LEP, b →s γ, gμ - 2 and cosmology. We work in the specific context of the constrained MSSM (CMSSM) with universal soft supersymmetry-breaking masses and vanishing trilinear terms, assuming that R parity is conserved. We propose benchmark points that exemplify the different generic possibilities in this context, including focus-point models, points where coannihilation effects on the relic density are important, and points with rapid relic annihilation via direct-channel Higgs poles. We discuss the principal decays and signatures of the different classes of benchmark scenarios, and make initial estimates of the physics reaches of different accelerators, including the Tevatron collider, the LHC, and e+ e- colliders in the sub- and multi-TeV ranges. We stress the complementarity of hadron and lepton colliders, with the latter favoured for non-strongly-interacting particles and precision measurements. We mention features that could usefully be included in future versions of supersymmetric event generators. (orig.)

  7. Building a knowledge base of severe adverse drug events based on AERS reporting data using semantic web technologies.

    Science.gov (United States)

    Jiang, Guoqian; Wang, Liwei; Liu, Hongfang; Solbrig, Harold R; Chute, Christopher G

    2013-01-01

    A semantically coded knowledge base of adverse drug events (ADEs) with severity information is critical for clinical decision support systems and translational research applications. However it remains challenging to measure and identify the severity information of ADEs. The objective of the study is to develop and evaluate a semantic web based approach for building a knowledge base of severe ADEs based on the FDA Adverse Event Reporting System (AERS) reporting data. We utilized a normalized AERS reporting dataset and extracted putative drug-ADE pairs and their associated outcome codes in the domain of cardiac disorders. We validated the drug-ADE associations using ADE datasets from the Side Effect Resource (SIDER) and the UMLS. We leveraged the Common Terminology Criteria for Adverse Events (CTCAE) grading system and classified the ADEs into the CTCAE, represented in the Web Ontology Language (OWL). We identified and validated 2,444 unique drug-ADE pairs in the domain of cardiac disorders, of which 760 pairs are in Grade 5, 775 pairs in Grade 4 and 2,196 pairs in Grade 3. PMID:23920604

  8. A comprehensive test specification for pulse fission counters

    International Nuclear Information System (INIS)

    The following test specification is based on the memorandum AERE-M 728, which it now replaces. It contains a standard acceptance test procedure for the many U.K.A.E.A.-designed pulse fission counters now commercially available. This test specification may be used for any pulse fission counter provided a specification sheet as shown in Appendix 3 is supplied to the contractor, quoting this report and including specified values for the measured quantities. (author)

  9. Benchmark analysis of the TRIGA MARK II research reactor using Monte Carlo techniques

    International Nuclear Information System (INIS)

    This study deals with the neutronic analysis of the current core configuration of a 3-MW TRIGA MARK II research reactor at Atomic Energy Research Establishment (AERE), Savar, Dhaka, Bangladesh and validation of the results by benchmarking with the experimental, operational and available Final Safety Analysis Report (FSAR) values. The 3-D continuous-energy Monte Carlo code MCNP4C was used to develop a versatile and accurate full-core model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. All fresh fuel and control elements as well as the vicinity of the core were precisely described. Continuous energy cross-section data from ENDF/B-VI and ENDF/B-V and S(α,β) scattering functions from the ENDF/B-VI library were used. The consistency and accuracy of both the Monte Carlo simulation and neutron transport physics was established by benchmarking the TRIGA experiments. The effective multiplication factor, power distribution and peaking factors, neutron flux distribution, and reactivity experiments comprising control rod worths, critical rod height, excess reactivity and shutdown margin were used in the validation process. The MCNP predictions and the experimentally determined values are found to be in very good agreement, which indicates that the simulation of TRIGA reactor is treated adequately
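
    Benchmark comparisons of this kind are commonly summarized as calculated-to-experimental (C/E) ratios. The short Python sketch below shows such a summary; the numerical values are placeholders, not the actual AERE TRIGA measurements or MCNP4C results.

```python
# Calculated-to-experimental (C/E) comparison of the kind used to validate a
# full-core Monte Carlo model. The numbers are placeholders, not the actual
# AERE TRIGA measurements or MCNP results.
results = {
    "effective multiplication factor": {"calc": 1.0025, "exp": 1.0000},
    "total control rod worth ($)":     {"calc": 11.6,   "exp": 11.9},
    "excess reactivity ($)":           {"calc": 9.8,    "exp": 9.6},
}

for name, v in results.items():
    ce = v["calc"] / v["exp"]
    print(f"{name:35s} C/E = {ce:.3f}  ({(ce - 1) * 100:+.1f}%)")
```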

  10. Neutronic Analysis of the 3 MW TRIGA MARK II Research Reactor, Part II: Benchmark Analysis of TRIGA Experiments

    International Nuclear Information System (INIS)

    The three-dimensional continuous-energy Monte Carlo code MCNP4C was used to develop a versatile and accurate full-core model of the TRIGA MARK II research reactor at AERE, Savar. The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking the TRIGA experiments. Analysis of neutron flux and reactivity experiments comprising control rod worths, critical rod height, excess reactivity and shutdown margin was used in the validation process. Calculations of the fast neutron flux and of the fuel and graphite element worth distributions are also presented. Good agreement between the experiments and the MCNP calculations indicates that the simulation of the TRIGA reactor is treated adequately. (author)

  11. Benchmark Calculations of OECD/NEA Reactivity-Initiated Accidents

    International Nuclear Information System (INIS)

    The benchmark Phase I was carried out from 2011 to 2013 with a consistent set of four experiments on very similar highly irradiated fuel rods tested under different experimental conditions: low temperature, low pressure, stagnant water coolant, very short power pulse (NSRR VA-1); high temperature, medium pressure, stagnant water coolant, very short power pulse (NSRR VA-3); high temperature, low pressure, flowing sodium coolant, larger power pulse (CABRI CIP0-1); high temperature, high pressure, flowing water coolant, medium-width power pulse (CABRI CIP3-1). Based on the importance of the thermal-hydraulic aspects revealed during Phase I, the specifications of the benchmark Phase II were elaborated in 2014. The benchmark Phase II focused on a deeper understanding of the differences in modeling between the different codes. The work on the benchmark Phase II program will last until the end of 2015. The benchmark cases for RIA are simulated with the FRAPTRAN 1.5 code, in order to understand the phenomena during RIA and to check the capability of the code itself. The results for enthalpy, cladding strain and outside temperature, among the 21 parameters requested by the benchmark program, are summarized, and they seem to reasonably reflect the actual phenomena, except for those of case 6

  12. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-12-01

    Full Text Available This paper reviews the role of human resource management (HRM) which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  13. ZZ IHEAS-BENCHMARKS, High-Energy Accelerator Shielding Benchmarks

    International Nuclear Information System (INIS)

    Description of program or function: Six kinds of benchmark problems were selected for evaluating the model codes and the nuclear data for intermediate- and high-energy accelerator shielding by the Shielding Subcommittee of the Research Committee on Reactor Physics. The benchmark problems contain three kinds of neutron production data from thick targets bombarded by protons, alphas and electrons, and three kinds of shielding data for secondary neutrons and photons generated by protons. Neutron and photo-neutron reaction cross-section data are also provided for neutrons up to 500 MeV and photons up to 300 MeV, respectively

  14. BigDataBench: a Big Data Benchmark Suite from Internet Services

    OpenAIRE

    Wang, Lei; Zhan, Jianfeng; Luo, Chunjie; Zhu, Yuqing; Yang, Qiang; He, Yongqiang; Gao, Wanling; Jia, Zhen; Shi, Yingjie; Zhang, Shujie; Zheng, Chen; Lu, Gang; Zhan, Kent; Li, Xiaona; Qiu, Bizhu

    2014-01-01

    As architecture, systems, and data management communities pay greater attention to innovative big data systems and architectures, the pressure of benchmarking and evaluating these systems rises. Considering the broad use of big data systems, big data benchmarks must include diversity of data and workloads. Most of the state-of-the-art big data benchmarking efforts target evaluating specific types of applications or system software stacks, and hence they are not qualified for serving the purpo...

  15. ENDF/B-V, LIB-V, and the CSEWG benchmarks

    International Nuclear Information System (INIS)

    A 70-group library, LIB-V, generated with the NJOY processing code from ENDF/B-V, is tested on most of the Cross Section Evaluation Working Group (CSEWG) fast reactor benchmarks. Every experimental measurement reported in the benchmark specifications is compared to both diffusion theory and transport theory calculations. Several comparisons with prior benchmark calculations attempt to assess the effects of data and code improvements

  16. Microbiota aeróbia conjuntival nas conjuntivites adenovirais Ocular flora in adenoviral conjunctivitis

    Directory of Open Access Journals (Sweden)

    Eliane Mayumi Nakano

    2002-06-01

    Full Text Available Objectives: To study the aerobic conjunctival flora in patients with a clinical picture of acute viral conjunctivitis. Method: Thirty patients between 18 and 40 years of age with adenoviral conjunctivitis and 30 patients without the disease underwent collection of conjunctival material for culture. The patients with adenoviral conjunctivitis were examined within 3 days of the onset of symptoms. Cultures were performed on blood agar and chocolate agar media. Patients using topical or systemic medication, contact lens wearers, and those with previous ocular disease or systemic disease were excluded. Results: Conjunctival cultures were positive significantly more often in patients with adenoviral conjunctivitis (33.3%, Haemophilus influenzae in 50% and Streptococcus pneumoniae in 50%) than in the control group (6.6%, coagulase-negative Staphylococcus). Patients with conjunctivitis and positive cultures did not differ from those with negative cultures in any of the clinical criteria analyzed. Conclusion: Patients with adenoviral conjunctivitis showed a higher frequency of positive conjunctival cultures than normal controls. Patients with adenoviral conjunctivitis and positive cultures had a clinical course similar to that of patients with negative cultures. The agents isolated from the conjunctival flora in the conjunctivitis group differed from those observed in the normal group; however, the culture results showed no correlation with clinical course.

  17. Information about AER working group A on improvement, extension and validation of parametrized few-group libraries for WWER-440 and WWER-1000

    International Nuclear Information System (INIS)

    The joint AER WG A and WG B held their seventeenth meeting in Modra-Harmonia (near NPP Jaslovske Bohunice), Slovak Republic, on 14-15 April 2008. The objectives of the meeting, the content of the presentations, and future activities are briefly described in this paper. (Author)

  18. Effect of Doppler Radial Velocity Data Assimilation on the Simulation of a Typhoon Approaching Taiwan: A Case Study of Typhoon Aere (2004)

    Directory of Open Access Journals (Sweden)

    Hsin-Hung Lin

    2011-01-01

    Full Text Available Compared to conventional data, radar observations have the advantage of high spatial and temporal resolution, and Doppler radars are capable of capturing detailed characteristics of flow fields, including typhoon circulation. In this study, the possible improvement of short-term typhoon predictions near Taiwan using Doppler radial wind observations is explored, particularly with regard to related rainfall forecasts over the mountainous island. The case of Typhoon Aere (2004) was chosen for study, and a series of experiments were carried out using the Penn State University/National Center for Atmospheric Research (PSU/NCAR) Mesoscale Model Version 5 (MM5) with its three-dimensional variational (3D-VAR) data assimilation system. The results show that once the Doppler radial velocities were assimilated into the model, the typhoon circulation intensified within one hour. However, when Typhoon Aere approached from the east and only the western half of its core area could be observed by the radar, the assimilation caused the typhoon to deflect southward due to the incomplete and uneven data coverage. In another experiment, in which Doppler radar data assimilation did not start until Typhoon Aere had moved close enough that its entire core region could be observed, such a track deflection was avoided. Overall, the assimilation of Doppler radial velocity data reduced the intensity error (in wind speed) by about 25%. Furthermore, the improvements in the location, intensity, and circulation structure of Typhoon Aere led to better rainfall prediction over the island of Taiwan.

  19. 75 FR 27332 - AER NY-Gen, LLC; Eagle Creek Hydro Power, LLC; Eagle Creek Water Resources, LLC; Eagle Creek Land...

    Science.gov (United States)

    2010-05-14

    ... Energy Regulatory Commission AER NY-Gen, LLC; Eagle Creek Hydro Power, LLC; Eagle Creek Water Resources... Creek Hydro Power, LLC, Eagle Creek Water Resources, LLC, and Eagle Creek Land Resources, LLC.... For the transferee: Mr. Paul Ho, Eagle Creek Hydro Power, LLC, Eagle Creek Water Resources, LLC,...

  20. 77 FR 13592 - AER NY-Gen, LLC; Eagle Creek Hydro Power, LLC, Eagle Creek Water Resources, LLC, Eagle Creek Land...

    Science.gov (United States)

    2012-03-07

    ... Energy Regulatory Commission AER NY-Gen, LLC; Eagle Creek Hydro Power, LLC, Eagle Creek Water Resources... Power, LLC, Eagle Creek Water Resources, LLC, and Eagle Creek Land Resources, LLC (transferees) filed an...) 805-1469. Transferees: Mr. Bernard H. Cherry, Eagle Creek Hydro Power, LLC, Eagle Creek...

  1. Nova proposta de teste incremental de remada na avaliação aeróbia de surfistas

    Directory of Open Access Journals (Sweden)

    Felipe Bercht CANOZZI

    2015-09-01

    Full Text Available Abstract: The objectives of this study were: 1) to verify the blood lactate and heart rate (HR) responses during a surf-specific paddling field protocol; and 2) to correlate the aerobic capacity and power indices determined in this specific protocol with time of practice in the sport and with anthropometric variables. Nine subjects (24 ± 4.5 years; 72.2 ± 6.7 kg; 178.4 ± 4.8 cm) took part in the study and performed a progressive, intermittent, shuttle-type paddling test on their own surfboards, with initial velocities between 1-1.1 m/s and increments of 0.05 m/s every 3 min until voluntary exhaustion. Linear and exponential responses were observed for HR and blood lactate, respectively, during the incremental protocol. This behavior was similar to that shown during incremental protocols designed to assess aerobic capacity and power in other cyclic sports. In addition, significant correlations were found between peak velocity (PV) and the velocity corresponding to the onset of blood lactate accumulation (vOBLA) (r = 0.87, p = 0.005), and between PV and time of surfing practice (r = 0.70, p = 0.03). However, no significant correlations were found between PV or vOBLA and any of the anthropometric variables measured. Thus, we conclude that the surf-specific incremental paddling protocol used in the present study could be a useful tool for determining indices related to the aerobic capacity (vOBLA) and aerobic power (PV) of surfers.
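
    As a rough illustration of the correlation analysis reported above, the following Python sketch computes Pearson correlations on hypothetical paddling-test values; the arrays are invented stand-ins for the study's PV, vOBLA and practice-time data, not the actual measurements.

```python
from scipy.stats import pearsonr

# Hypothetical data: peak velocity (PV, m/s), velocity at the onset of blood
# lactate accumulation (vOBLA, m/s) and years of surfing practice for 9 surfers.
pv    = [1.35, 1.40, 1.30, 1.45, 1.25, 1.50, 1.38, 1.42, 1.33]
vobla = [1.20, 1.28, 1.15, 1.32, 1.10, 1.36, 1.22, 1.30, 1.18]
years = [6, 8, 4, 10, 3, 12, 7, 9, 5]

r1, p1 = pearsonr(pv, vobla)   # association between PV and vOBLA
r2, p2 = pearsonr(pv, years)   # association between PV and time of practice
print(f"PV vs vOBLA:    r = {r1:.2f}, p = {p1:.3f}")
print(f"PV vs practice: r = {r2:.2f}, p = {p2:.3f}")
```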

  2. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  3. PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms

    OpenAIRE

    Dreher, Patrick; Byun, Chansup; Hill, Chris; Gadepally, Vijay; Kuszmaul, Bradley; Kepner, Jeremy

    2016-01-01

    The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many...

  4. Atomic Energy Research benchmark activity

    International Nuclear Information System (INIS)

    The test problems utilized in the validation and verification of computer programs in Atomic Energy Research are collected in one place. This is the first step towards issuing a volume in which tests for VVER are collected, along with reference solutions and a number of solutions. The benchmarks do not include the ZR-6 experiments, because they have been published, along with a number of comparisons, in the final reports of TIC. The present collection focuses on operational and mathematical benchmarks, which cover almost the entire range of reactor calculations. (Author)

  5. 3-D neutron transport benchmarks

    International Nuclear Information System (INIS)

    A set of 3-D neutron transport benchmark problems proposed by the Osaka University to NEACRP in 1988 has been calculated by many participants and the corresponding results are summarized in this report. The results of Keff, control rod worth and region-averaged fluxes for the four proposed core models, calculated by using various 3-D transport codes are compared and discussed. The calculational methods used were: Monte Carlo, Discrete Ordinates (Sn), Spherical Harmonics (Pn), Nodal Transport and others. The solutions of the four core models are quite useful as benchmarks for checking the validity of 3-D neutron transport codes

  6. A Study on Benchmarking Models and Frameworks in Industrial SMEs: Challenges and Issues

    Directory of Open Access Journals (Sweden)

    Masoomeh Zeinalnezhad

    2011-01-01

    Full Text Available This paper is based on a literature review of recent publications in the field of benchmarking methodology implemented in small and medium enterprises, with regard to measuring and benchmarking the upstream, leading or developmental aspects of organizations. Benchmarking has been recognized as an essential tool for continuous improvement and competitiveness. It can also help SMEs to improve their operational and financial performance. However, only a few entrepreneurs turn to benchmarking implementation, due to lack of time and resources. In this study, current benchmarking models (2005 onwards) dedicated specifically to SMEs have been identified, and their characteristics and objectives are discussed. Key findings from this review confirm that this is an under-developed area of research and that most practitioner approaches are focused on benchmarking practices within SMEs. There is a need to extend the theoretical and practical aspects of benchmarking in SMEs by studying the process of benchmarking with regard to the novel concept of lead benchmarking as a possible means of achieving increased radical and innovative transformation in organizational change. From the review it emerged that lead, forward-looking and predictive benchmarking have not been considered in SMEs, and future research could include them.

  7. Isolamento de bactérias aeróbias e sua sensibilidade a antimicrobianos em processos de osteomielite canina

    OpenAIRE

    Simionato A.C.; Ramos M.C.C.; Coutinho S.D.A.

    2003-01-01

    The objective of this work was to study the presence of aerobic bacteria in 20 dogs with osteomyelitis resulting from bone exposure. The bacteria were identified with the API-Bio Mérieux system, and the in vitro susceptibility of the microorganisms to 14 different antibacterial agents was tested by the agar diffusion method. The bone most often affected by the infection was the tibia (35%). Gram-positive bacteria accounted for 68.3% of the isolates and Gram-negative bacteria for 31.7%. Staphylococcus spp, Streptococcus spp and ...

  8. Benchmarking biodiversity performances of farmers

    NARCIS (Netherlands)

    Snoo, de G.R.; Lokhorst, A.M.; Dijk, van J.; Staats, H.; Musters, C.J.M.

    2010-01-01

    Farmers are the key players when it comes to the enhancement of farmland biodiversity. In this study, a benchmark system that focuses on improving farmers’ nature conservation was developed and tested among Dutch arable farmers in different social settings. The results show that especially tailored

  9. Benchmark calculations for EGS5

    International Nuclear Information System (INIS)

    In the past few years, EGS4 has undergone an extensive upgrade to EGS5, particularly in the areas of low-energy electron physics, low-energy photon physics, PEGS cross-section generation, and the conversion of the coding from Mortran to Fortran. Benchmark calculations have been made to assure the accuracy, reliability and high quality of the EGS5 code system. This study reports three benchmark examples that show the successful upgrade from EGS4 to EGS5, based on the excellent agreement among EGS4, EGS5 and measurements. The first benchmark example is the 1969 Crannell experiment measuring the three-dimensional distribution of energy deposition for 1-GeV electron showers in water and aluminum tanks. The second example is the 1995 measurement of Compton-scattered spectra for 20-40 keV linearly polarized photons by Namito et al. at KEK, which was a main part of the low-energy photon expansion work for both EGS4 and EGS5. The third example is the 1986 heterogeneity benchmark experiment by Shortt et al., who used a monoenergetic 20-MeV electron beam hitting the front face of a water tank containing both air and aluminum cylinders and measured the spatial depth-dose distribution using a small solid-state detector. (author)

  10. Nominal GDP: Target or Benchmark?

    OpenAIRE

    Hetzel, Robert L.

    2015-01-01

    Some observers have argued that the Federal Reserve would best fulfill its mandate by adopting a target for nominal gross domestic product (GDP). Insights from the monetarist tradition suggest that nominal GDP targeting could be destabilizing. However, adopting benchmarks for both nominal and real GDP could offer useful information about when monetary policy is too tight or too loose.

  11. Monte Carlo photon benchmark problems

    International Nuclear Information System (INIS)

    Photon benchmark calculations have been performed to validate the MCNP Monte Carlo computer code. These are compared to both the COG Monte Carlo computer code and either experimental or analytic results. The calculated solutions indicate that the Monte Carlo method, and MCNP and COG in particular, can accurately model a wide range of physical problems. 8 refs., 5 figs

  12. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes comparison of these websites against a list of criterion and presents a list of services that are most commonly deployed by the selected websites. In addition to that, the investigators proposed a list of services that could be provided via the KAUST library website.

  13. Guideline for benchmarking thermal treatment systems for low-level mixed waste

    International Nuclear Information System (INIS)

    A process for benchmarking low-level mixed waste (LLMW) treatment technologies has been developed. When used in conjunction with the identification and preparation of surrogate waste mixtures, and with defined quality assurance and quality control procedures, the benchmarking process will effectively streamline the selection of treatment technologies being considered by the US Department of Energy (DOE) for LLMW cleanup and management. Following the quantitative template provided in the benchmarking process will greatly increase the technical information available for the decision-making process. The additional technical information will remove a large part of the uncertainty in the selection of treatment technologies. It is anticipated that the use of the benchmarking process will minimize technology development costs and overall treatment costs. In addition, the benchmarking process will enhance development of the most promising LLMW treatment processes and aid in transferring the technology to the private sector. To instill inherent quality, the benchmarking process is based on defined criteria and a structured evaluation format, which are independent of any specific conventional treatment or emerging process technology. Five categories of benchmarking criteria have been developed for the evaluation: operation/design; personnel health and safety; economics; product quality; and environmental quality. This benchmarking document gives specific guidance on what information should be included and how it should be presented. A standard format for reporting is included in Appendix A and B of this document. Special considerations for LLMW are presented and included in each of the benchmarking categories

  14. Guideline for benchmarking thermal treatment systems for low-level mixed waste

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, D.P.; Gibson, L.V. Jr.; Hermes, W.H. [Martin Marietta Energy Systems, Inc., Oak Ridge, TN (United States); Bastian, R.E. [Focus Environmental, Inc., Knoxville, TN (United States); Davis, W.T. [Tennessee Univ., Knoxville, TN (United States)

    1994-01-01

    A process for benchmarking low-level mixed waste (LLMW) treatment technologies has been developed. When used in conjunction with the identification and preparation of surrogate waste mixtures, and with defined quality assurance and quality control procedures, the benchmarking process will effectively streamline the selection of treatment technologies being considered by the US Department of Energy (DOE) for LLMW cleanup and management. Following the quantitative template provided in the benchmarking process will greatly increase the technical information available for the decision-making process. The additional technical information will remove a large part of the uncertainty in the selection of treatment technologies. It is anticipated that the use of the benchmarking process will minimize technology development costs and overall treatment costs. In addition, the benchmarking process will enhance development of the most promising LLMW treatment processes and aid in transferring the technology to the private sector. To instill inherent quality, the benchmarking process is based on defined criteria and a structured evaluation format, which are independent of any specific conventional treatment or emerging process technology. Five categories of benchmarking criteria have been developed for the evaluation: operation/design; personnel health and safety; economics; product quality; and environmental quality. This benchmarking document gives specific guidance on what information should be included and how it should be presented. A standard format for reporting is included in Appendix A and B of this document. Special considerations for LLMW are presented and included in each of the benchmarking categories.

  15. Validation of CENDL and JEFF evaluated nuclear data files for TRIGA calculations through the analysis of integral parameters of TRX and BAPL benchmark lattices of thermal reactors

    International Nuclear Information System (INIS)

    The aim of this paper is to present the validation of the evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 through the analysis of the integral parameters of TRX and BAPL benchmark lattices of thermal reactors, in support of the neutronics analysis of the TRIGA Mark-II Research Reactor at AERE, Bangladesh. In this process, the 69-group cross-section library for the lattice code WIMS was generated from the basic evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 with the help of the nuclear data processing code NJOY99.0. Integral measurements on the thermal reactor lattices TRX-1, TRX-2, BAPL-UO2-1, BAPL-UO2-2 and BAPL-UO2-3 serve as standard benchmarks for testing nuclear data files and were selected for this analysis. The integral parameters of these lattices were calculated using the lattice transport code WIMSD-5B based on the generated 69-group cross-section library. The calculated integral parameters were compared to the measured values as well as to the results of the Monte Carlo code MCNP. In most cases, the integral parameters show good agreement with the experiment and with the MCNP results. In addition, the group constants in WIMS format for the isotopes U-235 and U-238 from the two data files were compared using the WIMS library utility code WILLIE and were found to be nearly identical, with only insignificant differences. This analysis therefore validates the evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 against the integral parameters of the TRX and BAPL lattices and supports further neutronics analysis of the TRIGA Mark-II research reactor at AERE, Dhaka, Bangladesh.

  16. CAVIAR: a 45k neuron, 5M synapse, 12G connects/s AER hardware sensory-processing- learning-actuating system for high-speed visual object recognition and tracking.

    Science.gov (United States)

    Serrano-Gotarredona, Rafael; Oster, Matthias; Lichtsteiner, Patrick; Linares-Barranco, Alejandro; Paz-Vicente, Rafael; Gomez-Rodriguez, Francisco; Camunas-Mesa, Luis; Berner, Raphael; Rivas-Perez, Manuel; Delbruck, Tobi; Liu, Shih-Chii; Douglas, Rodney; Hafliger, Philipp; Jimenez-Moreno, Gabriel; Civit Ballcels, Anton; Serrano-Gotarredona, Teresa; Acosta-Jimenez, Antonio J; Linares-Barranco, Bernabé

    2009-09-01

    This paper describes CAVIAR, a massively parallel hardware implementation of a spike-based sensing-processing-learning-actuating system inspired by the physiology of the nervous system. CAVIAR uses the asynchronous address-event representation (AER) communication framework and was developed in the context of a European Union funded project. It has four custom mixed-signal AER chips, five custom digital AER interface components, 45k neurons (spiking cells), up to 5M synapses, performs 12G synaptic operations per second, and achieves millisecond object recognition and tracking latencies. PMID:19635693

  17. Niveles de intensidad de la música durante un torneo de resistencia aeróbica en Costa Rica Music intensity levels during an aerobics endurance tournament in Costa Rica

    OpenAIRE

    Yamileth Chacón Araya; José Moncada Jiménez

    2008-01-01

    The purpose of this article is to describe the noise levels generated during an aerobic endurance competition and to analyze the possible health implications of noise pollution. Aerobic dance is a form of exercise that has spread throughout the world, making it possible to practice a physical activity that combines music and movement. By including the musical element in aerobic dance classes, the people who practice this modality...

  18. Benchmarking urban energy efficiency in the UK

    International Nuclear Information System (INIS)

    This study asks what is the ‘best’ way to measure urban energy efficiency. There has been recent interest in identifying efficient cities so that best practices can be shared, a process known as benchmarking. Previous studies have used relatively simple metrics that provide limited insight on the complexity of urban energy efficiency and arguably fail to provide a ‘fair’ measure of urban performance. Using a data set of 198 urban UK local administrative units, three methods are compared: ratio measures, regression residuals, and data envelopment analysis. The results show that each method has its own strengths and weaknesses regarding the ease of interpretation, ability to identify outliers and provide consistent rankings. Efficient areas are diverse but are notably found in low income areas of large conurbations such as London, whereas industrial areas are consistently ranked as inefficient. The results highlight the shortcomings of the underlying production-based energy accounts. Ideally urban energy efficiency benchmarks would be built on consumption-based accounts, but interim recommendations are made regarding the use of efficiency measures that improve upon current practice and facilitate wider conversations about what it means for a specific city to be energy-efficient within an interconnected economy. - Highlights: • Benchmarking is a potentially valuable method for improving urban energy performance. • Three different measures of urban energy efficiency are presented for UK cities. • Most efficient areas are diverse but include low-income areas of large conurbations. • Least efficient areas perform industrial activities of national importance. • Improve current practice with grouped per capita metrics or regression residuals
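
    As a rough illustration of the regression-residual approach to efficiency benchmarking mentioned in the highlights, the sketch below fits a simple linear model of per-capita energy use on hypothetical drivers and ranks areas by their residuals; all variable names and values are invented for illustration, not drawn from the study's UK dataset.

```python
import numpy as np

# Hypothetical example: benchmark urban energy efficiency via regression residuals.
# Each area is described by per-capita energy use and explanatory drivers
# (e.g. income, heating degree days); names and values are illustrative only.
rng = np.random.default_rng(0)
n_areas = 198
income = rng.normal(25_000, 5_000, n_areas)        # GBP per capita
degree_days = rng.normal(2_300, 300, n_areas)      # heating degree days
energy = 0.04 * income + 2.5 * degree_days + rng.normal(0, 400, n_areas)  # kWh per capita

# Fit a linear model of energy use on the drivers and take residuals:
# areas using less energy than the model predicts (negative residuals)
# are ranked as relatively efficient.
X = np.column_stack([np.ones(n_areas), income, degree_days])
coef, *_ = np.linalg.lstsq(X, energy, rcond=None)
residuals = energy - X @ coef

ranking = np.argsort(residuals)   # most efficient (largest negative residual) first
print("Most efficient areas (indices):", ranking[:5])
print("Least efficient areas (indices):", ranking[-5:])
```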

  19. Two benchmarks for qualification of pressure vessel fluence calculational methodology

    International Nuclear Information System (INIS)

    Two benchmarks for the qualification of the pressure-vessel fluence calculational methodology were formulated and are briefly described. The Pool Critical Assembly (PCA) benchmark is based on the experiments performed at the PCA in Oak Ridge. The measured quantities to be compared against the calculated values are the equivalent fission fluxes at several locations in front of, behind, and inside the pressure-vessel wall simulator. This benchmark is particularly suitable for testing the capability of the calculational methodology and cross-section libraries to predict in-vessel gradients, because only a few approximations are necessary in the analysis. The HBR-2 benchmark is based on the data for the H.B. Robinson-2 plant, which is a 2,300 MW (thermal) pressurized light-water reactor. The benchmark provides the reactor geometry, the material compositions, the core power distributions, and the power history data. The quantities to be calculated are the specific activities of the radiometric monitors that were irradiated in the surveillance capsule and in the cavity location during one fuel cycle. The HBR-2 benchmark requires modeling approximations, power-to-neutron-source conversion, and treatment of time-dependent variations. It can therefore be used to test the overall performance and adequacy of the calculational methodology for power-reactor pressure-vessel flux calculations. Both benchmarks were analyzed with the DORT code and the BUGLE-96 cross-section library, which is based on ENDF/B-VI evaluations. The calculations agreed with the measurements within 10%, and the calculations underpredicted the measurements in all cases. This indicates that the ENDF/B-VI cross sections resolve most of the discrepancies between the measurements and calculations. The decrease of the C/M (calculated-to-measured) ratios with increased thickness of iron, which was typical for pre-ENDF/B-VI libraries, is almost completely removed
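
    As a rough illustration of the comparison described above, the sketch below computes calculated-to-measured (C/M) ratios for a few radiometric monitors; the monitor names and activity values are hypothetical placeholders, not the HBR-2 or PCA benchmark data.

```python
import numpy as np

# Hypothetical specific activities of radiometric monitors (Bq/mg): measured
# values from a surveillance capsule and values predicted by a transport code.
monitors   = ["Fe-54(n,p)", "Ni-58(n,p)", "Cu-63(n,a)", "Np-237(n,f)"]
measured   = np.array([1.52e3, 2.10e4, 8.4e1, 5.6e3])
calculated = np.array([1.43e3, 1.95e4, 7.9e1, 5.2e3])

cm = calculated / measured   # C/M ratio per monitor
for name, ratio in zip(monitors, cm):
    print(f"{name:12s} C/M = {ratio:.2f}")
print(f"mean C/M = {cm.mean():.2f}  (agreement within 10% of unity would match the text)")
```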

  20. International handbook of evaluated criticality safety benchmark experiments

    International Nuclear Information System (INIS)

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Organization for Economic Cooperation and Development - Nuclear Energy Agency (OECD-NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculational techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirement and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span over 55,000 pages and contain 516 evaluations with benchmark specifications for 4,405 critical, near critical, or subcritical configurations, 24 criticality alarm placement / shielding configurations with multiple dose points for each, and 200 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications. Experiments that are found unacceptable for use as criticality safety benchmark experiments are discussed in these evaluations; however, benchmark specifications are not derived for such experiments (in some cases models are provided in an appendix). Approximately 770 experimental configurations are categorized as unacceptable for use as criticality safety benchmark experiments. Additional evaluations are in progress and will be

  1. BONFIRE: benchmarking computers and computer networks

    OpenAIRE

    Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker

    2011-01-01

    The benchmarking concept is not new in the field of computing or computer networking. With “benchmarking tools”, one usually refers to a program or set of programs, used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Servi...

  2. How Benchmarking and Higher Education Came Together

    Science.gov (United States)

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  3. A Framework for Urban Transport Benchmarking

    OpenAIRE

    Henning, Theuns; Essakali, Mohammed Dalil; Oh, Jung Eun

    2011-01-01

    This report summarizes the findings of a study aimed at exploring key elements of a benchmarking framework for urban transport. Unlike many industries where benchmarking has proven to be successful and straightforward, the multitude of the actors and interactions involved in urban transport systems may make benchmarking a complex endeavor. It was therefore important to analyze what has bee...

  4. Benchmarking: Achieving the best in class

    Energy Technology Data Exchange (ETDEWEB)

    Kaemmerer, L

    1996-05-01

    Oftentimes, people find the process of organizational benchmarking an onerous task, or, because they do not fully understand the nature of the process, end up with results that are less than stellar. This paper presents the challenges of benchmarking and reasons why benchmarking can benefit an organization in today's economy.

  5. The LDBC Social Network Benchmark: Interactive Workload

    NARCIS (Netherlands)

    Erling, O.; Averbuch, A.; Larriba-Pey, J.; Chafi, H.; Gubichev, A.; Prat, A.; Pham, M.D.; Boncz, P.A.

    2015-01-01

    The Linked Data Benchmark Council (LDBC) is now two years underway and has gathered strong industrial participation for its mission to establish benchmarks, and benchmarking practices for evaluating graph data management systems. The LDBC introduced a new choke-point driven methodology for developin

  6. IT-benchmarking of clinical workflows: concept, implementation, and evaluation.

    Science.gov (United States)

    Thye, Johannes; Straede, Matthias-Christopher; Liebe, Jan-David; Hübner, Ursula

    2014-01-01

    Due to the emerging evidence of health IT as opportunity and risk for clinical workflows, health IT must undergo a continuous measurement of its efficacy and efficiency. IT-benchmarks are a proven means for providing this information. The aim of this study was to enhance the methodology of an existing benchmarking procedure by including, in particular, new indicators of clinical workflows and by proposing new types of visualisation. Drawing on the concept of information logistics, we propose four workflow descriptors that were applied to four clinical processes. General and specific indicators were derived from these descriptors and processes. 199 chief information officers (CIOs) took part in the benchmarking. These hospitals were assigned to reference groups of a similar size and ownership from a total of 259 hospitals. Stepwise and comprehensive feedback was given to the CIOs. Most participants who evaluated the benchmark rated the procedure as very good, good, or rather good (98.4%). Benchmark information was used by CIOs for getting a general overview, advancing IT, preparing negotiations with board members, and arguing for a new IT project. PMID:24825693

  7. Methodology for Benchmarking IPsec Gateways

    Directory of Open Access Journals (Sweden)

    Adam Tisovský

    2012-08-01

    Full Text Available The paper analyses the forwarding performance of an IPsec gateway over the range of offered loads. It focuses on the forwarding rate and packet loss, particularly at the gateway's performance peak and in the state of gateway overload. It explains the possible performance degradation when the gateway is overloaded by excessive offered load. The paper further evaluates different approaches for obtaining forwarding performance parameters: the widely used throughput described in RFC 1242, the maximum forwarding rate with zero packet loss, and our proposed equilibrium throughput. According to our observations, equilibrium throughput might be the most universal parameter for benchmarking security gateways, as the others may depend on the duration of the test trials. Employing equilibrium throughput would also greatly shorten the time required for benchmarking. Lastly, the paper presents a methodology and a hybrid step/binary search algorithm for obtaining the value of equilibrium throughput.
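
    As a rough illustration of a step/binary throughput search of the kind the paper describes, the sketch below finds the highest offered load with zero observed packet loss; the measure_loss helper and the gateway capacity are hypothetical stand-ins for a real test trial, and the sketch is not the authors' equilibrium-throughput procedure.

```python
def measure_loss(offered_rate_mbps: float) -> float:
    """Stand-in for one test trial: returns the packet-loss fraction at a given load.
    A real harness would drive a traffic generator against the IPsec gateway."""
    capacity = 420.0  # hypothetical forwarding capacity in Mbit/s
    return 0.0 if offered_rate_mbps <= capacity else 1.0 - capacity / offered_rate_mbps

def zero_loss_throughput(max_rate=1000.0, coarse_step=100.0, tolerance=1.0):
    # Coarse step phase: increase the offered load until loss first appears.
    low, high = 0.0, coarse_step
    while high <= max_rate and measure_loss(high) == 0.0:
        low, high = high, high + coarse_step
    # Binary search phase: narrow the interval between loss-free and lossy rates.
    while high - low > tolerance:
        mid = (low + high) / 2.0
        if measure_loss(mid) == 0.0:
            low = mid
        else:
            high = mid
    return low  # highest offered load with zero observed loss

print(f"Zero-loss throughput is about {zero_loss_throughput():.0f} Mbit/s")
```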

  8. A proposal to Asian countries with operating research reactors for making nuclear criticality safety benchmark evaluations

    International Nuclear Information System (INIS)

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) is now an official activity of the Organization for Economic Cooperation and Development - Nuclear Energy Agency (OECD-NEA). The 'International Handbook of Criticality Safety Benchmark Experiments' was prepared and is updated yearly by the working group of the project. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various nuclear criticality facilities around the world. However, the handbook lacks criticality data for 20 wt%-enriched uranium fuel. The author proposes that benchmark specifications be derived from modern research reactors in Asia. Future evaluations of these reactors will help to fill the 'enrichment gap'. (author)

  9. Gaming in a benchmarking environment. A non-parametric analysis of benchmarking in the water sector

    OpenAIRE

    De Witte, Kristof; Marques, Rui

    2009-01-01

    This paper discusses the use of benchmarking in general and its application to the drinking water sector. It systematizes the various classifications on performance measurement, discusses some of the pitfalls of benchmark studies and provides some examples of benchmarking in the water sector. After presenting in detail the institutional framework of the water sector of the Belgian region of Flanders (without benchmarking experiences), Wallonia (recently started a public benchmark) and the Net...

  10. Adapting benchmarking to project management : an analysis of project management processes, metrics, and benchmarking process models

    OpenAIRE

    Emhjellen, Kjetil

    1997-01-01

    Since the first publication on benchmarking in 1989, Robert C. Camp's “Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance”, the improvement technique of benchmarking has been established as an important tool in process-focused manufacturing and production environments. The use of benchmarking has expanded to other types of industry. Benchmarking has passed the doorstep and is now in early trials in the project and construction environment....

  11. HS06 benchmark for an ARM server

    International Nuclear Information System (INIS)

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  12. TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    Experimental results of pulse parameters and control rod worth measurements at TRIGA Mark 2 reactor in Ljubljana are presented. The measurements were performed with a completely fresh, uniform, and compact core. Only standard fuel elements with 12 wt% uranium were used. Special efforts were made to get reliable and accurate results at well-defined experimental conditions, and it is proposed to use the results as a benchmark test case for TRIGA reactors

  13. Restaurant Energy Use Benchmarking Guideline

    Energy Technology Data Exchange (ETDEWEB)

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
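
    As a rough illustration of building a benchmark metric from utility data, the sketch below normalizes hypothetical annual electricity use by floor area and flags stores well above the portfolio median; the store names, consumption figures and 15% threshold are invented for illustration and are not taken from the report.

```python
import statistics

# Hypothetical utility data: (store, annual electricity in kWh, floor area in sq ft)
stores = [
    ("Store A", 410_000, 2_800),
    ("Store B", 365_000, 2_600),
    ("Store C", 520_000, 2_900),
    ("Store D", 395_000, 2_750),
]

# Energy use intensity (EUI) as a simple benchmarking metric: kWh per sq ft per year.
eui = {name: kwh / area for name, kwh, area in stores}
median_eui = statistics.median(eui.values())

for name, value in sorted(eui.items(), key=lambda kv: kv[1]):
    flag = "  <- review" if value > 1.15 * median_eui else ""
    print(f"{name}: {value:.1f} kWh/sq ft{flag}")
```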

  14. Local Innovation Systems and Benchmarking

    OpenAIRE

    Cantner, Uwe

    2008-01-01

    This paper reviews approaches used for evaluating the performance of local or regional innovation systems. This evaluation is performed by a benchmarking approach in which a frontier production function can be determined, based on a knowledge production function relating innovation inputs and innovation outputs. In analyses on the regional level and especially when acknowledging regional innovation systems those approaches have to take into account cooperative invention and innovation - the c...

  15. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    Energy Technology Data Exchange (ETDEWEB)

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-09-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined and discussed. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR’06 are highlighted, and the future of the two projects is discussed.

  16. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and dose estimates from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models

  17. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  18. Prismatic VHTR neutronic benchmark problems

    Energy Technology Data Exchange (ETDEWEB)

    Connolly, Kevin John, E-mail: connolly@gatech.edu [Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School, Georgia Institute of Technology, Atlanta, GA (United States); Rahnema, Farzad, E-mail: farzad@gatech.edu [Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School, Georgia Institute of Technology, Atlanta, GA (United States); Tsvetkov, Pavel V. [Department of Nuclear Engineering, Texas A&M University, College Station, TX (United States)

    2015-04-15

    Highlights: • High temperature gas-cooled reactor neutronics benchmark problems. • Description of a whole prismatic VHTR core in its full heterogeneity. • Modeled using continuous energy nuclear data at a representative hot operating temperature. • Benchmark results for core eigenvalue, block-averaged power, and some selected pin fission density results. - Abstract: This paper aims to fill an apparent scarcity of benchmarks based on high temperature gas-cooled reactors. Within is a description of a whole prismatic VHTR core in its full heterogeneity and modeling using continuous energy nuclear data at a representative hot operating temperature. Also included is a core which has been simplified for ease in modeling while attempting to preserve as faithfully as possible the neutron physics of the core. Fuel and absorber pins have been homogenized from the particle level, however, the blocks which construct the core remain strongly heterogeneous. A six group multigroup (discrete energy) cross section set has been developed via Monte Carlo using the original heterogeneous core as a basis. Several configurations of the core have been solved using these two cross section sets; eigenvalue results, block-averaged power results, and some selected pin fission density results are presented in this paper, along with the six-group cross section data, so that method developers may use these problems as a standard reference point.

  19. Prismatic VHTR neutronic benchmark problems

    International Nuclear Information System (INIS)

    Highlights: • High temperature gas-cooled reactor neutronics benchmark problems. • Description of a whole prismatic VHTR core in its full heterogeneity. • Modeled using continuous energy nuclear data at a representative hot operating temperature. • Benchmark results for core eigenvalue, block-averaged power, and some selected pin fission density results. - Abstract: This paper aims to fill an apparent scarcity of benchmarks based on high temperature gas-cooled reactors. Within is a description of a whole prismatic VHTR core in its full heterogeneity and modeling using continuous energy nuclear data at a representative hot operating temperature. Also included is a core which has been simplified for ease in modeling while attempting to preserve as faithfully as possible the neutron physics of the core. Fuel and absorber pins have been homogenized from the particle level, however, the blocks which construct the core remain strongly heterogeneous. A six group multigroup (discrete energy) cross section set has been developed via Monte Carlo using the original heterogeneous core as a basis. Several configurations of the core have been solved using these two cross section sets; eigenvalue results, block-averaged power results, and some selected pin fission density results are presented in this paper, along with the six-group cross section data, so that method developers may use these problems as a standard reference point

  20. Aptidão aeróbia e amplitude dos domínios de intensidade de exercício no ciclismo

    OpenAIRE

    Renato Aparecido Corrêa Caritá; Fabrizio Caputo; Camila Coelho Greco; Benedito Sérgio Denadai

    2013-01-01

    INTRODUCTION: Determining the exercise intensity domains has important implications for prescribing aerobic training and for designing experimental studies. OBJECTIVE: To analyze the effects of aerobic fitness level on the amplitude of the exercise intensity domains during cycling. METHODS: Twelve cyclists (CIC), 11 runners (COR) and eight untrained individuals (NT) performed the following protocols on different days: 1) a progressive test to determine...

  1. Measurement of Natural and Artificial Radioactivity in Soil at Some Selected Thanas around the TRIGA Mark-II Research Reactor at AERE, Savar, Dhaka

    OpenAIRE

    Shawpan C. Sarkar; Idris Ali; Debasish Paul; Mahbubur R. Bhuiyan; Sheikh M. A. Islam

    2011-01-01

    The activity concentration of natural and fallout radionuclides in the soil at some selected Thanas around the TRIGA Mark-II Research Reactor at Atomic Energy Research Establishment (AERE), Savar, Dhaka were measured by using a high purity germanium detector (HPGe). The study revealed that only natural radionuclides were present in the samples and no trace of any artificial radionuclide was found. The average activity concentration of 238U, 232Th ...

  2. Etileno e peróxido de hidrogênio na formação de aerênquima em milho tolerante a alagamento intermitente

    Directory of Open Access Journals (Sweden)

    Marinês Ferreira Pires

    2015-09-01

    Full Text Available Abstract: The objective of this work was to evaluate the role of ethylene and hydrogen peroxide (H2O2) in aerenchyma formation in genetic selection cycles of the maize cultivar BRS 4154 under flooding. Plants from cycles C1 and C18 were subjected to flooding for 7 days, with roots collected at 0 (control, without flooding), 1 and 7 days. The following were analyzed: gene expression of the enzymes ACC synthase (ACS), ACC oxidase (ACO), superoxide dismutase (SOD) and ascorbate peroxidase (APX); ethylene production and H2O2 content; ACO enzyme activity; and the proportion of aerenchyma in the cortex. No expression of ACS or ACO was detected. There was variation in ACO activity and in ethylene production. SOD expression was higher in C1 plants and APX expression in C18 plants, with a reduction at 7 days. H2O2 content did not differ among treatments. The proportion of aerenchyma increased with time, was higher in C18 plants, and was related to the rate of aerenchyma formation. The duration of flooding and the tolerance level of the selection cycle influence ethylene production. APX expression indicates higher H2O2 production at the beginning of flooding.

  3. Benchmarks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The National Flood Hazard Layer (NFHL) data incorporates all Digital Flood Insurance Rate Map (DFIRM) databases published by FEMA, and any Letters Of Map Revision...

  4. PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms

    CERN Document Server

    Dreher, Patrick; Hill, Chris; Gadepally, Vijay; Kuszmaul, Bradley; Kepner, Jeremy

    2016-01-01

    The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many real-world big data processing systems. The PageRank pipeline benchmark builds on existing prior scalable benchmarks (Graph500, Sort, and PageRank) to create a holistic benchmark with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. The linear algebraic nature of PageRank makes it well suited to being implemented using the GraphBLAS standard. The computations are simple enough that performance predictio...
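
    As a rough illustration of the PageRank kernel underlying the proposed pipeline, the sketch below runs a dense power iteration on a tiny invented graph; a real benchmark run would use the sparse or GraphBLAS formulation on generated graphs, so this is only a toy version of the computation.

```python
import numpy as np

def pagerank(adj: np.ndarray, damping: float = 0.85, tol: float = 1e-9) -> np.ndarray:
    """Power iteration on a column-stochastic transition matrix built from adj."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Column-stochastic transition matrix; dangling nodes spread rank uniformly.
    m = np.where(out_deg > 0, adj.T / np.maximum(out_deg, 1), 1.0 / n)
    rank = np.full(n, 1.0 / n)
    while True:
        new_rank = (1 - damping) / n + damping * m @ rank
        if np.abs(new_rank - rank).sum() < tol:
            return new_rank
        rank = new_rank

# Tiny illustrative 4-node graph (row i -> column j means an edge from i to j).
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 0, 0]], dtype=float)
print(pagerank(adj))
```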

  5. Benchmarking triple stores with biological data

    CERN Document Server

    Mironov, Vladimir; Blonde, Ward; Antezana, Erick; Lindi, Bjorn; Kuiper, Martin

    2010-01-01

    We have compared the performance of five non-commercial triple stores, Virtuoso-open source, Jena SDB, Jena TDB, SWIFT-OWLIM and 4Store. We examined three performance aspects: the query execution time, scalability and run-to-run reproducibility. The queries we chose addressed different ontological or biological topics, and we obtained evidence that individual store performance was quite query specific. We identified three groups of queries displaying similar behavior across the different stores: 1) relatively short response time, 2) moderate response time and 3) relatively long response time. OWLIM proved to be a winner in the first group, 4Store in the second and Virtuoso in the third. Our benchmarking showed Virtuoso to be a very balanced performer - its response time was better than average for all the 24 queries; it showed a very good scalability and a reasonable run-to-run reproducibility.
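
    As a rough illustration of the kind of timing harness such a comparison requires, the sketch below times repeated executions of one query and reports the mean and spread; the run_query callable is a hypothetical adapter for whichever triple store is under test, not an API of any of the stores named above.

```python
import statistics
import time

def benchmark_query(run_query, store, query, runs=5):
    """Time repeated executions of one query against one store."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        run_query(store, query)   # hypothetical adapter for the store's query API
        timings.append(time.perf_counter() - start)
    return {
        "mean_s": statistics.mean(timings),
        "stdev_s": statistics.stdev(timings),   # run-to-run reproducibility
        "runs": timings,
    }

# Example with a dummy store/query pair standing in for a real SPARQL endpoint.
result = benchmark_query(lambda s, q: time.sleep(0.01), "dummy-store", "SELECT ...")
print(result["mean_s"], result["stdev_s"])
```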

  6. Benchmarking: A tool for conducting self-assessment

    International Nuclear Information System (INIS)

    There is more information on nuclear plant performance available than can reasonably be assimilated and used effectively by plant management or personnel responsible for self-assessment. Also, it is becoming increasingly more important that an effective self-assessment program uses internal parameters not only to evaluate performance, but to incorporate lessons learned from other plants. Because of the quantity of information available, it is important to focus efforts and resources in areas where safety or performance is a concern and where the most improvement can be realized. One of the techniques that is being used to effectively accomplish this is benchmarking. Benchmarking involves the use of various sources of information to self-identify a plant's strengths and weaknesses, identify which plants are strong performers in specific areas, evaluate what makes a top performer, and incorporate the success factors into existing programs. The formality with which benchmarking is being implemented varies widely depending on the objective. It can be as simple as looking at a single indicator, such as systematic assessment of licensee performance (SALP) in engineering and technical support, then surveying the top performers with specific questions. However, a more comprehensive approach may include the performance of a detailed benchmarking study. Both operational and economic indicators may be used in this type of evaluation. Some of the indicators that may be considered and the limitations of each are discussed

  7. A comprehensive benchmarking system for evaluating global vegetation models

    Directory of Open Access Journals (Sweden)

    D. I. Kelley

    2012-11-01

    Full Text Available We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a "random" model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model, SDBM), and the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). SDBM reproduces observed CO2 seasonal cycles, but its simulated net primary production (NPP) is too high compared with independent measurements. The two DGVMs show little difference for most benchmarks (including the inter-annual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
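
    The comparison against a mean-value score and a bootstrap "random" model described above can be illustrated with a normalised-mean-error style metric. The sketch below is only an illustration of that idea; the exact metric definitions used in the paper may differ, and the observation and simulation arrays are stand-ins:

        # Normalised-mean-error (NME) style score plus the two null models mentioned
        # above: the observation-mean model (which scores 1 by construction) and a
        # "random" model built by bootstrap resampling of the observations.
        import numpy as np

        def nme(sim, obs):
            return np.abs(sim - obs).mean() / np.abs(obs - obs.mean()).mean()

        rng = np.random.default_rng(0)
        obs = rng.gamma(2.0, 1.0, size=200)          # stand-in observations
        sim = 1.1 * obs + 0.1                        # stand-in model output

        mean_model = nme(np.full_like(obs, obs.mean()), obs)
        random_model = np.mean([nme(rng.choice(obs, size=obs.size, replace=True), obs)
                                for _ in range(1000)])
        print(f"model NME = {nme(sim, obs):.3f}, mean model = {mean_model:.3f}, "
              f"bootstrap model = {random_model:.3f}")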

  8. A comprehensive benchmarking system for evaluating global vegetation models

    Directory of Open Access Journals (Sweden)

    D. I. Kelley

    2013-05-01

    Full Text Available We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a "random" model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model, SDBM), the Lund-Potsdam-Jena (LPJ), and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). In general, the SDBM performs better than either of the DGVMs. It reproduces independent measurements of net primary production (NPP) but underestimates the amplitude of the observed CO2 seasonal cycle. The two DGVMs show little difference for most benchmarks (including the inter-annual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.

  9. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Mark; Brown, Jed; Shalf, John; Straalen, Brian Van; Strohmaier, Erich; Williams, Sam

    2014-05-05

    This document provides an overview of the benchmark, HPGMG, for ranking large-scale general-purpose computers for use on the Top500 list [8]. We provide a rationale for the need for a replacement for the current metric, HPL, some background on the Top500 list and the challenges of developing such a metric; we discuss our design philosophy and methodology, and give an overview of the specification of the benchmark. The primary documentation with maintained details on the specification can be found at hpgmg.org, and the Wiki and benchmark code itself can be found in the repository https://bitbucket.org/hpgmg/hpgmg.

  10. Microstructure and Mechanical Properties of AerMet 100 Ultra-high Strength Steel Joints by Laser Welding

    Institute of Scientific and Technical Information of China (English)

    LIU Fencheng; YU Xiaobin; HUANG Chunping; HE Lihua; CHEN Yuhua; BU Wende

    2015-01-01

    AerMet100 ultra-high strength steel plates with a thickness of 2 mm were welded using a CO2 laser welding system. The influences of the welding process parameters on the morphology and microstructure of the welded joints were investigated, and the mechanical properties of the joints were analyzed. The experimental results showed that the fusion zone of the welded joint mainly consisted of columnar grains, and a fine dendrite substructure grew epitaxially from the matrix. With the other conditions remaining unchanged, the weld microstructure became finer as the scanning speed increased. The solidification microstructure gradually transformed from cellular crystals into dendritic crystals, and the secondary dendrite arm spacing increased from the fusion line to the center of the fusion zone. In the fusion zone of the weld, the rapid cooling caused the formation of martensite, which made the microhardness of the fusion zone higher than that of the matrix and the heat-affected zone. The tensile strength of the welded joints was measured as 1700 MPa, which was about 87% of that of the matrix; for welded joints without defects, however, the tensile strength reached 1832 MPa, about 94% of that of the matrix.

  11. CFD Simulation of Thermal-Hydraulic Benchmark V1000CT-2 Using ANSYS CFX

    Directory of Open Access Journals (Sweden)

    Thomas Höhne

    2009-01-01

    Full Text Available Plant measured data from VVER-1000 coolant mixing experiments were used within the OECD/NEA and AER coupled code benchmarks for light water reactors to test and validate computational fluid dynamics (CFD) codes. The task is to compare the various calculations with measured data, using specified boundary conditions and core power distributions. The experiments, which are provided for CFD validation, include single loop cooling down or heating-up by disturbing the heat transfer in the steam generator through the steam valves at low reactor power and with all main coolant pumps in operation. CFD calculations have been performed using a numerical grid model of 4.7 million tetrahedral elements. The Best Practice Guidelines for the use of CFD in nuclear reactor safety applications were followed. Different advanced turbulence models were utilized in the numerical simulation. The results show a clear sector formation of the affected loop at the downcomer, lower plenum and core inlet, which corresponds to the measured values. The maximum local values of the relative temperature rise in the calculation are in the same range as in the experiment. Based on these results, it is now possible to improve the mixing models that are usually used in system codes.

  12. Gaia FGK benchmark stars: Metallicity

    Science.gov (United States)

    Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Worley, C. C.; Pancino, E.; Cantat-Gaudin, T.; Magrini, L.; Bergemann, M.; González Hernández, J. I.; Hill, V.; Lardo, C.; de Laverny, P.; Lind, K.; Masseron, T.; Montes, D.; Mucciarelli, A.; Nordlander, T.; Recio Blanco, A.; Sobeck, J.; Sordo, R.; Sousa, S. G.; Tabernero, H.; Vallenari, A.; Van Eck, S.

    2014-04-01

    Context. To calibrate automatic pipelines that determine atmospheric parameters of stars, one needs a sample of stars, or "benchmark stars", with well-defined parameters to be used as a reference. Aims: We provide detailed documentation of the iron abundance determination of the 34 FGK-type benchmark stars that are selected to be the pillars for calibration of the one billion Gaia stars. They cover a wide range of temperatures, surface gravities, and metallicities. Methods: Up to seven different methods were used to analyze a library of observed spectra with high resolution and high signal-to-noise ratio. The metallicity was determined by assuming a value of effective temperature and surface gravity obtained from fundamental relations; that is, these parameters were known a priori and independently from the spectra. Results: We present a set of metallicity values obtained in a homogeneous way for our sample of benchmark stars. In addition to this value, we provide detailed documentation of the associated uncertainties. Finally, we report a value of the metallicity of the cool giant ψ Phe for the first time. Based on NARVAL and HARPS data obtained within the Gaia DPAC (Data Processing and Analysis Consortium) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group and on data retrieved from the ESO-ADP database. Tables 6-76 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A133

  13. NFS Tricks and Benchmarking Traps

    OpenAIRE

    Seltzer, Margo; Ellard, Daniel

    2003-01-01

    We describe two modifications to the FreeBSD 4.6 NFS server to increase read throughput by improving the read-ahead heuristic to deal with reordered requests and stride access patterns. We show that for some stride access patterns, our new heuristics improve end-to-end NFS throughput by nearly a factor of two. We also show that benchmarking and experimenting with changes to an NFS server can be a subtle and challenging task, and that it is often difficult to distinguish the impact of a new ...

  14. TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    The experimental results of startup tests after reconstruction and modification of the TRIGA Mark II reactor in Ljubljana are presented. The experiments were performed with a completely fresh, compact, and uniform core. The operating conditions were well defined and controlled, so that the results can be used as a benchmark test case for TRIGA reactor calculations. Both steady-state and pulse mode operation were tested. In this paper, the following steady-state experiments are treated: critical core and excess reactivity, control rod worths, fuel element reactivity worth distribution, fuel temperature distribution, and fuel temperature reactivity coefficient

  15. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we did see potential solutions to some of our "top 10" issues, and (2) we have an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes, and (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) desires to participate in our training and to provide feedback on procedures, and (2) they welcomed the opportunity to provide feedback on working with NASA.

  16. Benchmark models, planes lines and points for future SUSY searches at the LHC

    International Nuclear Information System (INIS)

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  17. Benchmark models, planes lines and points for future SUSY searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    AbdusSalam, S.S. [The Abdus Salam International Centre for Theoretical Physics, Trieste (Italy); Allanach, B.C. [Cambridge Univ. (United Kingdom). Dept. of Applied Mathematics and Theoretical Physics; Dreiner, H.K. [Bonn Univ. (DE). Bethe Center for Theoretical Physics and Physikalisches Inst.] (and others)

    2012-03-15

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  18. Benchmark models, planes, lines and points for future SUSY searches at the LHC

    International Nuclear Information System (INIS)

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data. (orig.)

  19. A Privacy-Preserving Benchmarking Platform

    OpenAIRE

    Kerschbaum, Florian

    2010-01-01

    A privacy-preserving benchmarking platform is practically feasible, i.e. its performance is tolerable to the user on current hardware while fulfilling functional and security requirements. This dissertation designs, architects, and evaluates an implementation of such a platform. It contributes a novel (secure computation) benchmarking protocol, a novel method for computing peer groups, and a realistic evaluation of the first ever privacy-preserving benchmarking platform.

  20. Rethinking benchmark dates in international relations

    OpenAIRE

    Buzan, Barry; Lawson, George

    2014-01-01

    International Relations (IR) has an ‘orthodox set’ of benchmark dates by which much of its research and teaching is organized: 1500, 1648, 1919, 1945 and 1989. This article argues that IR scholars need to question the ways in which these orthodox dates serve as internal and external points of reference, think more critically about how benchmark dates are established, and generate a revised set of benchmark dates that better reflects macro-historical international dynamics. The first part of t...

  1. WIPP benchmark II results using SANCHO

    International Nuclear Information System (INIS)

    Results of the second Benchmark problem in the WIPP code evaluation series using the finite element dynamic relaxation code SANCHO are presented. A description of SANCHO and its model for sliding interfaces is given, along with a discussion of the various small routines used for generating stress plot data. Conclusions and a discussion of this benchmark problem, as well as recommendations for a possible third benchmark problem are presented

  2. Benchmarking for Excellence and the Nursing Process

    Science.gov (United States)

    Sleboda, Claire

    1999-01-01

    Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.

  3. The design and analysis of benchmark experiments

    OpenAIRE

    Hothorn, Torsten; Leisch, Friedrich; Zeileis, Achim; Hornik, Kurt

    2003-01-01

    The assessment of the performance of learners by means of benchmark experiments is an established exercise. In practice, benchmark studies are a tool to compare the performance of several competing algorithms for a certain learning problem. Cross-validation or resampling techniques are commonly used to derive point estimates of the performances, which are compared to identify algorithms with good properties. For several benchmarking problems, test procedures taking the variability of those point ...

  4. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  5. Pynamic: the Python Dynamic Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file IO and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide-range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.

  6. Method and system for benchmarking computers

    Science.gov (United States)

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
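
    The patented scheme described above is, in essence, a fixed-time benchmark: every system gets the same time budget and is rated by how far it progresses through a scalable task set. A minimal sketch of that idea, with a placeholder workload rather than the patented task set, might look like this:

        # Fixed-time benchmarking sketch: run scalable tasks until the budget expires,
        # then rate the system by the amount of work completed.
        import time

        def fixed_time_benchmark(budget_s=1.0):
            deadline = time.perf_counter() + budget_s
            work_units = 0
            acc = 0.0
            while time.perf_counter() < deadline:
                # one "task" at ever-finer resolution; a real benchmark would use its scalable kernel
                acc += sum(1.0 / (k * k) for k in range(1, 10_000))
                work_units += 1
            return work_units            # benchmarking rating ~ degree of progress

        print(fixed_time_benchmark(0.5), "work units completed")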

  7. Characterizing universal gate sets via dihedral benchmarking

    Science.gov (United States)

    Carignan-Dugas, Arnaud; Wallman, Joel J.; Emerson, Joseph

    2015-12-01

    We describe a practical experimental protocol for robustly characterizing the error rates of non-Clifford gates associated with dihedral groups, including small single-qubit rotations. Our dihedral benchmarking protocol is a generalization of randomized benchmarking that relaxes the usual unitary 2-design condition. Combining this protocol with existing randomized benchmarking schemes enables practical universal gate sets for quantum information processing to be characterized in a way that is robust against state-preparation and measurement errors. In particular, our protocol enables direct benchmarking of the π /8 gate even under the gate-dependent error model that is expected in leading approaches to fault-tolerant quantum computation.
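
    Protocols in this family estimate error rates from the decay of the average sequence fidelity with sequence length. The sketch below shows only the generic randomized-benchmarking fitting step (survival probability fitted to A·p^m + B), not the dihedral-specific sequences or formulas; the data are synthetic stand-ins:

        # Fit the average survival probability F(m) over sequence length m to A * p**m + B
        # and convert the decay parameter p into an average error rate per gate.
        import numpy as np
        from scipy.optimize import curve_fit

        def rb_decay(m, a, p, b):
            return a * p**m + b

        lengths = np.array([2, 4, 8, 16, 32, 64, 128], dtype=float)
        # stand-in "measured" survival probabilities (would come from experiment)
        survival = 0.5 * 0.985**lengths + 0.5 + np.random.default_rng(0).normal(0, 0.005, lengths.size)

        (a, p, b), _ = curve_fit(rb_decay, lengths, survival, p0=[0.5, 0.98, 0.5])
        avg_error_per_gate = (1 - p) / 2     # single-qubit case: r = (1 - p)(d - 1)/d with d = 2
        print(f"decay p = {p:.4f}, average error per gate ~ {avg_error_per_gate:.4%}")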

  8. Analysis of VENUS-3 benchmark experiment

    International Nuclear Information System (INIS)

    The paper presents the revision and analysis of the VENUS-3 benchmark experiment performed at CEN/SCK, Mol (Belgium). This benchmark was found to be particularly suitable for validation of current calculation tools like 3-D neutron transport codes, and in particular of the 3D sensitivity and uncertainty analysis code developed within the EFF project. The compilation of the integral experiment was integrated into the SINBAD electronic database for storing and retrieving information about shielding experiments for nuclear systems. SINBAD now includes 33 reviewed benchmark descriptions and several compilations awaiting review, among them many benchmarks relevant for pressure vessel dosimetry system validation. (author)

  9. Benchmark for evaluation and validation of reactor simulations (BEAVRS)

    International Nuclear Information System (INIS)

    Advances in parallel computing have made possible the development of high-fidelity tools for the design and analysis of nuclear reactor cores, and such tools require extensive verification and validation. This paper introduces BEAVRS, a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading patterns, and numerous in-vessel components. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from fifty-eight instrumented assemblies. Initial comparisons between calculations performed with MIT's OpenMC Monte Carlo neutron transport code and measured cycle 1 HZP test data are presented, and these results display an average deviation of approximately 100 pcm for the various critical configurations and control rod worth measurements. Computed HZP radial fission detector flux maps also agree reasonably well with the available measured data. All results indicate that this benchmark will be extremely useful in validation of coupled-physics codes and uncertainty quantification of in-core physics computational predictions. The detailed BEAVRS specification and its associated data package is hosted online at the MIT Computational Reactor Physics Group web site (http://crpg.mit.edu/), where future revisions and refinements to the benchmark specification will be made publicly available. (authors)
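
    Deviations such as the roughly 100 pcm quoted above are differences in reactivity expressed in "per cent mille" (1 pcm = 1e-5). A one-line illustration, with made-up eigenvalues rather than BEAVRS data:

        # Reactivity difference between a calculated and a measured eigenvalue, in pcm.
        k_calc, k_meas = 1.00123, 1.00025          # made-up illustrative values
        delta_rho_pcm = (1.0 / k_meas - 1.0 / k_calc) * 1e5
        print(f"{delta_rho_pcm:.1f} pcm")          # ~98 pcm for these values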

  10. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    Science.gov (United States)

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  11. Benchmarking Implementations of Functional Languages with ``Pseudoknot'', a Float-Intensive Benchmark

    NARCIS (Netherlands)

    Hartel, P.H.; Feeley, M.; Alt, M.; Augustsson, L.

    1996-01-01

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important

  12. Discussion of OECD LWR Uncertainty Analysis in Modelling Benchmark

    International Nuclear Information System (INIS)

    The demand for best-estimate calculations in nuclear reactor design and safety evaluations has increased in recent years. Uncertainty quantification has been highlighted as part of best-estimate calculations. The modelling aspects of uncertainty and sensitivity analysis are to be further developed and validated on scientific grounds in support of their performance and application to multi-physics reactor simulations. The Organization for Economic Co-operation and Development (OECD) / Nuclear Energy Agency (NEA) Nuclear Science Committee (NSC) has endorsed the creation of an Expert Group on Uncertainty Analysis in Modelling (EGUAM). Within the framework of activities of EGUAM/NSC, the OECD/NEA initiated the Benchmark for Uncertainty Analysis in Modelling for Design, Operation, and Safety Analysis of Light Water Reactors (OECD LWR UAM benchmark). The general objective of the benchmark is to propagate the predictive uncertainties of code results through complex coupled multi-physics and multi-scale simulations. The benchmark is divided into three phases, with Phase I highlighting uncertainty propagation in stand-alone neutronics calculations, while Phases II and III focus on uncertainty analysis of the reactor core and the full system, respectively. This paper discusses the progress made in the Phase I calculations, the specifications for Phase II, and the upcoming challenges in defining the Phase III exercises. The main challenges of applying uncertainty quantification to complex code systems, in particular to time-dependent coupled-physics models, are the large computational burden and the use of non-linear models (expected due to the physics coupling). (authors)
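
    The core idea of propagating input uncertainties through a simulation can be illustrated with simple Monte Carlo sampling: draw the uncertain inputs from their assumed distributions, evaluate the model for each draw, and summarise the spread of the output. The sketch below uses a hypothetical algebraic response in place of a neutronics or system code, and the input uncertainties are assumptions:

        # Minimal Monte Carlo uncertainty-propagation sketch with a stand-in model.
        import numpy as np

        def model(cross_section, coolant_density):
            # hypothetical response, e.g. some reactivity-like figure of merit
            return 1.0 + 0.05 * (cross_section - 1.0) - 0.03 * (coolant_density - 1.0)

        rng = np.random.default_rng(42)
        n_samples = 10_000
        xs = rng.normal(1.0, 0.02, n_samples)        # assumed 2% input uncertainty
        rho = rng.normal(1.0, 0.01, n_samples)       # assumed 1% input uncertainty
        out = np.array([model(a, b) for a, b in zip(xs, rho)])

        print(f"output mean = {out.mean():.5f}, std = {out.std(ddof=1):.5f} "
              f"(95% interval: {np.percentile(out, 2.5):.5f}..{np.percentile(out, 97.5):.5f})")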

  13. Benchmarking: A tool to enhance performance

    Energy Technology Data Exchange (ETDEWEB)

    Munro, J.F. [Oak Ridge National Lab., TN (United States); Kristal, J. [USDOE Assistant Secretary for Environmental Management, Washington, DC (United States); Thompson, G.; Johnson, T. [Los Alamos National Lab., NM (United States)

    1996-12-31

    The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered, with Headquarters providing facilitatory and integrative functions on an "as needed" basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors develop the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff, the ones closest to the work, must take ownership of the studies. This avoids the "check the box" mentality associated with some third party studies. This workshop will provide participants with a basic level of understanding of why the EM benchmarking team was developed and the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team is using to conduct studies. The EM benchmarking team will also encourage discussion on ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This "introduction to benchmarking" is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.

  14. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  15. General benchmarks for quantum repeaters

    CERN Document Server

    Pirandola, Stefano

    2015-01-01

    Using a technique based on quantum teleportation, we simplify the most general adaptive protocols for key distribution, entanglement distillation and quantum communication over a wide class of quantum channels in arbitrary dimension. Thanks to this method, we bound the ultimate rates for secret key generation and quantum communication through single-mode Gaussian channels and several discrete-variable channels. In particular, we derive exact formulas for the two-way assisted capacities of the bosonic quantum-limited amplifier and the dephasing channel in arbitrary dimension, as well as the secret key capacity of the qubit erasure channel. Our results establish the limits of quantum communication with arbitrary systems and set the most general and precise benchmarks for testing quantum repeaters in both discrete- and continuous-variable settings.
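
    For orientation only, the kind of repeaterless benchmark this line of work establishes can be written compactly. The formulas below are recalled from the published literature on these bounds rather than quoted from this record, with eta the transmissivity of the pure-loss bosonic channel and p the erasure probability of the qubit erasure channel:

        K_{\mathrm{loss}}(\eta) = -\log_2(1 - \eta), \qquad
        K_{\mathrm{erasure}}(p) = 1 - p \quad \text{(secret-key bits per channel use)}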

  16. Human factors reliability Benchmark exercise

    International Nuclear Information System (INIS)

    The Joint Research Centre of the European Commission has organized a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organized around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report contains the final summary reports produced by the participants in the exercise

  17. Experimental and computational benchmark tests

    International Nuclear Information System (INIS)

    A program involving principally NIST, LANL, and ORNL has been in progress for about four years now to establish a series of benchmark measurements and calculations related to the moderation and leakage of 252Cf neutrons from a source surrounded by spherical aqueous moderators of various thicknesses and compositions. The motivation for these studies comes from problems in criticality calculations concerning arrays of multiplying components, where the leakage from one component acts as a source for the other components. This talk compares experimental and calculated values for the fission rates of four nuclides - 235U, 239Pu, 238U, and 237Np - in the leakage spectrum from moderator spheres of diameters 76.2 mm, 101.6 mm, and 127.0 mm, with either pure water or enriched B-10 solutions as the moderator. Very detailed Monte Carlo calculations were done with the MCNP code, using a "light water" S(α,β) scattering kernel.

  18. Benchmark scenarios for the NMSSM

    CERN Document Server

    Djouadi, A; Ellwanger, U; Godbole, R; Hugonie, C; King, S F; Lehti, S; Moretti, S; Nikitenko, A; Rottlander, I; Schumacher, M; Teixeira, A

    2008-01-01

    We discuss constrained and semi-constrained versions of the next-to-minimal supersymmetric extension of the Standard Model (NMSSM) in which a singlet Higgs superfield is added to the two doublet superfields that are present in the minimal extension (MSSM). This leads to a richer Higgs and neutralino spectrum and allows for many interesting phenomena that are not present in the MSSM. In particular, light Higgs particles are still allowed by current constraints and could appear as decay products of the heavier Higgs states, rendering their search rather difficult at the LHC. We propose benchmark scenarios which address the new phenomenological features, consistent with present constraints from colliders and with the dark matter relic density, and with (semi-)universal soft terms at the GUT scale. We present the corresponding spectra for the Higgs particles, their couplings to gauge bosons and fermions and their most important decay branching ratios. A brief survey of the search strategies for these states a...

  19. VHTRC temperature coefficient benchmark problem

    International Nuclear Information System (INIS)

    As an activity of the IAEA Coordinated Research Programme, a benchmark problem is proposed for verification of neutronic calculation codes for a low-enriched uranium fuel high temperature gas-cooled reactor. Two problems are given on the basis of heating experiments at the VHTRC, which is a pin-in-block type critical assembly loaded mainly with 4% enriched uranium coated particle fuel. One problem, VH1-HP, asks for the temperature coefficient of reactivity to be calculated from the subcritical reactivity values at five temperature steps between room temperature, where the assembly is nearly critical, and 200°C. The other problem, VH1-HC, asks for the effective multiplication factor of nearly critical loading cores to be calculated at room temperature and 200°C. Both problems further ask for cell parameters such as migration area and spectral indices to be calculated. Experimental results corresponding to the main calculation items are also listed for comparison. (author)

  20. Benchmarking Learning and Teaching: Developing a Method

    Science.gov (United States)

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  1. Beyond Benchmarking: Value-Adding Metrics

    Science.gov (United States)

    Fitz-enz, Jac

    2007-01-01

    HR metrics has grown up a bit over the past two decades, moving away from simple benchmarking practices and toward a more inclusive approach to measuring institutional performance and progress. In this article, the acknowledged "father" of human capital performance benchmarking provides an overview of several aspects of today's HR metrics…

  2. Evaluating software verification systems: benchmarks and competitions

    NARCIS (Netherlands)

    Beyer, Dirk; Huisman, Marieke; Klebanov, Vladimir; Monahan, Rosemary

    2014-01-01

    This report documents the program and the outcomes of Dagstuhl Seminar 14171 “Evaluating Software Verification Systems: Benchmarks and Competitions”. The seminar brought together a large group of current and future competition organizers and participants, benchmark maintainers, as well as practition

  3. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    2007-01-01

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own price elasticit

  4. Benchmarking for controllere: Metoder, teknikker og muligheder

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels; Dietrichson, Lars

    2008-01-01

    The article sharpens the focus on the concept of benchmarking by presenting and discussing its different facets. Four different applications of benchmarking are described in order to show the breadth of the concept and the importance of clarifying the purpose of a benchmarking project...

  5. The Linked Data Benchmark Council Project

    NARCIS (Netherlands)

    Boncz, P.A.; Fundulaki, I.; Gubichev, A.; Larriba-Pey, J.; Neumann, T.

    2013-01-01

    Despite its fast growth and increasing popularity, the broad field of RDF and graph database systems lacks an independent authority for developing benchmarks and for neutrally assessing benchmark results through industry-strength auditing, which would make it possible to quantify and compare the performance of

  6. The role of benchmarking for yardstick competition

    International Nuclear Information System (INIS)

    With the increasing interest in yardstick regulation, there is a need to understand the most appropriate method for realigning tariffs at the outset. Benchmarking is the tool used for such realignment and is therefore a necessary first-step in the implementation of yardstick competition. A number of concerns have been raised about the application of benchmarking, making some practitioners reluctant to move towards yardstick based regimes. We assess five of the key concerns often discussed and find that, in general, these are not as great as perceived. The assessment is based on economic principles and experiences with applying benchmarking to regulated sectors, e.g. in the electricity and water industries in the UK, The Netherlands, Austria and Germany in recent years. The aim is to demonstrate that clarity on the role of benchmarking reduces the concern about its application in different regulatory regimes. We find that benchmarking can be used in regulatory settlements, although the range of possible benchmarking approaches that are appropriate will be small for any individual regulatory question. Benchmarking is feasible as total cost measures and environmental factors are better defined in practice than is commonly appreciated and collusion is unlikely to occur in environments with more than 2 or 3 firms (where shareholders have a role in monitoring and rewarding performance). Furthermore, any concern about companies under-recovering costs is a matter to be determined through the regulatory settlement and does not affect the case for using benchmarking as part of that settlement. (author)

  7. An Effective Approach for Benchmarking Implementation

    Directory of Open Access Journals (Sweden)

    B. M. Deros

    2011-01-01

    Full Text Available Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually in increasing their competitiveness in the market. The study begins with a literature review on benchmarking definitions, barriers to and advantages of implementation, and existing benchmarking frameworks. Approach: Thirty respondents were involved in the case study. They comprised industrial practitioners, who assessed the usability and practicability of the guideline, conceptual framework and computerized mini program. Results: A guideline and template were proposed to simplify the adoption of benchmarking techniques. A conceptual framework was proposed by integrating Deming's PDCA and Six Sigma DMAIC theory. It provided a step-by-step method to simplify the implementation and to optimize the benchmarking results. A computerized mini program was suggested to assist users in adopting the technique as part of an improvement project. As a result of the assessment test, the respondents found that the implementation method provided an idea for companies to initiate benchmarking implementation and guided them to achieve the desired goal as set in a benchmarking project. Conclusion: The results obtained and discussed in this study can be applied to implement benchmarking in a more systematic way and to ensure its success.

  8. Repeated Results Analysis for Middleware Regression Benchmarking

    Czech Academy of Sciences Publication Activity Database

    Bulej, Lubomír; Kalibera, T.; Tůma, P.

    2005-01-01

    Roč. 60, - (2005), s. 345-358. ISSN 0166-5316 R&D Projects: GA ČR GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords : middleware benchmarking * regression benchmarking * regression testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.756, year: 2005

  9. Benchmarking Analysis of Institutional University Autonomy in Denmark, Lithuania, Romania, Scotland, and Sweden

    DEFF Research Database (Denmark)

    respective evaluation criteria and searched for similarities and differences in approaches to higher education sectors and respective autonomy regimes in these countries. The consolidated report that precedes the benchmark reports summarises the process and key findings from the four benchmark reports. Specifically, it presents (i) the methodology and methods employed for data collection and data analysis; (ii) the comparative analysis of higher education sectors and respective education systems in these countries; and (iii) the executive summaries of the benchmark reports and key emerging patterns. The...

  10. A performance benchmark test for geodynamo simulations

    Science.gov (United States)

    Matsui, H.; Heien, E. M.

    2013-12-01

    In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. As new models and numerical methods continue to be developed, it is important to update and extend benchmarks for testing these models. The first dynamo benchmark of Christensen et al. (2001) was applied to models based on spherical harmonic expansion methods. However, only a few groups have reported results of the dynamo benchmark using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty of treating magnetic boundary conditions with local methods. On the other hand, spherical harmonic expansion methods perform poorly on massively parallel computers because global data communications are required for the spherical harmonic expansions needed to evaluate nonlinear terms. We perform benchmark tests to assess various numerical methods for the next generation of geodynamo simulations. The purpose of this benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare as many numerical methods as possible, we consider the model with the insulating magnetic boundary of Christensen et al. (2001) and with a pseudo-vacuum magnetic boundary, because pseudo-vacuum boundaries are easier to implement with local methods than insulating magnetic boundaries. In the present study, we consider two kinds of benchmarks, a so-called accuracy benchmark and a performance benchmark. In the accuracy benchmark, we compare the dynamo models using the modest Ekman and Rayleigh numbers proposed by Christensen et al. (2001). We investigate the spatial resolution required for each dynamo code to obtain less than 1% difference from the suggested solution of the benchmark test using the two magnetic boundary conditions. In the performance benchmark, we investigate computational performance under the same computational environment. We perform these

  11. A proposed benchmark problem for cargo nuclear threat monitoring

    International Nuclear Information System (INIS)

    There is currently a great deal of technical and political effort focused on reducing the risk of potential attacks on the United States involving radiological dispersal devices or nuclear weapons. This paper proposes a benchmark problem for gamma-ray and X-ray cargo monitoring with results calculated using MCNP5, v1.51. The primary goal is to provide a benchmark problem that will allow researchers in this area to evaluate Monte Carlo models for both speed and accuracy in both forward and inverse calculational codes and approaches for nuclear security applications. A previous benchmark problem was developed by one of the authors (RPG) for two similar oil well logging problems (Gardner and Verghese, 1991). One of those benchmarks has recently been used by at least two researchers in the nuclear threat area to evaluate the speed and accuracy of Monte Carlo codes combined with variance reduction techniques. This apparent need has prompted us to design this benchmark problem specifically for the nuclear threat researcher. This benchmark consists of a conceptual design and preliminary calculational results using gamma-ray interactions on a system containing three thicknesses of three different shielding materials. A point source is placed inside the three materials: lead, aluminum, and plywood. The first two materials are in right circular cylindrical form while the third is a cube. The entire system rests on a sufficiently thick lead base so as to reduce undesired scattering events. The configuration was arranged in such a manner that as a gamma ray moves from the source outward, it first passes through the lead circular cylinder, then the aluminum circular cylinder, and finally the wooden cube before reaching the detector. A 2 in. x 4 in. x 16 in. box-style NaI(Tl) detector was placed 1 m from the point source located in the center, with the 4 in. x 16 in. side facing the system. The two sources used in the benchmark are 137Cs and 235U.

  12. A proposed benchmark problem for cargo nuclear threat monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Holmes, Thomas Wesley, E-mail: twholmes@ncsu.edu [Center for Engineering Applications of Radioisotopes, Nuclear Engineering Department, North Carolina State University, Raleigh, NC 27695-7909 (United States); Calderon, Adan; Peeples, Cody R.; Gardner, Robin P. [Center for Engineering Applications of Radioisotopes, Nuclear Engineering Department, North Carolina State University, Raleigh, NC 27695-7909 (United States)

    2011-10-01

    There is currently a great deal of technical and political effort focused on reducing the risk of potential attacks on the United States involving radiological dispersal devices or nuclear weapons. This paper proposes a benchmark problem for gamma-ray and X-ray cargo monitoring with results calculated using MCNP5, v1.51. The primary goal is to provide a benchmark problem that will allow researchers in this area to evaluate Monte Carlo models for both speed and accuracy in both forward and inverse calculational codes and approaches for nuclear security applications. A previous benchmark problem was developed by one of the authors (RPG) for two similar oil well logging problems (Gardner and Verghese, 1991). One of those benchmarks has recently been used by at least two researchers in the nuclear threat area to evaluate the speed and accuracy of Monte Carlo codes combined with variance reduction techniques. This apparent need has prompted us to design this benchmark problem specifically for the nuclear threat researcher. This benchmark consists of a conceptual design and preliminary calculational results using gamma-ray interactions on a system containing three thicknesses of three different shielding materials. A point source is placed inside the three materials: lead, aluminum, and plywood. The first two materials are in right circular cylindrical form while the third is a cube. The entire system rests on a sufficiently thick lead base so as to reduce undesired scattering events. The configuration was arranged in such a manner that as a gamma ray moves from the source outward, it first passes through the lead circular cylinder, then the aluminum circular cylinder, and finally the wooden cube before reaching the detector. A 2 in. x 4 in. x 16 in. box-style NaI(Tl) detector was placed 1 m from the point source located in the center, with the 4 in. x 16 in. side facing the system. The two sources used in the benchmark are 137Cs and 235U.

  13. A proposed benchmark problem for cargo nuclear threat monitoring

    Science.gov (United States)

    Wesley Holmes, Thomas; Calderon, Adan; Peeples, Cody R.; Gardner, Robin P.

    2011-10-01

    There is currently a great deal of technical and political effort focused on reducing the risk of potential attacks on the United States involving radiological dispersal devices or nuclear weapons. This paper proposes a benchmark problem for gamma-ray and X-ray cargo monitoring with results calculated using MCNP5, v1.51. The primary goal is to provide a benchmark problem that will allow researchers in this area to evaluate Monte Carlo models for both speed and accuracy in both forward and inverse calculational codes and approaches for nuclear security applications. A previous benchmark problem was developed by one of the authors (RPG) for two similar oil well logging problems (Gardner and Verghese, 1991 [1]). One of those benchmarks has recently been used by at least two researchers in the nuclear threat area to evaluate the speed and accuracy of Monte Carlo codes combined with variance reduction techniques. This apparent need has prompted us to design this benchmark problem specifically for the nuclear threat researcher. This benchmark consists of a conceptual design and preliminary calculational results using gamma-ray interactions on a system containing three thicknesses of three different shielding materials. A point source is placed inside the three materials: lead, aluminum, and plywood. The first two materials are in right circular cylindrical form while the third is a cube. The entire system rests on a sufficiently thick lead base so as to reduce undesired scattering events. The configuration was arranged in such a manner that as a gamma ray moves from the source outward, it first passes through the lead circular cylinder, then the aluminum circular cylinder, and finally the wooden cube before reaching the detector. A 2 in. × 4 in. × 16 in. box-style NaI(Tl) detector was placed 1 m from the point source located in the center, with the 4 in. × 16 in. side facing the system. The two sources used in the benchmark are 137Cs and 235U.

  14. VVER-1000 MOX core computational benchmark

    International Nuclear Information System (INIS)

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities includes benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  15. Benchmarking--Measuring and Comparing for Continuous Improvement.

    Science.gov (United States)

    Henczel, Sue

    2002-01-01

    Discussion of benchmarking focuses on the use of internal and external benchmarking by special librarians. Highlights include defining types of benchmarking; historical development; benefits, including efficiency, improved performance, increased competitiveness, and better decision making; problems, including inappropriate adaptation; developing a…

  16. Mejora de defensas antioxidantes mediante ejercicio aeróbico en mujeres con síndrome metabólico

    Directory of Open Access Journals (Sweden)

    Manuel Rosety-Rodríguez

    2012-02-01

    Full Text Available It is currently accepted that oxidative damage plays an essential role in the pathogenesis of metabolic syndrome. Recent studies propose oxidative damage as a therapeutic target in metabolic syndrome. Accordingly, our aim was to improve the total antioxidant status (TAS) of women with metabolic syndrome by means of aerobic exercise. One hundred women with metabolic syndrome, according to the criteria of the National Cholesterol Education Program (Adult Treatment Panel III), participated voluntarily and were randomly assigned to an experimental group (n = 60) or a control group (n = 40). The experimental group followed a 12-week light-to-moderate intensity aerobic treadmill training program (5 sessions/week). Plasma TAS was determined by spectrophotometry using kits marketed by Randox Lab. The protocol was approved by an Institutional Ethics Committee. After completing the training program, TAS increased significantly (0.79 ± 0.05 vs. 1.01 ± 0.03 mmol/l; p = 0.027). There were no changes in the control group. Light-to-moderate intensity aerobic exercise increases antioxidant defenses in women with metabolic syndrome. Future longitudinal studies are needed to determine its impact on clinical outcome.

  17. Benchmarking Domain-Specific Compiler Optimizations for Variational Forms

    CERN Document Server

    Kirby, Robert C

    2012-01-01

    We examine the effect of using complexity-reducing relations to generate optimized code for the evaluation of finite element variational forms. The optimizations are implemented in a prototype code named FErari, which has been integrated as an optimizing backend to the FEniCS Form Compiler, FFC. In some cases, FErari provides very little speedup, while in other cases, we obtain reduced local operation counts by a factor of as much as 7.9 and speedups for the assembly of the global sparse matrix by as much as a factor of 2.8.
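
    No code accompanies this record, but the idea of a complexity-reducing relation can be sketched in a few lines: if the element tensor is assembled as dot products between fixed reference-tensor rows and a per-cell geometry vector, and one row happens to be a scalar multiple of another, the second entry can be obtained with a single multiplication instead of a full dot product. The sketch below only illustrates that idea; the arrays, names and tolerances are invented, and it is not FErari or FFC code.

```python
import numpy as np

def build_plan(A0, tol=1e-12):
    """For each row of A0, record either a full dot product or reuse of an earlier row."""
    plan, computed = [], []
    for i, row in enumerate(A0):
        found = None
        for j in computed:
            ref = A0[j]
            denom = float(ref @ ref)
            if denom > tol:
                alpha = float(row @ ref) / denom
                if np.allclose(row, alpha * ref, atol=tol):
                    found = ("scale", j, alpha)
                    break
        plan.append(found if found else ("dot", i))
        if found is None:
            computed.append(i)
    return plan

def apply_plan(A0, G, plan):
    """Evaluate A[i] = A0[i] . G, reusing earlier entries where the plan allows it."""
    A = np.empty(len(A0))
    for i, step in enumerate(plan):
        if step[0] == "dot":
            A[i] = A0[i] @ G            # full dot product
        else:
            _, j, alpha = step
            A[i] = alpha * A[j]         # one multiplication instead of a dot product
    return A

A0 = np.array([[1.0, 2.0, 0.0], [2.0, 4.0, 0.0], [0.0, 1.0, 1.0]])  # row 1 = 2 * row 0
G = np.array([0.3, -1.0, 0.7])
print(apply_plan(A0, G, build_plan(A0)))   # same result as A0 @ G, with fewer operations
```

    The actual optimizer works with a richer set of relations than pure collinearity; this only conveys the flavour of the operation-count reduction.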

  18. Cross Section Evaluation Working Group benchmark specifications. Volume 2. Supplement

    International Nuclear Information System (INIS)

    Neutron and photon flux spectra have been measured and calculated for the case of neutrons produced by D-T reactions streaming through a cylindrical iron duct surrounded by concrete. Measurements and calculations have also been obtained when the iron duct is partially filled by a laminated stainless steel and borated polyethylene shadow bar. Schematic diagrams of the experimental apparatus are included

  19. Deliverable 1.2 Specification of industrial benchmark tests

    DEFF Research Database (Denmark)

    Arentoft, Mogens; Ravn, Bjarne Gottlieb

    Technical report for the Growth project: IMPRESS, Improvement of precision in forming by simultaneous modelling of deflections in workpiece-die-press system - Output from WP1: Numerical simulation of deflections in workpiece-die-press system.

  20. Validation of gadolinium burnout using PWR benchmark specification

    International Nuclear Information System (INIS)

    Graphical abstract: - Highlights: • We present a methodology for validation of gadolinium burnout in a PWR. • We model a 17 × 17 PWR fuel assembly using the MCB code. • We demonstrate C/E ratios of measured and calculated concentrations of Gd isotopes. • The C/E for Gd154, Gd156, Gd157, Gd158 and Gd160 shows good agreement, within ±10%. • The C/E for Gd152 and Gd155 shows poor agreement, outside ±10%. - Abstract: The paper presents a comparative analysis of measured and calculated concentrations of gadolinium isotopes in spent nuclear fuel from the Japanese Ohi-2 PWR. The irradiation of the 17 × 17 fuel assembly containing pure uranium and gadolinia-bearing fuel pins was numerically reconstructed using the Monte Carlo Continuous Energy Burnup Code – MCB. The reference concentrations of gadolinium isotopes were measured in the early 1990s at the Japan Atomic Energy Research Institute. It seems that the measured concentrations were never used for validation of gadolinium burnout. In our study we fill this gap and assess the quality of both the applied numerical methodology and the experimental data. Additionally, we show the time evolution of the infinite neutron multiplication factor Kinf, the FIMA burnup, and the U235 and Gd155–Gd158 concentrations. Gadolinium-based materials are commonly used in thermal reactors as burnable absorbers due to the large neutron absorption cross-sections of Gd155 and Gd157

  1. Validation of gadolinium burnout using PWR benchmark specification

    Energy Technology Data Exchange (ETDEWEB)

    Oettingen, Mikołaj, E-mail: moettin@agh.edu.pl; Cetnar, Jerzy, E-mail: cetnar@mail.ftj.agh.edu.pl

    2014-07-01

    Graphical abstract: - Highlights: • We present a methodology for validation of gadolinium burnout in a PWR. • We model a 17 × 17 PWR fuel assembly using the MCB code. • We demonstrate C/E ratios of measured and calculated concentrations of Gd isotopes. • The C/E for Gd154, Gd156, Gd157, Gd158 and Gd160 shows good agreement, within ±10%. • The C/E for Gd152 and Gd155 shows poor agreement, outside ±10%. - Abstract: The paper presents a comparative analysis of measured and calculated concentrations of gadolinium isotopes in spent nuclear fuel from the Japanese Ohi-2 PWR. The irradiation of the 17 × 17 fuel assembly containing pure uranium and gadolinia-bearing fuel pins was numerically reconstructed using the Monte Carlo Continuous Energy Burnup Code – MCB. The reference concentrations of gadolinium isotopes were measured in the early 1990s at the Japan Atomic Energy Research Institute. It seems that the measured concentrations were never used for validation of gadolinium burnout. In our study we fill this gap and assess the quality of both the applied numerical methodology and the experimental data. Additionally, we show the time evolution of the infinite neutron multiplication factor Kinf, the FIMA burnup, and the U235 and Gd155–Gd158 concentrations. Gadolinium-based materials are commonly used in thermal reactors as burnable absorbers due to the large neutron absorption cross-sections of Gd155 and Gd157.
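
    Once the measured and calculated concentration sets are in hand, the C/E comparison reported above is plain arithmetic; a tiny Python illustration (the isotope values below are placeholders, not the Ohi-2 measurements) is:

```python
# Hypothetical calculated (C) and experimental (E) Gd concentrations, arbitrary units.
measured = {"Gd155": 1.00e-5, "Gd157": 8.00e-6}    # E, placeholder values
calculated = {"Gd155": 1.05e-5, "Gd157": 7.60e-6}  # C, placeholder values

for isotope, e in measured.items():
    c = calculated[isotope]
    ratio = c / e
    print(f"{isotope}: C/E = {ratio:.3f} ({(ratio - 1) * 100:+.1f}% deviation)")
```

    A C/E within ±10% would fall in the "good agreement" band quoted in the highlights.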

  2. IKS Deliverable - D1.1 Report: Design of the Semantic Benchmark Experiment

    OpenAIRE

    Kowatsch, Tobias; Maass, Wolfgang; Damjanovic, Violeta; Behrendt, Wernher; Gruber, Andreas; Nagel, Benjamin; Sauer, Stefan; Engels, Gregor

    2009-01-01

    Public Deliverable - The objective of this deliverable is to design a benchmark model for Content Management Systems (CMSs) in order to identify relevant requirements for the Interactive Knowledge Stack (IKS). The IKS will be a layered set of software components and specifications with the goal to improve the interaction with knowledge objects of CMSs by using Semantic Web technologies. In contrast to projects that benchmarked rather technical aspects of CMSs, we propose a model that evaluate...

  3. LITMUS: An Open Extensible Framework for Benchmarking RDF Data Management Solutions

    OpenAIRE

    Thakkar, Harsh; Dubey, Mohnish; Sejdiu, Gezim; Ngomo, Axel-Cyrille Ngonga; Debattista, Jeremy; Lange, Christoph; Lehmann, Jens; Auer, Sören; Vidal, Maria-Esther

    2016-01-01

    Developments in the context of Open, Big, and Linked Data have led to an enormous growth of structured data on the Web. To keep up with the pace of efficient consumption and management of the data at this rate, many data management solutions have been developed for specific tasks and applications. We present LITMUS, a framework for benchmarking data management solutions. LITMUS goes beyond classical storage benchmarking frameworks by allowing for analysing the performance of frameworks across...

  4. Influencia del ritmo circadiano sobre el rendimiento físico en ejercicios aeróbicos y anaeróbicos. Una revisión

    OpenAIRE

    Bueno Pérez, Ángel Javier

    2015-01-01

    Chronobiology is the science that studies the physiological changes governed by circadian rhythms, which are internal variations that repeat every 24 hours. The aim of the present study was to carry out a systematic review of the influence of circadian variability on aerobic and anaerobic cardiorespiratory and motor performance. The results of this review indicate that sports performance is affected ...

  5. Efecto del ejercicio físico aeróbico sobre los niveles séricos de adiponectina y leptina en mujeres posmenopáusicas

    OpenAIRE

    Aranzález, Luz Helena; Mockus Sivickas, Ismena; Ramírez, Doris; Mancera, Erica; García, Óscar

    2011-01-01

    Background. Changes in body weight are accompanied by changes in the circulating levels of adipokines such as adiponectin and leptin. During postmenopause there is a tendency toward weight gain. Physical exercise, which acts on adipose tissue and on cardiovascular risk factors, is recommended as part of the treatment of overweight and obesity. Objective. To determine the effects of controlled aerobic physical exercise on serum levels of ...

  6. Desarrollo de la potencia aeróbica con dos diferentes métodos de entrenamiento en alumnos del polimodal

    OpenAIRE

    Díaz, Miguel Ángel

    2009-01-01

    The purpose of this work is to determine whether two weekly training stimuli over 6 weeks, using 2 different endurance-training methods, develop aerobic power in 16- and 17-year-old male students attending the polimodal level. Exclusion criteria were age, practising a (cyclic or acyclic) sport at federated level, or suffering from cardiovascular disease, diabetes or asthma. The sample was divided into two experimental groups of 30 subjects and a control group of 20. ...

  7. Mejora de defensas antioxidantes mediante ejercicio aeróbico en mujeres con síndrome metabólico

    OpenAIRE

    Manuel Rosety-Rodríguez; Antonio Díaz-Ordoñez; Ignacio Rosety; Gabriel Fornieles; Alejandra Camacho-Molina; Natalia García; Miguel Angel Rosety; Francisco J. Ordoñez

    2012-01-01

    It is currently accepted that oxidative damage plays an essential role in the pathogenesis of metabolic syndrome. Recent studies propose oxidative damage as a therapeutic target in metabolic syndrome. Accordingly, our objective was to improve the total antioxidant status (TAS) of women with metabolic syndrome through aerobic exercise. One hundred women with metabolic syndrome according to the criteria of the National Cholesterol Education Program ...

  8. El funcionamiento de un pulso político : discurso, endeudamiento y política en el "de aere alieno, de vi et de ambitu" de Clodio

    OpenAIRE

    Rosilló López, Cristina

    2007-01-01

    Cicero's speech De aere alieno Milonis was delivered in response to a strong attack on Milo by Clodius. One of the accusations, based on a deliberate distortion of the amount of his debts, has no parallel in the Roman politics of the period. This article analyses how this charge, far from being harmless, contained a powerful offensive against Milo that could have led to his conviction before the elections. At the same time, it shows ...

  9. Benchmark analyses of prediction models for pipe wall thinning

    International Nuclear Information System (INIS)

    In recent years, the importance of utilizing a prediction model or code for the management of pipe wall thinning has been recognized. In Japan Society of Mechanical Engineers (JSME), a working group on prediction methods has been set up within a research committee for studying the management of pipe wall-thinning. Some prediction models for pipe wall thinning were reviewed by benchmark analyses in terms of their prediction characteristics and the specifications required for their use in the management of pipe wall thinning in power generation facilities. This paper introduces the prediction models selected from the existing flow-accelerated corrosion and/or liquid droplet impingement erosion models. The experimental results and example of the results of wall thickness measurement used as benchmark data are also mentioned. (author)

  10. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  11. The implementation of benchmarking process in marketing education services by Ukrainian universities

    Directory of Open Access Journals (Sweden)

    G.V. Okhrimenko

    2016-03-01

    The aim of the article. The main task of this research is to consider the theoretical and practical aspects of benchmarking at universities. First, the researcher identified the essence of benchmarking: it involves comparing the characteristics of a college or university with those of the leading competitors in the sector and copying proven designs. Benchmarking tries to eliminate the fundamental problem of comparison – the impossibility of being better than the one from whom the solution is borrowed. Benchmarking therefore involves self-evaluation, including the systematic collection of data and information, with a view to making relevant comparisons of strengths and weaknesses across performance aspects. Benchmarking identifies gaps in performance, seeks new approaches for improvement, monitors progress, reviews benefits and assures the adoption of good practices. The results of the analysis. There are five types of benchmarking: internal, competitive, functional, procedural and general. Benchmarking is treated as a systematically applied process with specific stages: (1) identification of the study object; (2) identification of businesses for comparison; (3) selection of data collection methods; (4) determination of performance variations and of the levels of future results; (5) communication of the benchmarking results; (6) development of an implementation plan, initiation of the implementation, and monitoring of the implementation; (7) definition of new benchmarks. The researcher presented the results of the practical use of this benchmarking algorithm at universities. In particular, monitoring and SWOT analysis identified competitive practices used at Ukrainian universities. The main criteria for determining the benchmarking potential of universities were: (1) the presence of new teaching methods at universities; (2) the involvement of foreign lecturers and partner universities for cooperation; (3) the promotion of education services to target groups; (4) violation of

  12. Human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional Test and Maintenance (TPM) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used, to obtain insight in the causes and magnitude of the variability observed in the results, to try to identify preferred human reliability assessment approaches and to get an understanding of the current state of the art in the field identifying the limitations that are still inherent to the different approaches

  13. Benchmarking in healthcare using aggregated indicators

    DEFF Research Database (Denmark)

    Traberg, Andreas; Jacobsen, Peter

    2010-01-01

    databases, the model is constructed as a comprehensive hierarchy of indicators. By aggregating the outcome of each indicator, the model is able to benchmark healthcare providing units. By assessing performance deeper in the hierarchy, a more detailed view of performance is obtained. The validity test of the...... model is performed at a Danish non-profit hospital, where four radiological sites are benchmarked against each other. Because of the multifaceted perspective on performance, the model proved valuable both as a benchmarking tool and as an internal decision support system....

  14. LAPUR-K BWR stability benchmark

    International Nuclear Information System (INIS)

    This paper documents the stability benchmark of the LAPUR-K code using the measurements taken at the Ringhals Unit 1 plant over four cycles of operation. This benchmark was undertaken to demonstrate the ability of LAPUR-K to calculate the decay ratios for both core-wide and regional mode oscillations. This benchmark contributes significantly to assuring that LAPUR-K can be used to define the exclusion region for the Monticello Plant in response to recent US Nuclear Regulatory Commission notices concerning oscillation observed at Boiling Water Reactor plants. Stability is part of Northern States Power Reload Safety Evaluation of the Monticello Plant
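
    For orientation, the decay ratio referred to here is, in its usual definition, the ratio of two consecutive peak amplitudes of the damped power oscillation; for a dominant oscillatory pole pair sigma +/- i*omega it reads

```latex
\mathrm{DR} \;=\; \frac{A_{n+1}}{A_{n}} \;=\; e^{\,2\pi\sigma/\omega},
\qquad \mathrm{DR} < 1 \ \text{(decaying, stable)}, \quad \mathrm{DR} \ge 1 \ \text{(unstable)}.
```

    This is quoted only as background; the benchmark itself compares decay ratios calculated by LAPUR-K against those derived from the Ringhals Unit 1 measurements.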

  15. Cause-specific long-term mortality in survivors of childhood cancer in Switzerland: A population-based study.

    Science.gov (United States)

    Schindler, Matthias; Spycher, Ben D; Ammann, Roland A; Ansari, Marc; Michel, Gisela; Kuehni, Claudia E

    2016-07-15

    Survivors of childhood cancer have a higher mortality than the general population. We describe cause-specific long-term mortality in a population-based cohort of childhood cancer survivors. We included all children diagnosed with cancer in Switzerland (1976-2007) at age 0-14 years, who survived ≥5 years after diagnosis and followed survivors until December 31, 2012. We obtained causes of death (COD) from the Swiss mortality statistics and used data from the Swiss general population to calculate age-, calendar year-, and sex-standardized mortality ratios (SMR), and absolute excess risks (AER) for different COD, by Poisson regression. We included 3,965 survivors and 49,704 person years at risk. Of these, 246 (6.2%) died, which was 11 times higher than expected (SMR 11.0). Mortality was particularly high for diseases of the respiratory (SMR 14.8) and circulatory system (SMR 12.7), and for second cancers (SMR 11.6). The pattern of cause-specific mortality differed by primary cancer diagnosis, and changed with time since diagnosis. In the first 10 years after 5-year survival, 78.9% of excess deaths were caused by recurrence of the original cancer (AER 46.1). Twenty-five years after diagnosis, only 36.5% (AER 9.1) were caused by recurrence, 21.3% by second cancers (AER 5.3) and 33.3% by circulatory diseases (AER 8.3). Our study confirms an elevated mortality in survivors of childhood cancer for at least 30 years after diagnosis with an increased proportion of deaths caused by late toxicities of the treatment. The results underline the importance of clinical follow-up continuing years after the end of treatment for childhood cancer. PMID:26950898
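
    The two rate measures used throughout this abstract have simple definitions. With O the observed number of deaths among survivors, E the number expected from general-population rates (matched on age, sex and calendar year), and PY the person-years at risk, and assuming the common convention of quoting the AER per 10,000 person-years (which appears consistent with the values cited):

```latex
\mathrm{SMR} = \frac{O}{E},
\qquad
\mathrm{AER} = \frac{O - E}{PY}\times 10^{4}\quad\text{(excess deaths per 10,000 person-years)}.
```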

  16. RIA Fuel Codes Benchmark - Volume 1

    International Nuclear Information System (INIS)

    Reactivity-initiated accident (RIA) fuel rod codes have been developed for a significant period of time and they all have shown their ability to reproduce some experimental results with a certain degree of adequacy. However, they sometimes rely on different specific modelling assumptions the influence of which on the final results of the calculations is difficult to evaluate. The NEA Working Group on Fuel Safety (WGFS) is tasked with advancing the understanding of fuel safety issues by assessing the technical basis for current safety criteria and their applicability to high burnup and to new fuel designs and materials. The group aims at facilitating international convergence in this area, including the review of experimental approaches as well as the interpretation and use of experimental data relevant for safety. As a contribution to this task, WGFS conducted a RIA code benchmark based on RIA tests performed in the Nuclear Safety Research Reactor in Tokai, Japan and tests performed or planned in CABRI reactor in Cadarache, France. Emphasis was on assessment of different modelling options for RIA fuel rod codes in terms of reproducing experimental results as well as extrapolating to typical reactor conditions. This report provides a summary of the results of this task. (authors)

  17. BENCHMARKING OF CT FOR PATIENT EXPOSURE OPTIMISATION.

    Science.gov (United States)

    Racine, Damien; Ryckx, Nick; Ba, Alexandre; Ott, Julien G; Bochud, François O; Verdun, Francis R

    2016-06-01

    Patient dose optimisation in computed tomography (CT) should be done using clinically relevant tasks when dealing with image quality assessments. In the present work, low-contrast detectability for an average patient morphology was assessed on 56 CT units, using a model observer applied on images acquired with two specific protocols of an anthropomorphic phantom containing spheres. Images were assessed using the channelised Hotelling observer (CHO) with dense difference of Gaussian channels. The results were computed by performing receiver operating characteristics analysis (ROC) and using the area under the ROC curve (AUC) as a figure of merit. The results showed a small disparity at a volume computed tomography dose index (CTDIvol) of 15 mGy depending on the CT units for the chosen image quality criterion. For 8-mm targets, AUCs were 0.999 ± 0.018 at 20 Hounsfield units (HU) and 0.927 ± 0.054 at 10 HU. For 5-mm targets, AUCs were 0.947 ± 0.059 and 0.702 ± 0.068 at 20 and 10 HU, respectively. The robustness of the CHO opens the way for CT protocol benchmarking and optimisation processes. PMID:26940439
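
    The figure of merit used above, the area under the ROC curve, can be estimated nonparametrically from the model-observer decision scores as the probability that a signal-present image scores higher than a signal-absent one. A minimal Python sketch (the scores are invented, not study data):

```python
import numpy as np

def auc_from_scores(signal_scores, noise_scores):
    """Mann-Whitney estimate of AUC: P(signal score > noise score), ties counted as 1/2."""
    s = np.asarray(signal_scores, float)[:, None]
    n = np.asarray(noise_scores, float)[None, :]
    return float(np.mean(s > n) + 0.5 * np.mean(s == n))

# Illustrative decision scores for sphere-present vs. sphere-absent images.
print(auc_from_scores([1.2, 0.8, 1.5, 0.9], [0.3, 0.7, 1.0, 0.2]))  # -> 0.875
```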

  18. Statistical benchmark for BosonSampling

    Science.gov (United States)

    Walschaers, Mattia; Kuipers, Jack; Urbina, Juan-Diego; Mayer, Klaus; Tichy, Malte Christopher; Richter, Klaus; Buchleitner, Andreas

    2016-03-01

    Boson samplers—set-ups that generate complex many-particle output states through the transmission of elementary many-particle input states across a multitude of mutually coupled modes—promise the efficient quantum simulation of a classically intractable computational task, and challenge the extended Church-Turing thesis, one of the fundamental dogmas of computer science. However, as in all experimental quantum simulations of truly complex systems, one crucial problem remains: how to certify that a given experimental measurement record unambiguously results from enforcing the claimed dynamics, on bosons, fermions or distinguishable particles? Here we offer a statistical solution to the certification problem, identifying an unambiguous statistical signature of many-body quantum interference upon transmission across a multimode, random scattering device. We show that statistical analysis of only partial information on the output state allows one to characterise the imparted dynamics through particle type-specific features of the emerging interference patterns. The relevant statistical quantifiers are classically computable, define a falsifiable benchmark for BosonSampling, and reveal distinctive features of many-particle quantum dynamics, which go much beyond mere bunching or anti-bunching effects.

  19. Entrenamiento de la capacidad aeróbica por medio de la terapia acuática en niños con parálisis cerebral tipo diplejía espástica

    OpenAIRE

    Nandy Fajardo-López; Fabiola Moscoso-Alvarado

    2013-01-01

    Background. Spastic diplegia cerebral palsy produces changes in the cardiovascular system that affect aerobic capacity. Aquatic therapy is an optimal therapeutic strategy both for managing this population and for training aerobic capacity, because of the physiological responses it elicits and because it makes it possible to place greater loads on the cardiovascular system with lower risk than on land. Objective. To identify the characteristics that ...

  20. Efectos de la distribución y secuencia en la organización de distintas tareas de entrenamiento para la mejora de la resistencia aeróbica

    OpenAIRE

    Clemente Suárez, Vicente Javier

    2010-01-01

    Numerous authors have investigated the effect of different training programmes on the performance of endurance athletes, but little has been studied about the effect of the distribution and sequencing of training tasks on the improvement of aerobic endurance, in terms of aerobic performance, spirometric variables, explosive and isokinetic leg strength parameters, and recovery and fatigue of the central nervous system. This doctoral thesis therefore aims to analyse...

  1. Efecto del ejercicio físico aeróbico sobre el consumo de oxígeno de mujeres primigestantes saludables: Estudio clínico aleatorizado

    OpenAIRE

    Robinson Ramírez-Vélez; Ana C. Aguilar de Plata; Mildrey Mosquera-Escudero; José G Ortega; Blanca Salazar; Isabella Echeverri; Wilmar Saldarriaga-Gil

    2011-01-01

    Objective: to evaluate, in healthy primigravid women, the effect of aerobic exercise on oxygen consumption. Materials and methods: randomized clinical trial in 64 healthy primigravid women between 16 and 20 weeks of gestation. Intervention group: aerobic exercise at 50% to 65% of maximum heart rate, for 45 min, 3 times per week over 16 weeks. Control group: usual physical activity. Measurements: oxygen consumption VO2max by walking test...

  2. International Handbook of Evaluated Criticality Safety Benchmark Experiments - ICSBEP (DVD), Version 2013

    International Nuclear Information System (INIS)

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various nuclear critical experiment facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculational techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirement and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span nearly 66,000 pages and contain 558 evaluations with benchmark specifications for 4,798 critical, near critical or subcritical configurations, 24 criticality alarm placement/shielding configurations with multiple dose points for each and 200 configurations that have been categorised as fundamental physics measurements that are relevant to criticality safety applications. New to the Handbook are benchmark specifications for Critical, Bare, HEU(93.2)- Metal Sphere experiments referred to as ORSphere that were performed by a team of experimenters at Oak Ridge National Laboratory in the early 1970's. A photograph of this assembly is shown on the front cover

  3. Resistance and uptake of cadmium by yeast, Pichia hampshirensis 4Aer, isolated from industrial effluent and its potential use in decontamination of wastewater.

    Science.gov (United States)

    Khan, Zaman; Rehman, Abdul; Hussain, Syed Z

    2016-09-01

    Pichia hampshirensis 4Aer is the first yeast ever used for the bioremediation of environmental cadmium (Cd(+2)); it could remove up to 22 mM/g and 28 mM/g Cd(+2) from aqueous medium at laboratory and large scales, respectively. The biosorption was found to be a function of temperature, solution pH, initial Cd(+2) concentration and biomass dosage. Competitive biosorption was investigated in binary and multi-metal systems, which showed a decrease in Cd(+2) biosorption with increasing concentrations of the competing metal ions, attributed to their higher electronegativity and larger radius. FTIR analysis revealed the active participation of amide and carbonyl moieties in Cd(+2) adsorption, confirmed by EDX analysis. Electron micrographs further indicated surface adsorption and an increased cell size due to intracellular Cd(+2) accumulation. Cd(+2) induced some metal-binding proteins as well as a prodigious increase in glutathione and other non-protein thiol levels, which is crucial for the yeast to withstand the oxidative stress generated by Cd(+2). Our experimental data were consistent with the Langmuir as well as the Freundlich isotherm models. The yeast obeyed a pseudo-second-order kinetic model, which makes it an effective biosorbent for Cd(+2). Its high bioremediation potential and the spontaneity and feasibility of the process make P. hampshirensis 4Aer a promising basis for green chemistry approaches to removing environmental Cd(+2). PMID:27268792
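
    For reference, the three models named above have the following standard forms, with q_e the equilibrium uptake, C_e the equilibrium Cd(+2) concentration, q_t the uptake at time t, and q_max, K_L, K_F, n and k_2 fitted constants:

```latex
q_e = \frac{q_{\max} K_L C_e}{1 + K_L C_e} \quad\text{(Langmuir)},
\qquad
q_e = K_F\, C_e^{1/n} \quad\text{(Freundlich)},
\qquad
\frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e} \quad\text{(pseudo-second-order)}.
```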

  4. Avaliação da biotratabilidade do efluentede branqueamento de polpa celulósicapor processos aeróbios e anaeróbios

    Directory of Open Access Journals (Sweden)

    Míriam Cristina Santos Amaral

    2013-09-01

    Effluents from the bleaching plant of kraft pulp production contain, in addition to high concentrations of organic matter in terms of Chemical Oxygen Demand (COD) and Biochemical Oxygen Demand (BOD) and colour, compounds of high toxicity, which makes the treatment of these effluents problematic. The aim of the present article is to assess the biotreatability of the acid and alkaline bleaching effluents of kraft pulp by aerobic and anaerobic processes through characterization using conventional and collective parameters. The results for inert COD, aerobic and anaerobic biodegradability, molar mass distribution, soluble microbial products and extracellular polymeric substances indicated the low biotreatability of the effluents

  5. Reactor fuel depletion benchmark of TINDER

    International Nuclear Information System (INIS)

    Highlights: • A reactor burnup benchmark of TINDER, coupling MCNP6 to CINDER2008, was performed. • TINDER is a poor candidate for fuel depletion calculations using its current libraries. • Data library modification is necessary if fuel depletion is desired from TINDER. - Abstract: Accurate burnup calculations are key to proper nuclear reactor design, fuel cycle modeling, and disposal estimations. The TINDER code, originally designed for activation analyses, has been modified to handle full burnup calculations, including the widely used predictor–corrector feature. In order to properly characterize the performance of TINDER for this application, a benchmark calculation was performed. Although the results followed the trends of past benchmarked codes for a UO2 PWR fuel sample from the Takahama-3 reactor, there were obvious deficiencies in the final result, likely in the nuclear data library that was used. Isotopic comparisons versus experiment and past code benchmarks are given, as well as hypothesized areas of deficiency and future work

  6. Numerical methods: Analytical benchmarking in transport theory

    International Nuclear Information System (INIS)

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered

  7. Medicare Contracting - Redacted Benchmark Metric Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services has compiled aggregate national benchmark cost and workload metrics using data submitted to CMS by the AB MACs and...

  8. Benchmarking Optimization Software with Performance Profiles

    OpenAIRE

    Dolan, Elizabeth D.; Moré, Jorge J.

    2001-01-01

    We propose performance profiles (distribution functions for a performance metric) as a tool for benchmarking and comparing optimization software. We show that performance profiles combine the best features of other tools for performance evaluation.
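
    The construction is compact enough to state here. With t_{p,s} the run time (or any other cost metric) of solver s on problem p, the performance ratio and the performance profile of s over a problem set P are

```latex
r_{p,s} = \frac{t_{p,s}}{\min_{s'} t_{p,s'}},
\qquad
\rho_s(\tau) = \frac{1}{|P|}\,\Bigl|\bigl\{\, p \in P \;:\; r_{p,s} \le \tau \,\bigr\}\Bigr|,
```

    so that rho_s(1) is the fraction of problems on which solver s is fastest, and rho_s(tau) for large tau approaches its overall success rate.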

  9. Results of the isotopic concentrations of WWER calculation Burnup Credit Benchmark NO.2 (CB2)

    International Nuclear Information System (INIS)

    The purpose of this document is to present the results for the nuclide concentrations of the WWER Burnup Credit Benchmark No. 2 (CB2) that were obtained at the Nuclear Technology Center of Cuba with the available codes and libraries. The CB2 benchmark specification, as the second phase of the WWER burnup credit benchmark, is summarized in [1]. The CB2 benchmark focuses on the WWER burnup credit study proposed at the '97 Atomic Energy Research symposium [2]. The results obtained are the isotopic concentrations of the spent fuel as a function of burnup and cooling time. The point-depletion code ORIGEN2 [3] was used for the calculation of the spent fuel concentrations. This work also includes the results obtained by other codes [4]. (Author)

  10. The Development of a Benchmark Tool for NoSQL Databases

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2013-07-01

    The aim of this article is to describe a proposed benchmark methodology and software application targeted at measuring the performance of both SQL and NoSQL databases. These represent results obtained during PhD research (being actually part of a larger application intended for NoSQL database management). A reason for aiming at this particular subject is the near-complete lack of benchmarking tools for NoSQL databases, except for YCSB [1] and a benchmark tool made specifically to compare Redis to RavenDB. While there are several well-known benchmarking systems for classical relational databases (starting with the canonical TPC-C, TPC-E and TPC-H), on the other side of the database world such tools are mostly missing and seriously needed.

  11. The Activities of the International Criticality Safety Benchmark Evaluation Project (ICSBEP)

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, Joseph Blair

    2001-10-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) – Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Yugoslavia, Kazakhstan, Spain, and Israel are now participating. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled “International Handbook of Evaluated Criticality Safety Benchmark Experiments”. The 2001 Edition of the Handbook contains benchmark specifications for 2642 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data.

  12. Lesson learned from the SARNET wall condensation benchmarks

    International Nuclear Information System (INIS)

    Highlights: • The results of the benchmarking activity on wall condensation are reported. • The work was performed in the frame of SARNET. • General modelling techniques for condensation are discussed. • Results of the University of Pisa and of other benchmark participants are discussed. • The lesson learned is drawn. - Abstract: The prediction of condensation in the presence of noncondensable gases has received continuing attention in the frame of the Severe Accident Research Network of Excellence, both in the first (2004–2008) and in the second (2009–2013) EC integrated projects. Among the reasons why this basic phenomenon, addressed by classical treatments dating from the first decades of the last century, is considered so relevant is the interest in developing updated CFD models for reactor containment analysis, which requires validating the available modelling techniques at a different level. In the frame of SARNET, benchmarking activities were undertaken taking advantage of the work performed at different institutions in setting up and developing models for steam condensation in conditions of interest for nuclear reactor containment. Four steps were performed in the activity, involving: (1) an idealized problem freely inspired by the actual conditions occurring in an experimental facility, CONAN, installed at the University of Pisa; (2) a first comparison with experimental data purposely collected by the CONAN facility; (3) a second comparison with data available from experimental campaigns performed in the same apparatus before the inclusion of the activities in SARNET; (4) a third exercise involving data obtained at lower mixture velocity than in previous campaigns, aimed at providing conditions closer to those addressed in reactor containment analyses. The last step of the benchmarking activity required changing the configuration of the experimental apparatus to achieve the lower flow rates involved in the new test specifications. The

  13. IKE contribution to the one-dimensional LWR shielding benchmark of ANS

    Energy Technology Data Exchange (ETDEWEB)

    Al Malah, K.

    1982-04-01

    The IKE computational methodology for solving radiation transport problems is applied to determine the radiation levels at specific locations in a one-dimensional LWR representation. Solutions are submitted for two variations of a PWR problem. They contain detailed descriptions of the approach and the appropriate calculational parameters. The objectives of the benchmark problem are: to provide a documented specification to permit intercomparisons of computational techniques tested with this benchmark, to determine fluence levels at the reactor pressure vessel, to calculate radiation-induced changes in the mechanical properties and to evaluate the adequacy of specific cross-section data sets.

  14. Benchmarking carbon emissions performance in supply chains

    OpenAIRE

    Acquaye, Adolf; Genovese, Andrea; Barrett, John W.; Koh, Lenny

    2014-01-01

    Purpose – The paper aims to develop a benchmarking framework to address issues such as supply chain complexity and visibility, geographical differences and non-standardized data, ensuring that the entire supply chain environmental impact (in terms of carbon) and resource use for all tiers, including domestic and import flows, are evaluated. Benchmarking has become an important issue in supply chain management practice. However, challenges such as supply chain complexity and visibility, geogra...

  15. EPRI depletion benchmark calculations using PARAGON

    International Nuclear Information System (INIS)

    Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • Benchmarks cover a wide range of enrichments, burnups, cooling times, and burnable absorbers, and different depletion and storage conditions. • Results from PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII based data reduces the excess conservatism and brings the predictions closer to benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse’s burnup credit methodology uses PARAGON™ (Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivity has been used to create 11 benchmark cases for 10, 20, 30, 40, 50, and 60 GWd/MTU and 3 cooling times 100 h, 5 years, and 15 years. These benchmark cases are analyzed with PARAGON and the SCALE package and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to assess that the 5% decrement approach is conservative for determining depletion uncertainty
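
    As background (this is the standard definition, not a quotation from the paper), the depletion reactivity decrement being benchmarked is the loss of reactivity between fresh and depleted fuel at otherwise identical conditions,

```latex
\rho(B) = \frac{k(B) - 1}{k(B)},
\qquad
\Delta\rho_{\mathrm{dep}}(B) = \rho(0) - \rho(B),
```

    and the "5% decrement approach" mentioned above takes 5% of this decrement as the depletion reactivity uncertainty.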

  16. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
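
    The scoring idea in item (2) can be illustrated with a deliberately schematic toy example (this is not the scoring system proposed in the paper; the variables, weights and score mapping are invented): normalize each data-model mismatch by the observed variability, map it to a 0-1 skill score, and combine weighted scores across variables.

```python
import numpy as np

def skill_score(model, obs):
    """Map a variability-normalized RMSE to a 0-1 score (1 = perfect agreement)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    spread = np.std(obs) + 1e-12           # avoid division by zero
    return float(np.exp(-rmse / spread))   # one of many possible score mappings

# Toy "benchmark" data for two variables, with arbitrary weights.
scores = {
    "gpp": skill_score([1.1, 2.0, 2.9], [1.0, 2.1, 3.0]),
    "soil_moisture": skill_score([0.32, 0.28, 0.25], [0.30, 0.29, 0.27]),
}
weights = {"gpp": 0.6, "soil_moisture": 0.4}
overall = sum(weights[v] * scores[v] for v in scores)
print(scores, round(overall, 3))
```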

  17. The MCNP6 Analytic Criticality Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Codes Group

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  18. Benchmark Two-Good Utility Functions

    OpenAIRE

    de Jaegher, K.

    2007-01-01

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own price elasticity. It is shown how each of these utility functions arises from a simple graphical construction based on a single given indifference curve. Also, it is shown that possessors of such utility function...
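
    The classical benchmark cases referred to here can be written out explicitly; assuming interior solutions, quasilinear utility gives good x zero income elasticity, while Cobb-Douglas utility gives both goods unit income elasticity:

```latex
u(x,y) = v(x) + y \;\Rightarrow\; v'(x^{*}) = \frac{p_x}{p_y}
\quad\text{($x^{*}$ independent of income $m$: zero income elasticity for $x$)},

u(x,y) = x^{\alpha}y^{1-\alpha} \;\Rightarrow\; x^{*} = \frac{\alpha m}{p_x},\;\; y^{*} = \frac{(1-\alpha)m}{p_y}
\quad\text{(unit income elasticity for both goods)}.
```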

  19. Bundesländer-Benchmarking 2002

    OpenAIRE

    Blancke, Susanne; Hedrich, Horst; Schmid, Josef

    2002-01-01

    The Bundesländer Benchmarking 2002 is based on an analysis of selected labour-market and economic indicators in the German federal states. Three benchmarkings were carried out using the radar-chart method: one considering only labour-market indicators, one considering only economic indicators, and one examining a mix of labour-market and economic indicators. The states were compared with one another in cross-section at two points in time –...

  20. Benchmarking Deep Reinforcement Learning for Continuous Control

    OpenAIRE

    Duan, Yan; Chen, Xi; Houthooft, Rein; Schulman, John; Abbeel, Pieter

    2016-01-01

    Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suit...

  1. Distributional benchmarking in tax policy evaluations

    OpenAIRE

    Thor O. Thoresen; Zhiyang Jia; Peter J. Lambert

    2013-01-01

    Given an objective to exploit cross-sectional micro data to evaluate the distributional effects of tax policies over a time period, the practitioner of public economics will find that the relevant literature offers a wide variety of empirical approaches. For example, studies vary with respect to the definition of individual well-being and to what extent explicit benchmarking techniques are utilized to describe policy effects. The present paper shows how the concept of distributional benchmark...

  2. Features and technology of enterprise internal benchmarking

    OpenAIRE

    A. V. Dubodelova; Yurynets, O. V.

    2013-01-01

    The aim of the article. The aim of the article is to generalize the characteristics, objectives and advantages of internal benchmarking. The sequence of stages of the internal benchmarking technology is formed. It is focused on the continuous improvement of enterprise processes through the implementation of existing best practices. The results of the analysis. The business activity of domestic enterprises in a crisis business environment has to focus on the best success factors of their structural units by using standard rese...

  3. Overview of CSEWG shielding benchmark problems

    Energy Technology Data Exchange (ETDEWEB)

    Maerker, R.E.

    1979-01-01

    The fundamental philosophy behind the choice of CSEWG shielding benchmarks is that the accuracy of a certain range of cross-section data be adequately tested. The benchmarks, therefore, consist of measurements and calculations of these measurements. Calculations for which there are no measurements provide little information on the adequacy of the data, although they can perhaps indicate the sensitivity of results to variations in data.

  4. Dukovany NPP fuel cycle benchmark definition

    International Nuclear Information System (INIS)

    A new benchmark based on the Dukovany NPP Unit-2 operating history is defined. The main goal of this benchmark is to compare results obtained by different codes used for neutron-physics calculations in the organisations interested in this task. Everything needed is described in this paper, or references are given where this information can be obtained. Input data are presented in tables, and the requested output data format for automatic processing is described (Authors)

  5. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes......-related achievement. We attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes....

  6. Under Pressure Benchmark for DDBMS Availability

    OpenAIRE

    Fior, Alessandro Gustavo; Meira, Jorge Augusto; Cunha De Almeida, Eduardo; Coelho, Ricardo Gonçalves; Didonet Del Fabro, Marcos; Le Traon, Yves

    2013-01-01

    The availability of Distributed Database Management Systems (DDBMS) is related to the probability of being up and running at a given point in time, and to managing failures. One well-known and widely used mechanism to ensure availability is replication, which carries a performance impact from maintaining data replicas across the DDBMS's machine nodes. Benchmarking can be used to measure such impact. In this article, we present a benchmark that evaluates the performance of DDBMS, considering availab...

  7. DWEB: A Data Warehouse Engineering Benchmark

    OpenAIRE

    Darmont, Jérôme; Bentayeb, Fadila; Boussaïd, Omar

    2005-01-01

    Data warehouse architectural choices and optimization techniques are critical to decision support query performance. To facilitate these choices, the performance of the designed data warehouse must be assessed. This is usually done with the help of benchmarks, which can either help system users comparing the performances of different systems, or help system engineers testing the effect of various design choices. While the TPC standard decision support benchmarks address the first point, they ...

  8. MPI Benchmarking Revisited: Experimental Design and Reproducibility

    OpenAIRE

    Hunold, Sascha; Carpen-Amarie, Alexandra

    2015-01-01

    The Message Passing Interface (MPI) is the prevalent programming model used on today's supercomputers. Therefore, MPI library developers are looking for the best possible performance (shortest run-time) of individual MPI functions across many different supercomputer architectures. Several MPI benchmark suites have been developed to assess the performance of MPI implementations. Unfortunately, the outcome of these benchmarks is often neither reproducible nor statistically sound. To overcome th...
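
    As an illustration of the measurement-design issue raised here (a generic sketch using mpi4py, not the authors' benchmark code; the message size and repetition count are arbitrary choices), one reproducible pattern is to barrier-synchronize before each repetition and keep the whole distribution of timings rather than a single mean:

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
buf = np.ones(1 << 16)                 # 64 Ki doubles; an arbitrary message size
samples = []
for _ in range(100):                   # repetition count is an arbitrary choice
    comm.Barrier()                     # common start point for all ranks
    t0 = MPI.Wtime()
    comm.Allreduce(MPI.IN_PLACE, buf, op=MPI.SUM)
    samples.append(MPI.Wtime() - t0)

local_median = float(np.median(samples))
# Report the slowest rank's median as the run time of the collective for this size.
global_median = comm.allreduce(local_median, op=MPI.MAX)
if comm.Get_rank() == 0:
    print(f"Allreduce, 64 Ki doubles: median {global_median * 1e6:.1f} us over 100 repetitions")
```

    Keeping all samples allows medians, confidence intervals and run-to-run variation to be reported, which is the kind of statistically sound summary the abstract argues for.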

  9. Big Data in AER

    Science.gov (United States)

    Kregenow, Julia M.

    2016-01-01

    Penn State University teaches Introductory Astronomy to more undergraduates than any other institution in the U.S. Using a standardized assessment instrument, we have pre-/post- tested over 20,000 students in the last 8 years in both resident and online instruction. This gives us a rare opportunity to look for long term trends in the performance of our students during a period in which online instruction has burgeoned.

  10. Karma1.1 benchmark calculations for the numerical benchmark problems and the critical experiments

    International Nuclear Information System (INIS)

    The transport lattice code KARMA 1.1 has been developed at KAERI for the reactor physics analysis of pressurized water reactors. This program includes a multi-group library processed from ENDF/B-VI R8 and also utilizes macroscopic cross sections for the benchmark problems. Benchmark calculations were performed for the C5G7 and KAERI benchmark problems given with seven-group cross sections, for various fuels loaded in the operating pressurized water reactors in South Korea, and for critical experiments including CE, B&W and KRITZ. The benchmark results show that KARMA 1.1 works reasonably well. (author)

  11. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  12. Clinically meaningful performance benchmarks in MS

    Science.gov (United States)

    Motl, Robert W.; Scagnelli, John; Pula, John H.; Sosnoff, Jacob J.; Cadavid, Diego

    2013-01-01

    Objective: Identify and validate clinically meaningful Timed 25-Foot Walk (T25FW) performance benchmarks in individuals living with multiple sclerosis (MS). Methods: Cross-sectional study of 159 MS patients first identified candidate T25FW benchmarks. To characterize the clinical meaningfulness of T25FW benchmarks, we ascertained their relationships to real-life anchors, functional independence, and physiologic measurements of gait and disease progression. Candidate T25FW benchmarks were then prospectively validated in 95 subjects using 13 measures of ambulation and cognition, patient-reported outcomes, and optical coherence tomography. Results: T25FW of 6 to 7.99 seconds was associated with a change in occupation due to MS, occupational disability, walking with a cane, and needing “some help” with instrumental activities of daily living; T25FW ≥8 seconds was associated with collecting Supplemental Security Income and government health care, walking with a walker, and inability to do instrumental activities of daily living. During prospective benchmark validation, we trichotomized data by T25FW benchmark (<6, 6-7.99, and ≥8 seconds) ranges of performance. PMID:24174581
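
    The trichotomization described above maps a walk time onto one of three benchmark categories. A small helper of the kind sketched below makes that mapping explicit; the cut-points follow the abstract (<6, 6-7.99 and ≥8 seconds), while the labels and example times are illustrative.

```python
def t25fw_category(seconds: float) -> str:
    """Assign a Timed 25-Foot Walk time to a benchmark range (<6, 6-7.99, >=8 s)."""
    if seconds < 6.0:
        return "<6 s"
    elif seconds < 8.0:
        return "6-7.99 s"
    return ">=8 s"

# Example: group hypothetical walk times by benchmark category.
times = [4.8, 6.5, 7.9, 9.2]
print({t: t25fw_category(t) for t in times})
```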

  13. Action-Oriented Benchmarking: Concepts and Tools

    Energy Technology Data Exchange (ETDEWEB)

    California Energy Commission; Mathew, Paul; Mills, Evan; Mathew, Paul; Piette, Mary Ann; Bourassa, Norman; Brook, Martha

    2008-02-13

    Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article--Part 1 of a two-part series--we describe an 'action-oriented benchmarking' approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful metrics and to identify, screen and prioritize potential efficiency improvements. This opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking system and associated software tool-EnergyIQ. The benchmarking methods, visualizations, and user interface design are informed by an end-user needs assessment survey and best-practice guidelines from ASHRAE.
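
    At the whole-building level, the usual first step is to normalize consumption into an energy use intensity (EUI) and place it against a peer distribution before drilling down to end uses. The sketch below shows only that ranking step; the peer EUI values and floor area are made-up numbers, and the percentile ranking is a generic approach rather than the EnergyIQ implementation.

```python
import numpy as np

def eui(annual_kwh: float, floor_area_m2: float) -> float:
    """Whole-building energy use intensity in kWh per square metre per year."""
    return annual_kwh / floor_area_m2

# Hypothetical peer distribution of EUIs for loosely similar buildings (kWh/m2/yr).
peer_euis = np.array([95, 110, 120, 135, 150, 160, 175, 190, 210, 240], dtype=float)

my_eui = eui(annual_kwh=420_000, floor_area_m2=3_000)   # illustrative building
percentile = (peer_euis < my_eui).mean() * 100          # share of peers that use less
print(f"EUI = {my_eui:.0f} kWh/m2/yr, higher than {percentile:.0f}% of peers")
```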

  14. OECD/DOE/CEA VVER-1000 Coolant Transient Benchmark. Summary Record of the First Workshop (V1000-CT1)

    International Nuclear Information System (INIS)

    The first workshop for the VVER-1000 Coolant Transient (V1000CT) Benchmark was hosted by the Commissariat a l'Energie Atomique, Centre d'Etudes de Saclay, France. The V1000CT benchmark defines standard problems for validation of coupled three-dimensional (3-D) neutron-kinetics/system thermal-hydraulics codes for application to Soviet-designed VVER-1000 reactors using actual plant data without any scaling. The overall objective is to assess computer codes used in the safety analysis of VVER power plants, specifically for their use in reactivity transient simulations in a VVER-1000. The V1000CT benchmark consists of two phases: V1000CT-1 - simulation of the switching on of one main coolant pump (MCP) while the other three MCPs are in operation, and V1000CT-2 - calculation of coolant mixing tests and Main Steam Line Break (MSLB) scenario. Further background information on this benchmark can be found at the OECD/NEA benchmark web site. The purpose of the first workshop was to review the benchmark activities after the Starter Meeting held last year in Dresden, Germany: to discuss the participants' feedback and modifications introduced in the Benchmark Specifications on Phase 1; to present and to discuss modelling issues and preliminary results from the three exercises of Phase 1; to discuss the modelling issues of Exercise 1 of Phase 2; and to define the work plan and schedule in order to complete the two phases.

  15. Benchmarking von Krankenhausinformationssystemen – eine vergleichende Analyse deutschsprachiger Benchmarkingcluster

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. In recent years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system’s and information management’s costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme observes both general conditions and examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries in recent years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their cooperation forms. The benchmarking clusters also deal with different benchmarking subjects. Assessing the costs and quality of application systems, physical data processing systems, organizational structures of information management and IT services processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  16. Validation study of SRAC2006 code system based on evaluated nuclear data libraries for TRIGA calculations by benchmarking integral parameters of TRX and BAPL lattices of thermal reactors

    International Nuclear Information System (INIS)

    Highlights: ► To validate the SRAC2006 code system for TRIGA neutronics calculations. ► TRX and BAPL lattices are treated as standard benchmarks for this purpose. ► To compare the calculated results with experiment as well as MCNP values in this study. ► The study demonstrates a good agreement with the experiment and the MCNP results. ► Thus, this analysis reflects the validation study of the SRAC2006 code system. - Abstract: The goal of this study is to present the validation study of the SRAC2006 code system based on evaluated nuclear data libraries ENDF/B-VII.0 and JENDL-3.3 for neutronics analysis of TRIGA Mark-II Research Reactor at AERE, Bangladesh. This study is achieved through the analysis of integral parameters of TRX and BAPL benchmark lattices of thermal reactors. In integral measurements, the thermal reactor lattices TRX-1, TRX-2, BAPL-UO2-1, BAPL-UO2-2 and BAPL-UO2-3 are treated as standard benchmarks for validating/testing the SRAC2006 code system as well as nuclear data libraries. The integral parameters of the said lattices are calculated using the collision probability transport code PIJ of the SRAC2006 code system at room temperature 20 °C based on the above libraries. The calculated integral parameters are compared to the measured values as well as the MCNP values based on the Chinese evaluated nuclear data library CENDL-3.0. It was found that in most cases, the values of integral parameters demonstrate a good agreement with the experiment and the MCNP results. In addition, the group constants in SRAC format for TRX and BAPL lattices in fast and thermal energy range respectively are compared between the above libraries and it was found that the group constants are identical with very insignificant difference. Therefore, this analysis reflects the validation study of the SRAC2006 code system based on evaluated nuclear data libraries JENDL-3.3 and ENDF/B-VII.0 and can also be essential to implement further neutronics calculations of

  17. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  18. Higher education information technology management benchmarking in Europe

    OpenAIRE

    Juult, Janne

    2013-01-01

    Objectives of the Study: This study aims to facilitate the rapprochement of the European higher education benchmarking projects towards a unified European benchmarking project. Total of four higher education IT benchmarking projects are analysed by comparing their categorisation of benchmarking indicators and their data manipulation processes. Four select benchmarking projects are compared in this fashion for the first time. The focus is especially on the Finnish Bencheit project's point o...

  19. Revaluering benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    Over the past decade, benchmarking has increasingly gained a foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly, that what may seem valuable, is actually abstaining researchers and practitioners from studying and questioning the concept objectively. This paper addresses the underlying nature of benchmarking, and accounts for the importance of focusing attention on the sociological impacts benchmarking has in organizations. To understand these sociological impacts, benchmarking research needs to transcend the...

  20. Criteria of benchmark selection for efficient flexible multibody system formalisms

    Directory of Open Access Journals (Sweden)

    Valášek M.

    2007-10-01

    Full Text Available The paper deals with the selection process of benchmarks for testing and comparing efficient flexible multibody formalisms. The existing benchmarks are briefly summarized. The purposes for benchmark selection are investigated. The result of this analysis is the formulation of the criteria of benchmark selection for flexible multibody formalisms. Based on them the initial set of suitable benchmarks is described. Besides that the evaluation measures are revised and extended.

  1. Summary of twelfth session of the AER Working Group F - 'Spent Fuel Transmutations' and third meeting of INPRO Project RMI - 'Meeting energy needs in the period of raw materials insufficiency during the twenty first century'

    International Nuclear Information System (INIS)

    Information is presented about developments during 2009-2010 in the problems of spent fuel transmutation and the development of future nuclear reactors. Some critical views on the trends of the forthcoming work are given by the coordinator of the work within the AER cooperation. (Author)

  2. Exercício aeróbico agudo restaura a concentração de triptofano em cérebro de ratos com hiperfenilalaninemia

    Directory of Open Access Journals (Sweden)

    Priscila Nicolao Mazzola

    2012-10-01

    Full Text Available INTRODUCTION: Phenylketonuria (PKU) is characterized by a deficiency of the enzyme phenylalanine hydroxylase, causing an accumulation of phenylalanine. Early diagnosis and adherence to a phenylalanine-restricted diet are important to prevent the harmful effects of hyperphenylalaninemia. Failure to adhere strictly to the diet causes, among other effects, an imbalance among the neutral amino acids that share the same transporter as phenylalanine across the blood-brain barrier, thereby reducing the entry of tryptophan, the precursor of serotonin, into the brain. This neurotransmitter has been implicated in the regulation of mood states, and its increased production has been linked to central fatigue in individuals undergoing prolonged exercise. Physical exercise raises the level of free tryptophan in the blood, which facilitates its influx into the brain and may therefore be useful in hyperphenylalaninemic states. OBJECTIVE: To evaluate whether aerobic exercise is able to normalize tryptophan concentrations in the brain of rats with hyperphenylalaninemia. METHODS: Thirty-two rats were divided into sedentary (Sed) and exercise (Exe) groups, each subdivided into control (SAL) and hyperphenylalaninemia (PKU) subgroups. Hyperphenylalaninemia was induced by the administration of alpha-methylphenylalanine and phenylalanine for three days, while the SAL groups received saline. The Exe groups performed a single session of aerobic exercise lasting 60 min at a speed of 12 m.min-1. RESULTS: The brain tryptophan concentration in the PKU groups was significantly lower than in the SAL groups, both Sed and Exe, consistent with the hyperphenylalaninemic condition. Exercise increased the brain tryptophan concentration compared with the sedentary animals. The most interesting finding was that the brain tryptophan concentration in the ExePKU group did not differ from SedSAL. CONCLUSION: The results indicate a

  3. Aptidão aeróbia e amplitude dos domínios de intensidade de exercício no ciclismo

    Directory of Open Access Journals (Sweden)

    Renato Aparecido Corrêa Caritá

    2013-08-01

    Full Text Available INTRODUCTION: The determination of exercise intensity domains has important implications for the prescription of aerobic training and for the design of experimental protocols. OBJECTIVE: To analyze the effects of aerobic fitness level on the amplitude of the exercise intensity domains during cycling. METHODS: Twelve cyclists (CIC), 11 runners (COR) and eight untrained individuals (NT) performed the following protocols on different days: 1) an incremental test to determine the lactate threshold (LL), the maximal oxygen uptake (VO2max) and its corresponding intensity (IVO2max); 2) three constant-load tests to exhaustion at 95, 100 and 110% IVO2max to determine the critical power (PC); 3) tests to exhaustion to determine the upper boundary of the severe domain (Isup). The amplitudes of the domains (moderate, heavy and severe, up to Isup) were expressed as a percentage of Isup (VO2). RESULTS: The amplitude of the moderate domain was similar between CIC (52 ± 8%) and COR (47 ± 4%) and significantly greater in CIC than in NT (41 ± 7%). The heavy domain was significantly smaller in CIC (17 ± 6%) than in COR (27 ± 6%) and NT (27 ± 9%). For the severe domain, no significant differences were found between CIC (31 ± 7%), COR (26 ± 5%) and NT (31 ± 7%). CONCLUSION: The heavy exercise domain is more sensitive to changes determined by the level of aerobic fitness, and the principle of movement specificity must be respected when a high degree of physiological adaptation is intended.

  4. Treinamento aeróbico prévio à compressão nervosa: análise da morfometria muscular de ratos

    Directory of Open Access Journals (Sweden)

    Elisangela Lourdes Artifon

    2013-02-01

    Full Text Available INTRODUCTION: Sciatica originates from compression of the ischiatic nerve and results in pain, paresthesia, decreased muscle strength and hypotrophy. Physical exercise is recognized for the prevention and rehabilitation of injuries, but under overload it can increase the risk of injury and consequent functional deficit. OBJECTIVE: To evaluate the effects of aerobic training performed prior to an experimental model of sciatica on morphometric parameters of the soleus muscles of rats. MATERIALS AND METHODS: 18 rats were divided into three groups: sham (a 30-second dip in water); regular exercise (swimming, ten minutes daily); and progressive aerobic training (swimming for progressively longer periods, from ten to 60 minutes daily). At the end of six weeks of exercise, the rats were submitted to the experimental model of sciatica. On the third day after the injury, they were euthanized and their soleus muscles were dissected, weighed and prepared for histological analysis. The variables analyzed were muscle weight, cross-sectional area and mean diameter of the muscle fibers. RESULTS: A statistically significant difference was observed in all groups when the control muscle was compared with the muscle submitted to the ischiatic injury. The between-group analysis showed no statistically significant difference for any of the variables analyzed. CONCLUSION: Neither regular physical exercise nor aerobic training produced preventive or aggravating effects on the muscular consequences of functional inactivity after sciatica.

  5. Atividade do sistema antioxidante e desenvolvimento de aerênquima em raízes de milho 'Saracura' Antioxidant system activity and aerenchyma formation in 'Saracura' maize roots

    Directory of Open Access Journals (Sweden)

    Fabricio José Pereira

    2010-05-01

    Full Text Available This work aimed to assess the influence of successive selection cycles of 'Saracura' maize on the enzyme activity of the antioxidant system and the relationship of these enzymes with the aerenchyma development capacity of this variety. Seeds of 18 intercalated selection cycles of the 'Saracura' maize and of the cultivar BR 107, sensitive to hypoxia, were sown in pots in the greenhouse. The plants were submitted to intermittent soil flooding every two days. After 60 days, the roots were sampled and analyses were done for the guaiacol peroxidase, ascorbate peroxidase and catalase activities and for the capacity of the plants of each cycle to develop aerenchyma. The plants showed modifications in enzyme activity along the cycles, with an increase in ascorbate peroxidase activity and a decrease in catalase and guaiacol peroxidase activities. A greater capacity to develop aerenchyma was also observed in the later selection cycles. The reduction in the activity of the antioxidant system enzymes seems to be related to an imbalance in H2O2 decomposition.

  6. Features and technology of enterprise internal benchmarking

    Directory of Open Access Journals (Sweden)

    A.V. Dubodelova

    2013-06-01

    Full Text Available The aim of the article. The aim of the article is to generalize the characteristics, objectives and advantages of internal benchmarking and to form the sequence of stages of internal benchmarking technology, which is focused on continuous improvement of enterprise processes by implementing existing best practices. The results of the analysis. In a crisis business environment, the business activity of domestic enterprises has to focus on the best success factors of their structural units, using a standard research assessment of their performance and of their innovative experience in practice. A modern method of satisfying those needs is internal benchmarking; according to Bain & Co, internal benchmarking is one of the three most common methods of business management. The features and benefits of benchmarking are defined in the article, and the sequence and methodology for implementing the individual stages of benchmarking technology projects are formulated. The authors define benchmarking as a strategic orientation towards the best achievement by comparing performance and working methods with a standard. It covers the study of the research, production and distribution organization, management and marketing methods of reference objects in order to identify innovative practices and implement them in a particular business. The development of benchmarking at domestic enterprises requires analysis of its theoretical bases and of practical experience; selecting the best experience helps to develop recommendations for its application in practice. It is also essential to classify its types, identify its characteristics, study appropriate areas of use and develop a methodology of implementation. The structure of internal benchmarking objectives includes: promoting research into, and establishment of, minimum acceptable levels of efficiency of the processes and activities available at the enterprise; identification of current problems and areas that need improvement without involvement of foreign experience

  7. Toxicological benchmarks for wildlife: 1994 Revision

    International Nuclear Information System (INIS)

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report
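
    In the screening tier described above, the comparison is usually expressed as a hazard quotient: the estimated exposure divided by the toxicological benchmark, with values above 1 flagging a contaminant for the baseline assessment. The sketch below illustrates that screening step only; the benchmark values and exposure estimates are placeholders, not values from the report.

```python
# Hypothetical screening of dietary exposure estimates against wildlife benchmarks
# (all values are placeholders, in mg contaminant per kg body weight per day).
benchmarks = {"cadmium": 1.0, "mercury": 0.032, "zinc": 160.0}
exposures = {"cadmium": 0.4, "mercury": 0.05, "zinc": 12.0}

for chemical, benchmark in benchmarks.items():
    hq = exposures[chemical] / benchmark          # hazard quotient
    flag = "retain for baseline assessment" if hq > 1.0 else "screen out"
    print(f"{chemical}: HQ = {hq:.2f} -> {flag}")
```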

  8. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report.

  9. Benchmarks and statistics of entanglement dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Tiersch, Markus

    2009-09-04

    In the present thesis we investigate how the quantum entanglement of multicomponent systems evolves under realistic conditions. More specifically, we focus on open quantum systems coupled to the (uncontrolled) degrees of freedom of an environment. We identify key quantities that describe the entanglement dynamics, and provide efficient tools for its calculation. For quantum systems of high dimension, entanglement dynamics can be characterized with high precision. In the first part of this work, we derive evolution equations for entanglement. These formulas determine the entanglement after a given time in terms of a product of two distinct quantities: the initial amount of entanglement and a factor that merely contains the parameters that characterize the dynamics. The latter is given by the entanglement evolution of an initially maximally entangled state. A maximally entangled state thus benchmarks the dynamics, and hence allows for the immediate calculation or - under more general conditions - estimation of the change in entanglement. Thereafter, a statistical analysis supports that the derived (in-)equalities describe the entanglement dynamics of the majority of weakly mixed and thus experimentally highly relevant states with high precision. The second part of this work approaches entanglement dynamics from a topological perspective. This allows for a quantitative description with a minimum amount of assumptions about Hilbert space (sub-)structure and environment coupling. In particular, we investigate the limit of increasing system size and density of states, i.e. the macroscopic limit. In this limit, a universal behaviour of entanglement emerges following a ''reference trajectory'', similar to the central role of the entanglement dynamics of a maximally entangled state found in the first part of the present work. (orig.)
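
    The evolution equation summarized above factorizes the entanglement after the dynamics into the initial entanglement times a channel-dependent factor given by the evolution of a maximally entangled state. The numerical sketch below checks this factorization for the concurrence of an initially pure two-qubit state under one-sided amplitude damping; the damping strength, the initial state and the restriction to a pure-state input are illustrative assumptions of the demonstration, not the full scope of the thesis.

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ yy @ rho.conj() @ yy)))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def damp_second_qubit(rho, gamma):
    """Apply an amplitude-damping channel to the second qubit only."""
    k0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
    k1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
    ops = [np.kron(np.eye(2), k) for k in (k0, k1)]
    return sum(k @ rho @ k.conj().T for k in ops)

gamma = 0.3                                                    # illustrative damping strength
psi = np.zeros(4); psi[0], psi[3] = np.sqrt(0.7), np.sqrt(0.3) # a|00> + b|11>
rho0 = np.outer(psi, psi.conj())
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_bell = np.outer(bell, bell.conj())

lhs = concurrence(damp_second_qubit(rho0, gamma))
rhs = concurrence(rho0) * concurrence(damp_second_qubit(rho_bell, gamma))
print(lhs, rhs)   # the two values agree: the evolved entanglement factorizes
```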

  10. BN-600 hybrid core benchmark analyses

    International Nuclear Information System (INIS)

    Benchmark analyses for the hybrid BN-600 reactor that contains three uranium enrichment zones and one plutonium zone in the core, have been performed within the frame of an IAEA sponsored Coordinated Research Project. The results for several relevant reactivity parameters obtained by the participants with their own state-of-the-art basic data and codes, were compared in terms of calculational uncertainty, and their effects on the ULOF transient behavior of the hybrid BN-600 core were evaluated. The comparison of the diffusion and transport results obtained for the homogeneous representation generally shows good agreement for most parameters between the RZ and HEX-Z models. The burnup effect and the heterogeneity effect on most reactivity parameters also show good agreement for the HEX-Z diffusion and transport theory results. A large difference noticed for the sodium and steel density coefficients is mainly due to differences in the spatial coefficient predictions for non fuelled regions. The burnup reactivity loss was evaluated to be 0.025 (4.3 $) within ∼ 5.0% standard deviation. The heterogeneity effect on most reactivity coefficients was estimated to be small. The heterogeneity treatment reduced the control rod worth by 2.3%. The heterogeneity effect on the k-eff and control rod worth appeared to differ strongly depending on the heterogeneity treatment method. A substantial spread noticed for several reactivity coefficients did not give a significant impact on the transient behavior prediction. This result is attributable to compensating effects between several reactivity effects and the specific design of the partially MOX fuelled hybrid core. (author)

  11. Benchmarks and statistics of entanglement dynamics

    International Nuclear Information System (INIS)

    In the present thesis we investigate how the quantum entanglement of multicomponent systems evolves under realistic conditions. More specifically, we focus on open quantum systems coupled to the (uncontrolled) degrees of freedom of an environment. We identify key quantities that describe the entanglement dynamics, and provide efficient tools for its calculation. For quantum systems of high dimension, entanglement dynamics can be characterized with high precision. In the first part of this work, we derive evolution equations for entanglement. These formulas determine the entanglement after a given time in terms of a product of two distinct quantities: the initial amount of entanglement and a factor that merely contains the parameters that characterize the dynamics. The latter is given by the entanglement evolution of an initially maximally entangled state. A maximally entangled state thus benchmarks the dynamics, and hence allows for the immediate calculation or - under more general conditions - estimation of the change in entanglement. Thereafter, a statistical analysis supports that the derived (in-)equalities describe the entanglement dynamics of the majority of weakly mixed and thus experimentally highly relevant states with high precision. The second part of this work approaches entanglement dynamics from a topological perspective. This allows for a quantitative description with a minimum amount of assumptions about Hilbert space (sub-)structure and environment coupling. In particular, we investigate the limit of increasing system size and density of states, i.e. the macroscopic limit. In this limit, a universal behaviour of entanglement emerges following a ''reference trajectory'', similar to the central role of the entanglement dynamics of a maximally entangled state found in the first part of the present work. (orig.)

  12. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1.1 with MCNP6

    Science.gov (United States)

    van der Marck, Steven C.

    2012-12-01

    Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous energy Monte Carlo neutronics code MCNP, i.e. MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory. Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series. Such
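
    When a library is judged against hundreds of criticality benchmarks, the comparison is typically reduced to deviations of the calculated keff from the benchmark value, expressed in pcm and summarized per benchmark category. The sketch below shows that bookkeeping step only; the benchmark names and keff values are fabricated placeholders, not results from the paper.

```python
import statistics

# Placeholder (benchmark keff, calculated keff) pairs for a small set of cases.
cases = {
    "leu-comp-therm-001": (1.0000, 1.0012),
    "leu-comp-therm-002": (1.0000, 0.9991),
    "mix-met-fast-001":   (1.0000, 1.0035),
}

def pcm_deviation(k_bench: float, k_calc: float) -> float:
    """Reactivity difference between calculation and benchmark in pcm."""
    return (1.0 / k_bench - 1.0 / k_calc) * 1.0e5

deviations = [pcm_deviation(kb, kc) for kb, kc in cases.values()]
print("mean deviation %.0f pcm, spread %.0f pcm"
      % (statistics.mean(deviations), statistics.stdev(deviations)))
```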

  13. BN-600 hybrid core benchmark Phase III results

    International Nuclear Information System (INIS)

    The main objective of the CRP on Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects is to validate, verify and improve methodologies and computer codes used for the calculation of reactivity coefficients in fast reactors aiming at using weapons-grade plutonium for energy production in fast reactors. The BN-600 hybrid reactor was taken as the benchmark. Earlier, two-dimensional and three-dimensional diffusion theory BN-600 benchmark calculations were done. This report describes the results of the burnup and heterogeneous calculations done for the proposed BN-600 hybrid core model as a part of the Phase III benchmark. The BN-600 benchmark has been analyzed at the beginning of cycle (BOC) with the XSET98 data set and 2-D and 3-D diffusion codes. The 2-D results are compared with the earlier results using the older CV2M data set. The core has been burnt for one cycle using the 3-D burnup code FARCOBAB. The burnt core parameters have also been analyzed in 3-D. Heterogeneity effects on reactivity have been computed at BOC. Relative to the use of CV2M data, use of XSET98 data results in increased magnitudes of fuel Doppler worth and sodium density worth. Compared to the 2-D results, in the 3-D results the Keff is lower by about 220 pcm, the sodium density worth is higher by about 30% and the steel density worth becomes nearly zero or slightly positive from a negative value in 2-D. The conversion ratio at BOC is 0.669 as computed in 3-D. The burnup reactivity loss due to 140 days at full power (1470 MWt) is 0.0252. The conversion ratio at end of cycle (EOC) is 0.701. The other parameters have been estimated with SHR up condition as desired in the phase III benchmark specifications. Fuel Doppler worth is 7% more negative, sodium density worth is 16% less positive and steel density worth is more negative at EOC compared to BOC. Absorber rod (SHR) worth is higher by 4.9 % at EOC. Heterogeneity effect (core and SHR combined) on multiplication factor is small. For mid SHR

  14. Standardized benchmarking in the quest for orthologs.

    Science.gov (United States)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P; Schreiber, Fabian; da Silva, Alan Sousa; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Jensen, Lars Juhl; Martin, Maria J; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E; Thomas, Paul D; Sonnhammer, Erik; Dessimoz, Christophe

    2016-05-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882
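
    Underneath such a benchmarking service, each method's predicted ortholog pairs are scored against a reference set with the usual precision-recall machinery, which makes the trade-off mentioned above explicit per application. A minimal scoring sketch follows; the gene pairs are invented placeholders.

```python
def precision_recall(predicted: set, reference: set):
    """Precision, recall and F1 of predicted ortholog pairs against a reference set."""
    tp = len(predicted & reference)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Invented example pairs (gene identifiers are placeholders).
reference = {("geneA1", "geneB1"), ("geneA2", "geneB2"), ("geneA3", "geneB3")}
predicted = {("geneA1", "geneB1"), ("geneA2", "geneB4"), ("geneA3", "geneB3")}
print(precision_recall(predicted, reference))
```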

  15. Standardized benchmarking in the quest for orthologs

    DEFF Research Database (Denmark)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador;

    2016-01-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods.

  16. Energy benchmarking of South Australian WWTPs.

    Science.gov (United States)

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment. PMID:23656950
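
    Energy benchmarks of the kind applied here are normally expressed as specific consumption per population equivalent and year, so each plant can be placed against a guide value for its size class. The sketch below shows that normalization and comparison; the guide values and plant data are illustrative assumptions, not the German benchmarks used in the study.

```python
# Illustrative specific-energy guide values in kWh per population equivalent (PE) per year,
# keyed by plant size class (assumed numbers, not the published German benchmarks).
guide_values = {"small": 55.0, "medium": 40.0, "large": 30.0}

plants = [  # hypothetical plant records: (name, size class, annual kWh, population equivalents)
    ("plant_a", "medium", 2_400_000, 50_000),
    ("plant_b", "large", 9_500_000, 250_000),
]

for name, size, annual_kwh, pe in plants:
    specific = annual_kwh / pe                      # kWh per PE per year
    gap = specific - guide_values[size]
    status = "above benchmark (optimisation potential)" if gap > 0 else "at or below benchmark"
    print(f"{name}: {specific:.1f} kWh/PE/yr, {status}")
```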

  17. AGENT code - neutron transport benchmark examples

    International Nuclear Information System (INIS)

    The paper focuses on description of representative benchmark problems to demonstrate the versatility and accuracy of the AGENT (Arbitrary Geometry Neutron Transport) code. AGENT couples the method of characteristics and R-functions allowing true modeling of complex geometries. AGENT is optimized for robustness, accuracy, and computational efficiency for 2-D assembly configurations. The robustness of R-function based geometry generator is achieved through the hierarchical union of the simple primitives into more complex shapes. The accuracy is comparable to Monte Carlo codes and is obtained by following neutron propagation through true geometries. The computational efficiency is maintained through a set of acceleration techniques introduced in all important calculation levels. The selected assembly benchmark problems discussed in this paper are: the complex hexagonal modular high-temperature gas-cooled reactor, the Purdue University reactor and the well known C5G7 benchmark model. (author)

  18. Shielding Integral Benchmark Archive and Database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, Bernadette Lugue [ORNL]; Grove, Robert E [ORNL]; Kodeli, I. [International Atomic Energy Agency (IAEA)]; Sartori, Enrico [ORNL]; Gulliford, J. [OECD Nuclear Energy Agency]

    2011-01-01

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  19. Shielding Integral Benchmark Archive and Database (SINBAD)

    International Nuclear Information System (INIS)

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  20. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    The purpose of this article is to benchmark different optimization solvers when applied to various finite element based structural topology optimization problems. An extensive and representative library of minimum compliance, minimum volume, and mechanism design problem instances for different sizes is developed for this benchmarking. The problems are based on a material interpolation scheme combined with a density filter. Different optimization solvers including Optimality Criteria (OC), the Method of Moving Asymptotes (MMA) and its globally convergent version GCMMA, the interior point ... profiles conclude that general solvers are as efficient and reliable as classical structural topology optimization solvers. Moreover, the use of exact Hessians in SAND formulations generally produces designs with better objective function values. However, with the benchmarked implementations solving...

  1. Benchmark field study of deep neutron penetration

    Science.gov (United States)

    Morgan, J. F.; Sale, K.; Gold, R.; Roberts, J. H.; Preston, C. C.

    1991-06-01

    A unique benchmark neutron field has been established at the Lawrence Livermore National Laboratory (LLNL) to study deep penetration neutron transport. At LLNL, a tandem accelerator is used to generate a monoenergetic neutron source that permits investigation of deep neutron penetration under conditions that are virtually ideal to model, namely the transport of mono-energetic neutrons through a single material in a simple geometry. General features of the Lawrence Tandem (LATAN) benchmark field are described with emphasis on neutron source characteristics and room return background. The single material chosen for the first benchmark, LATAN-1, is a steel representative of Light Water Reactor (LWR) Pressure Vessels (PV). Also included is a brief description of the Little Boy replica, a critical reactor assembly designed to mimic the radiation doses from the atomic bomb dropped on Hiroshima, and its use in neutron spectrometry.

  2. Computational benchmark for deep penetration in iron

    International Nuclear Information System (INIS)

    A benchmark for calculation of neutron transport through iron is now available based upon a rigorous Monte Carlo treatment of ENDF/B-IV and ENDF/B-V cross sections. The currents, flux, and dose (from monoenergetic 2, 14, and 40 MeV sources) have been tabulated at various distances through the slab using a standard energy group structure. This tabulation is available in a Los Alamos Scientific Laboratory report. The benchmark is simple to model and should be useful for verifying the adequacy of one-dimensional transport codes and multigroup libraries for iron. This benchmark also provides useful insights regarding neutron penetration through iron and displays differences in fluxes calculated with ENDF/B-IV and ENDF/B-V data bases

  3. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    Full Text Available The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population, and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource secure supplies becomes critical. When making changes to "internal" demands the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and including "external" gardening demands is investigated. This includes the impacts (in isolation and in combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2) and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are made throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn for the robustness of the proposed system.
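
    The band-rating idea above reduces to computing per-capita consumption, crediting any locally harvested supply, and mapping the result onto a band. The sketch below illustrates this; the band thresholds, runoff coefficient and input figures are illustrative assumptions rather than values from the paper or from the Code for Sustainable Homes.

```python
def rwh_yield_l_per_day(roof_m2: float, annual_rain_mm: float, runoff_coeff: float = 0.8) -> float:
    """Approximate rainwater-harvesting yield: roof area x rainfall x runoff coefficient."""
    return roof_m2 * annual_rain_mm * runoff_coeff / 365.0   # litres per day

def water_band(l_per_person_per_day: float) -> str:
    """Map per-capita mains consumption onto an illustrative A-D band."""
    thresholds = [(80, "A"), (105, "B"), (130, "C")]          # assumed band boundaries
    for limit, band in thresholds:
        if l_per_person_per_day <= limit:
            return band
    return "D"

occupants, mains_use = 3, 360.0                               # litres per household per day (assumed)
offset = rwh_yield_l_per_day(roof_m2=50, annual_rain_mm=800)  # credited local supply
net_per_person = max(mains_use - offset, 0.0) / occupants
print(f"net use {net_per_person:.0f} l/person/day -> band {water_band(net_per_person)}")
```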

  4. Efecto de una intervención ACT sobre la resistencia aeróbica y evitación experiencial en marchistas

    Directory of Open Access Journals (Sweden)

    María Clara Rodríguez Salazar

    2015-12-01

    Full Text Available The purpose of this study was to identify the effect of an Acceptance and Commitment Therapy (ACT) intervention on aerobic endurance and experiential avoidance behaviour in a group of race walkers from Bogotá. A pretest-posttest design with a control group was used. The sample consisted of ten race walkers of both sexes, with a mean age of 16.70 and an age range of 15 to 20 years, belonging to the Liga de Atletismo de Bogotá and selected by convenience. The 3000 m test and the Acceptance and Action Questionnaire (AAQ) were used as measurement instruments. The ACT intervention was carried out in four sessions in which the contents defined by the authors of the intervention (Wilson and Luciano, 2002) were developed. For the data analysis, non-parametric statistics were used, namely the Mann-Whitney U test. The results show greater aerobic endurance in the 3000 m test at posttest in the experimental group compared with the control group, as well as greater acceptance of negative internal events.
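
    The group comparison reported above relies on the Mann-Whitney U test, which compares two independent samples without assuming normality. A minimal sketch with scipy is shown below; the 3000 m times are invented placeholder data, not the study's measurements.

```python
from scipy.stats import mannwhitneyu

# Invented posttest 3000 m times in seconds (placeholders, not study data).
experimental = [812, 798, 805, 790, 801]
control = [831, 845, 820, 838, 827]

# Two-sided Mann-Whitney U test for a difference between the groups.
stat, p_value = mannwhitneyu(experimental, control, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```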

  5. Reactor group constants and benchmark test

    International Nuclear Information System (INIS)

    The evaluated nuclear data files such as JENDL, ENDF/B-VI and JEF-2 are validated by analyzing critical mock-up experiments for various type reactors and assessing applicability for nuclear characteristics such as criticality, reaction rates, reactivities, etc. This is called Benchmark Testing. In the nuclear calculations, the diffusion and transport codes use the group constant library which is generated by processing the nuclear data files. In this paper, the calculation methods of the reactor group constants and benchmark test are described. Finally, a new group constants scheme is proposed. (author)

  6. Benchmarking of European power network companies

    International Nuclear Information System (INIS)

    A European benchmark has been conducted among 63 grid companies to obtain insight into the degree of efficiency of these companies and to identify the main cost drivers. The benchmark shows that, based on the full distribution cost, the performance differs greatly from company to company. The cost of the worst performer is five times higher than that of the best performer. Dutch grid operators turn out to work relatively efficiently compared to other European companies. Consumers benefit from the consequently lower energy bills.

  7. Shielding integral benchmark archive and database

    International Nuclear Information System (INIS)

    SINBAD (Shielding integral benchmark archive and database) is a new electronic database developed to store a variety of radiation shielding benchmark data so that users can easily incorporate the data into their calculations. SINBAD is an excellent data source for users who require the quality assurance necessary in developing cross-section libraries or radiation transport codes. The future needs of the scientific community are best served by the electronic database format of SINBAD and its user-friendly interface, combined with its data accuracy and integrity. It has been designed to be able to include data from nuclear reactor shielding, fusion blankets and accelerator shielding experiments. (authors)

  8. Toxicological benchmarks for wildlife: 1996 Revision

    International Nuclear Information System (INIS)

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets

  9. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li17Pb83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the tritium breeding ratio to group structure and weighting spectrum increases as the thickness and Li enrichment decrease with up to 20% discrepancies for thin natural Li17Pb83 blankets. (author)

  10. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt;

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stays confidential; the banks as clients learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much...
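
    The confidentiality property rests on secret sharing: each party's figure is split into random shares so that no single server sees the input, yet sums (and hence benchmark statistics) can be computed on the shares. The toy sketch below shows additive secret sharing over a prime field; it illustrates only the sharing principle, not the actual protocol or system built in the paper.

```python
import secrets

PRIME = 2_147_483_647  # a Mersenne prime used as the illustrative field modulus

def share(value: int, n_parties: int) -> list[int]:
    """Split a value into n additive shares modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % PRIME

# Two banks secret-share confidential performance figures among three servers.
bank_inputs = [1234, 5678]
shared = [share(v, 3) for v in bank_inputs]

# Each server adds the shares it holds; only the combined sum is ever reconstructed.
sum_shares = [sum(col) % PRIME for col in zip(*shared)]
print(reconstruct(sum_shares))   # 6912, without any server seeing an individual input
```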

  11. Benchmark testing of 233U evaluations

    International Nuclear Information System (INIS)

    In this paper we investigate the adequacy of available 233U cross-section data (ENDF/B-VI and JENDL-3) for calculation of critical experiments. An ad hoc revised 233U evaluation is also tested and appears to give results which are improved relative to those obtained with either ENDF/B-VI or JENDL-3 cross sections. Calculations of keff were performed for ten fast benchmarks and six thermal benchmarks using the three cross-section sets. Central reaction-rate-ratio calculations were also performed

  12. Toxicological benchmarks for wildlife: 1996 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  13. Developing and Using Benchmarks for Eddy Current Simulation Codes Validation to Address Industrial Issues

    Science.gov (United States)

    Mayos, M.; Buvat, F.; Costan, V.; Moreau, O.; Gilles-Pascaud, C.; Reboud, C.; Foucher, F.

    2011-06-01

    To achieve performance demonstration, which is a legal requirement for the qualification of NDE processes applied on French nuclear power plants, the use of modeling tools is a valuable support, provided that the employed models have been previously validated. To achieve this, in particular for eddy current modeling, a validation methodology based on the use of specific benchmarks close to the actual industrial issue has to be defined. Nonetheless, considering the high variability in code origin and complexity, the feedback from experience on actual cases has shown that it was critical to define simpler generic and public benchmarks in order to perform a preliminary selection. A specific Working Group has been launched in the frame of COFREND, the French Association for NDE, resulting in the definition of several benchmark problems. This action is now ready for mutualization with similar international approaches.

  14. BN-600 fully MOX fuelled core benchmark analyses (Phase 4). Draft synthesis report - Revision 1

    International Nuclear Information System (INIS)

    A benchmark analysis of a BN-600 fully mixed oxide (MOX) fuelled core design with sodium plenum above the core has been performed as an extension to the study of the BN-600 hybrid uranium oxide (UOX)/MOX fuelled core carried out during 1999-2001. This work was carried out within the IAEA sponsored Co-ordinated Research Project (CRP) on 'Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects'. This benchmark analysis retains the general objective of the CRP which is to validate, verify and improve methodologies and computer codes used for the calculation of reactivity coefficients in fast reactors aiming at enhancing the utilization of plutonium and minor actinides. The scope of the benchmark is to reduce the uncertainties of safety relevant reactor physics parameter calculations of MOX fuelled fast reactors and hence to validate and improve data and methods involved in such analyses. In previous benchmark analyses of the BN-600 hybrid core that closely conforms to a traditional configuration, the comparative analyses showed that sufficient accuracy is achieved using the diffusion theory approximation, widely applied in fast reactor physics calculations. With the purpose of investigating a core configuration of full MOX fuel loading, a core model of the BN-600 type reactor, designed to reduce the sodium void effect by installing a sodium plenum above the core, was newly defined for the next benchmark study. The specifications and input data for the benchmark neutronics calculations were prepared by IPPE (Russia). The specifications given for the benchmark describe only a preliminary core model variant and represent only one conceptual approach to BN-600 full MOX core designs. The organizations participating in the BN-600 fully MOX fuelled core benchmark analysis are: ANL from the USA, CEA and SA from EU (France and the UK, respectively), CIAE from China, FZK/IKET from Germany, IGCAR from India, JNC from Japan, KAERI

  15. Using benchmarking for the primary allocation of EU allowances. An application to the German power sector

    Energy Technology Data Exchange (ETDEWEB)

    Schleich, J.; Cremer, C.

    2007-07-01

    Basing allocation of allowances for existing installations under the EU Emissions Trading Scheme on specific emission values (benchmarks) rather than on historic emissions may have several advantages. Benchmarking may recognize early action, provide higher incentives for replacing old installations and result in fewer distortions in case of updating, facilitate EU-wide harmonization of allocation rules or allow for simplified and more efficient closure rules. Applying an optimization model for the German power sector, we analyze the distributional effects of various allocation regimes across and within different generation technologies. Results illustrate that regimes with a single uniform benchmark for all fuels or with a single benchmark for coal- and lignite-fired plants imply substantial distributional effects. In particular, lignite- and old coal-fired plants would be made worse off. Under a regime with fuel-specific benchmarks for gas, coal, and lignite 50 % of the gas-fired plants and 4 % of the lignite and coal-fired plants would face an allowance deficit of at least 10 %, while primarily modern lignite-fired plants would benefit. Capping the surplus and shortage of allowances would further moderate the distributional effects, but may tarnish incentives for efficiency improvements and recognition of early action. (orig.)
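
    To illustrate the allocation arithmetic behind such regimes, the minimal Python sketch below computes a plant's allowance balance under hypothetical fuel-specific benchmarks; all plant data and benchmark values are invented for demonstration and are not taken from the paper's optimization model.

        # Illustrative sketch (not the paper's model): allowance allocation under
        # fuel-specific benchmarks and the resulting surplus or deficit per plant.
        # All numbers below are invented for demonstration purposes only.
        plants = [
            # name, fuel, annual output (MWh), specific emissions (t CO2/MWh)
            ("gas_ccgt_modern", "gas",     2_000_000, 0.35),
            ("coal_old",        "coal",    3_000_000, 0.95),
            ("lignite_modern",  "lignite", 4_000_000, 1.00),
        ]
        benchmarks = {"gas": 0.365, "coal": 0.75, "lignite": 0.97}  # t CO2/MWh

        for name, fuel, output, emission_factor in plants:
            allocation = benchmarks[fuel] * output   # allowances received
            emissions = emission_factor * output     # allowances needed
            balance = allocation - emissions
            print(f"{name:16s} balance: {balance / emissions:+.1%} of its emissions")

    A plant whose specific emissions exceed its fuel benchmark shows a negative balance (an allowance deficit), which is exactly the kind of distributional effect the paper quantifies across the German plant fleet.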

  16. Efeitos do treinamento de corrida em diferentes intensidades sobre a capacidade aeróbia e produção de lactato pelo músculo de ratos Wistar Running training effects in different intensities on the aerobic capacity and lactate production by the muscle of Wistar rats

    OpenAIRE

    Michel Barbosa de Araújo; Fúlvia de Barros Manchado-Gobatto; Fabrício Azevedo Voltarelli; Carla Ribeiro; Clécia Soares de Alencar Mota; Claudio Alexandre Gobatto; Maria Alice Rostom de Mello

    2009-01-01

    Studies associating indicators of aerobic capacity with the substrates produced by muscle metabolism in rats are rare. Therefore, the aim of the present study was to verify the effect of running training at two different intensities on the aerobic capacity and on lactate production by the isolated soleus muscle of rats. Wistar rats (90 days old) had their aerobic-anaerobic metabolic transition determined by the maximal lactate steady state (MLSS) test. Then, the ra...

  17. Generation of 69-group cross section library based on JEF data for TRIGA reactor calculations and its validation by analyzing the benchmark lattices of thermal reactors - 095

    International Nuclear Information System (INIS)

    A new executable, identified as NJOY99.0, has been created to generate the 69-group cross-section library for the reactor lattice transport code WIMS. The new code incorporates modifications in the WIMSR module of NJOY to generate the 69-group library, which will be used for TRIGA reactor calculations. The basic evaluated nuclear data file JEF-2.2 was used to generate the 69-group cross-section library in WIMS format. The results for TRX-1, TRX-2, BAPL-1, BAPL-2, and BAPL-3 benchmarks obtained by using the generated 69-group cross-section library from JEF-2.2 were analyzed. The following integral parameters were considered for the validation of the 69-group library: finite medium effective multiplication factor (keff), ratio of epithermal to thermal 238U captures (ρ28), ratio of epithermal to thermal 235U fission (δ25), ratio of 238U fission to 235U fission (δ28) and ratio of 238U captures to 235U fissions (C*). The TRX and BAPL benchmark lattices were modeled with optimized inputs, which were suggested in the final report of the WIMS Library Update Project (WLUP) Stage-I by Ravnik. The calculated results of the integral parameters of the TRX and BAPL benchmark lattices obtained by using the new version of code WIMSD-5B were found to be in good agreement with the experimental values. Besides, the TRX and BAPL calculation results showed that JEF-2.2 is reliable for thermal reactor calculations and validated the 69-group library, which will be used for the neutronic calculation of the TRIGA Mark-II research reactor at AERE, Savar, Dhaka, Bangladesh. (authors)

  18. Prague texture segmentation data generator and benchmark

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal

    2006-01-01

    Vol. 2006, No. 64 (2006), pp. 67-68. ISSN 0926-4981 R&D Projects: GA MŠk(CZ) 1M0572; GA AV ČR(CZ) 1ET400750407; GA AV ČR IAA2075302 Institutional research plan: CEZ:AV0Z10750506 Keywords: image segmentation * texture * benchmark * web Subject RIV: BD - Theory of Information

  19. Cleanroom Energy Efficiency: Metrics and Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    International SEMATECH Manufacturing Initiative; Mathew, Paul A.; Tschudi, William; Sartor, Dale; Beasley, James

    2010-07-07

    Cleanrooms are among the most energy-intensive types of facilities. This is primarily due to the cleanliness requirements that result in high airflow rates and system static pressures, as well as process requirements that result in high cooling loads. Various studies have shown that there is a wide range of cleanroom energy efficiencies and that facility managers may not be aware of how energy efficient their cleanroom facility can be relative to other cleanroom facilities with the same cleanliness requirements. Metrics and benchmarks are an effective way to compare one facility to another and to track the performance of a given facility over time. This article presents the key metrics and benchmarks that facility managers can use to assess, track, and manage their cleanroom energy efficiency or to set energy efficiency targets for new construction. These include system-level metrics such as air change rates, air handling W/cfm, and filter pressure drops. Operational data are presented from over 20 different cleanrooms that were benchmarked with these metrics and that are part of the cleanroom benchmark dataset maintained by Lawrence Berkeley National Laboratory (LBNL). Overall production efficiency metrics for cleanrooms in 28 semiconductor manufacturing facilities in the United States and recorded in the Fabs21 database are also presented.

  20. Benchmarking 2010: Trends in Education Philanthropy

    Science.gov (United States)

    Bearman, Jessica

    2010-01-01

    "Benchmarking 2010" offers insights into the current priorities, practices and concerns of education grantmakers. The report is divided into five sections: (1) Mapping the Education Grantmaking Landscape; (2) 2010 Funding Priorities; (3) Strategies for Leveraging Greater Impact; (4) Identifying Significant Trends in Education Funding; and (5)…

  1. Benchmarking 2011: Trends in Education Philanthropy

    Science.gov (United States)

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  2. What Is the Impact of Subject Benchmarking?

    Science.gov (United States)

    Pidcock, Steve

    2006-01-01

    The introduction of subject benchmarking led to fears of increased external intervention in the activities of universities and a more restrictive view of institutional autonomy, accompanied by an undermining of the academic profession, particularly through the perceived threat of the introduction of a national curriculum for higher education. For…

  3. Resolution for the Loviisa benchmark problem

    International Nuclear Information System (INIS)

    In the present paper, the Loviisa benchmark problem for cycles 11 and 8, and reactor blocks 1 and 2 of the Loviisa NPP, is calculated. This problem uses low-leakage reload patterns and was posed at the second thematic group of the TIC meeting held in Rheinsberg, GDR, in March 1989. The SPPS-1 coarse mesh code has been used for the calculations.

  4. Comparative benchmarks of full QCD algorithms

    International Nuclear Information System (INIS)

    We report performance benchmarks for several algorithms that we have used to simulate the Schroedinger functional with two flavors of dynamical quarks. They include hybrid and polynomial hybrid Monte Carlo with preconditioning. An appendix describes a method to deal with autocorrelations for nonlinear functions of primary observables as they are met here due to reweighting. (orig.)

  5. First CSNI numerical benchmark problem: comparison report

    International Nuclear Information System (INIS)

    In order to be able to make valid statements about a model's ability to describe a certain physical situation, it is indispensable that the numerical errors are much smaller than the modelling errors; otherwise, numerical errors could compensate for or exaggerate model errors in an uncontrollable way. Therefore, knowledge about the dependence of the numerical errors on discretization parameters (e.g. size of the spatial and temporal mesh) is required. In recognition of this need, numerical benchmark problems have been introduced. In the area of transient two-phase flow, numerical benchmarks are rather new. In June 1978, the CSNI Working Group on Emergency Core Cooling of Water Reactors proposed to OECD/CSNI to sponsor a First CSNI Numerical Benchmark exercise. By the end of October 1979, results of the computation had been received from 10 organisations in 10 different countries. Based on these contributions, a preliminary comparison report was prepared and distributed to the members of the CSNI Working Group on Emergency Core Cooling of Water Reactors, and to the contributors to the benchmark exercise. Comments on the preliminary comparison report by some contributors have subsequently been received. They have been considered in writing this final comparison report.

  6. Benchmarking Linked Open Data Management Systems

    NARCIS (Netherlands)

    Angles Rojas, R.; Pham, M.D.; Boncz, P.A.

    2014-01-01

    With inherent support for storing and analysing highly interconnected data, graph and RDF databases appear as natural solutions for developing Linked Open Data applications. However, current benchmarks for these database technologies do not fully attain the desirable characteristics in industrial-st

  7. Benchmarking in radiation protection in pharmaceutical industries

    International Nuclear Information System (INIS)

    A benchmarking on radiation protection in seven pharmaceutical companies in Germany and Switzerland was carried out. As the result relevant parameters describing the performance and costs of radiation protection were acquired and compiled and subsequently depicted in figures in order to make these data comparable. (orig.)

  8. Three-dimensional RAMA fluence methodology benchmarking

    International Nuclear Information System (INIS)

    This paper describes the benchmarking of the RAMA Fluence Methodology software, that has been performed in accordance with U. S. Nuclear Regulatory Commission Regulatory Guide 1.190. The RAMA Fluence Methodology has been developed by TransWare Enterprises Inc. through funding provided by the Electric Power Research Inst., Inc. (EPRI) and the Boiling Water Reactor Vessel and Internals Project (BWRVIP). The purpose of the software is to provide an accurate method for calculating neutron fluence in BWR pressure vessels and internal components. The Methodology incorporates a three-dimensional deterministic transport solution with flexible arbitrary geometry representation of reactor system components, previously available only with Monte Carlo solution techniques. Benchmarking was performed on measurements obtained from three standard benchmark problems which include the Pool Criticality Assembly (PCA), VENUS-3, and H. B. Robinson Unit 2 benchmarks, and on flux wire measurements obtained from two BWR nuclear plants. The calculated to measured (C/M) ratios range from 0.93 to 1.04 demonstrating the accuracy of the RAMA Fluence Methodology in predicting neutron flux, fluence, and dosimetry activation. (authors)

  9. Operational benchmarking of Japanese and Danish hospitals

    DEFF Research Database (Denmark)

    Traberg, Andreas; Itoh, Kenji; Jacobsen, Peter

    2010-01-01

    This benchmarking model is designed as an integration of three organizational dimensions suited for the healthcare sector. The model incorporates posterior operational indicators, and evaluates upon aggregation of performance. The model is tested upon seven cases from Japan and Denmark. Japanese...

  10. Benchmarking Declarative Approximate Selection Predicates

    CERN Document Server

    Hassanzadeh, Oktie

    2009-01-01

    Declarative data quality has been an active research topic. The fundamental principle behind a declarative approach to data quality is the use of declarative statements to realize data quality primitives on top of any relational data source. A primary advantage of such an approach is the ease of use and integration with existing applications. Several similarity predicates have been proposed in the past for common quality primitives (approximate selections, joins, etc.) and have been fully expressed using declarative SQL statements. In this thesis, new similarity predicates are proposed along with their declarative realization, based on notions of probabilistic information retrieval. Then, full declarative specifications of previously proposed similarity predicates in the literature are presented, grouped into classes according to their primary characteristics. Finally, a thorough performance and accuracy study comparing a large number of similarity predicates for data cleaning operations is performed.
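
    As a minimal illustration of what an approximate selection predicate does (the thesis itself realizes such predicates declaratively in SQL and proposes probabilistic-IR-based measures not shown here), a token-set Jaccard similarity can serve as the predicate:

        # Toy approximate selection: keep records whose name is "approximately
        # equal" to a query string under token-set Jaccard similarity. This is
        # only an illustration of the primitive, not the thesis' predicates.
        def jaccard(a: str, b: str) -> float:
            ta, tb = set(a.lower().split()), set(b.lower().split())
            return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

        records = ["Intl. Business Machines", "International Business Machines Corp",
                   "Microsoft Corporation", "Business Machines Intl."]
        query, threshold = "International Business Machines", 0.5
        print([r for r in records if jaccard(r, query) >= threshold])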

  11. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    The article analyses the evolution and the possibilities of application of benchmarking in the telecommunication sphere. It studies the essence of benchmarking on the basis of a generalisation of different scientists' approaches to the definition of this notion. In order to improve the activity of telecommunication operators, the article identifies the benchmarking technology and the main factors that determine the success of an operator in the modern market economy, as well as the mechanism of benchmarking and the component stages of carrying out benchmarking by a telecommunication operator. It analyses the telecommunication market and identifies the dynamics of its development and tendencies of change in the composition of telecommunication operators and providers. Having generalised the existing experience of benchmarking application, the article identifies the main types of benchmarking of telecommunication operators by the following features: by the level of conduct (branch, inter-branch and international benchmarking); by relation to participation in the conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  12. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    Change in construction is high on the agenda for the Danish government and a comprehensive effort is being made to improve quality and efficiency. This has led to a governmental effort to bring benchmarking into the Danish construction sector. This paper is an appraisal of benchmarking as it is presently carried out in the Danish construction sector. Many different perceptions of benchmarking and of the nature of the construction sector lead to uncertainty in how to perceive and use benchmarking, hence generating uncertainty in understanding the effects of benchmarking. This paper addresses these issues, and describes how the effects are closely connected to the perception of benchmarking, the intended users of the system and the application of the benchmarking results. The fundamental basis of this paper is taken from the development of benchmarking in the Danish construction...

  13. Paired cost comparison, a benchmarking technique for identifying areas of cost improvement in environmental restoration projects and waste management activities

    International Nuclear Information System (INIS)

    This paper provides an overview of benchmarking and how the Department of Energy's Office of Environmental Restoration and Waste Management used benchmarking techniques, specifically the Paired Cost Comparison, to identify cost disparities and their causes. The paper includes a discussion of the project categories selected for comparison and the criteria used to select the projects. Results are presented and factors that contribute to cost differences are discussed. Also, conclusions and the application of the Paired Cost Comparison are presented

  14. TINTE transient results for the OECD 400 MW PBMR benchmark

    International Nuclear Information System (INIS)

    The OECD 400 MW PBMR Benchmark was initiated in 2005, and currently several international participants within the HTGR community are participating in the Benchmark activities. The scope of the OECD 400 MW PBMR benchmark is to establish a well-defined problem, based on a common given set of cross sections, to compare methods and tools in HTGR core simulation and thermal hydraulics analysis. This paper presents the results of six transient cases, consisting of three Loss of Forced Cooling (LOFC) transients (Cases 1-3), a 100%-40%-100% power load-follow transient (Case 4) and a helium overcooling transient (Case 6). The focus of this study is an investigation into the PBMR primary system behavior during two Total Control Rod Withdrawal (TCRW) transients. Case 5a consists of a TCRW of all 24 Control Rods at the nominal withdrawal rate of 1 cm/s, while Case 5b is defined as a hypothetical very fast TCRW over 0.1 s, i.e. at a withdrawal rate of 2000 cm/s. Case 5a is based on a PBMR Safety Case Design Based Reactivity Accident, but it should be stressed that Case 5b is designed to investigate prompt criticality effects in the PBMR core, and as such has no realistic physical equivalent in the PBMR design. Since the thermal conductivities and specific heat of the fuel kernel and its associated layers influence the temperature gradient in the fuel, a sensitivity study varying these properties within realistic bounds is performed for Case 5b as part of this work. The results of this study showed that although variations of up to 12% are found between the peak fission power levels for the various sub-cases, the maximum fuel kernel temperatures vary by less than 5%, which is within the statistical uncertainty bandwidth of the TINTE 400 MW model. (authors)

  15. Benchmark calculations on residue production within the EURISOL DS project; Part II: thick targets

    CERN Document Server

    David, J.-C; Boudard, A; Doré, D; Leray, S; Rapp, B; Ridikas, D; Thiollière, N

    Benchmark calculations on residue production using MCNPX 2.5.0. Calculations were compared to mass-distribution data for 5 different elements measured at ISOLDE, and to specific activities of 28 radionuclides in different places along the thick target measured in Dubna.

  16. Developing of Indicators of an E-Learning Benchmarking Model for Higher Education Institutions

    Science.gov (United States)

    Sae-Khow, Jirasak

    2014-01-01

    This study was the development of e-learning indicators used as an e-learning benchmarking model for higher education institutes. Specifically, it aimed to: 1) synthesize the e-learning indicators; 2) examine content validity by specialists; and 3) explore appropriateness of the e-learning indicators. Review of related literature included…

  17. Revaluering benchmarking - A topical theme for the construction industry

    OpenAIRE

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    Over the past decade, benchmarking has increasingly gained a foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly that what may seem valuable is actually deterring researchers and practitioners from studying and questioning the concept objectively. This paper addresses the underlying nature of benchmarking, and accounts for the importance of focusing attention on the sociological impacts benchmarking has in...

  18. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    Bulej, Lubomír

    2005-01-01

    The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specifically, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking on a real software project and conclude with a glimpse at the challenges for the fu...

  19. Destination benchmarking: facilities, customer satisfaction and levels of tourist expenditure

    OpenAIRE

    Metin KOZAK

    2000-01-01

    An extensive review of past benchmarking literature showed that there have been a substantial number of both conceptual and empirical attempts to formulate a benchmarking approach, particularly in the manufacturing industry. However, there has been limited investigation and application of benchmarking in tourism and particularly in tourist destinations. The aim of this research is to further develop the concept of benchmarking for application within tourist destinations and to evaluate its...

  20. On the Extrapolation with the Denton Proportional Benchmarking Method

    OpenAIRE

    Marco Marini; Tommaso Di Fonzo

    2012-01-01

    Statistical offices have often recourse to benchmarking methods for compiling quarterly national accounts (QNA). Benchmarking methods employ quarterly indicator series (i) to distribute annual, more reliable series of national accounts and (ii) to extrapolate the most recent quarters not yet covered by annual benchmarks. The Proportional First Differences (PFD) benchmarking method proposed by Denton (1971) is a widely used solution for distribution, but in extrapolation it may suffer when the...
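
    For reference, the proportional first differences criterion usually attributed to Denton (1971) can be written as below; the notation is assumed here, with i_t the quarterly indicator, x_t the benchmarked quarterly estimate and A_y the annual benchmarks:

        \min_{x_1,\dots,x_T} \sum_{t=2}^{T}\left(\frac{x_t}{i_t}-\frac{x_{t-1}}{i_{t-1}}\right)^{2}
        \quad\text{subject to}\quad \sum_{t\in\text{year } y} x_t = A_y \quad \text{for every benchmarked year } y .

    Quarters beyond the last constrained year are the extrapolated ones discussed in the paper; they are driven by the indicator movements and the quarterly benchmark-to-indicator ratios estimated for the last available year.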

  1. Benchmarking for major producers of limestone in the Czech Republic

    OpenAIRE

    Vaněk, Michal; Mikoláš, Milan; Bora, Petr

    2013-01-01

    The validity of information available to managers influences the quality of the decision-making processes controlled by those managers. Benchmarking is a method which can yield quality information. The importance of benchmarking is strengthened by the fact that many authors consider benchmarking to be an integral part of strategic management. In commercial practice, benchmarking data and conclusions usually become commercial secrets for internal use only. The wider professional public lacks t...

  2. An Arbitrary Benchmark CAPM: One Additional Frontier Portfolio is Sufficient

    OpenAIRE

    Ekern, Steinar

    2008-01-01

    The benchmark CAPM linearly relates the expected returns on an arbitrary asset, an arbitrary benchmark portfolio, and an arbitrary MV frontier portfolio. The benchmark is not required to be on the frontier and may be non-perfectly correlated with the frontier portfolio. The benchmark CAPM extends and generalizes previous CAPM formulations, including the zero beta, two correlated frontier portfolios, riskless augmented frontier, and inefficient portfolio versions. The covariance between the of...
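
    For orientation, the zero-beta version cited in the abstract as a special case has the familiar form (notation assumed here):

        E[R_i] = E[R_z] + \beta_{i,m}\bigl(E[R_m] - E[R_z]\bigr),

    where R_m is a mean-variance frontier portfolio and R_z its zero-beta companion; the paper's benchmark CAPM generalizes this to a linear relation that additionally involves an arbitrary benchmark portfolio, which need not lie on the frontier nor be perfectly correlated with it.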

  3. Benchmarking of corporate social responsibility: Methodological problems and robustness.

    OpenAIRE

    Graafland, J.J.; Eijffinger, S.C.W.; Smid, H.

    2004-01-01

    This paper investigates the possibilities and problems of benchmarking Corporate Social Responsibility (CSR). After a methodological analysis of the advantages and problems of benchmarking, we develop a benchmark method that includes economic, social and environmental aspects as well as national and international aspects of CSR. The overall benchmark is based on a weighted average of these aspects. The weights are based on the opinions of companies and NGO’s. Using different me...

  4. Towards a Benchmark Suite for Modelica Compilers: Large Models

    OpenAIRE

    Frenkel, Jens; Schubert, Christian; Kunze, Günter; Fritzson, Peter; Sjölund, Martin; Pop, Adrian

    2011-01-01

    The paper presents a contribution to a Modelica benchmark suite. Basic ideas for a tool-independent benchmark suite based on Python scripting are given, along with models for testing the performance of Modelica compilers on large systems of equations. The automation of running the benchmark suite is demonstrated, followed by a selection of benchmark results to determine the current limits of Modelica tools and how they scale for an increasing number of equations.

  5. Benchmarking the True Random Number Generator of TPM Chips

    CERN Document Server

    Suciu, Alin

    2010-01-01

    A TPM (trusted platform module) is a chip present mostly on newer motherboards, and its primary function is to create, store and work with cryptographic keys. This dedicated chip can serve to authenticate other devices or to protect encryption keys used by various software applications. Among other features, it comes with a True Random Number Generator (TRNG) that can be used for cryptographic purposes. This random number generator consists of a state machine that mixes unpredictable data with the output of a one way hash function. According to the specification it can be a good source of unpredictable random numbers even without having to require a genuine source of hardware entropy. However the specification recommends collecting entropy from any internal sources available such as clock jitter or thermal noise in the chip itself, a feature that was implemented by most manufacturers. This paper will benchmark the random number generator of several TPM chips from two perspectives: the quality of the random bit s...
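
    As a flavour of the kind of statistical quality checks such a benchmark applies (the paper's actual test battery is not reproduced here), the sketch below runs the simple monobit frequency test on a byte stream; os.urandom stands in for bytes read from a TPM, since accessing a real TPM requires platform-specific tooling.

        # Minimal randomness quality check (monobit frequency test). This is only
        # one illustrative test, not the benchmark suite used in the paper, and
        # os.urandom is a stand-in for output read from a TPM's TRNG.
        import math, os

        def monobit_p_value(data: bytes) -> float:
            bits = "".join(f"{byte:08b}" for byte in data)
            s = abs(bits.count("1") - bits.count("0")) / math.sqrt(len(bits))
            return math.erfc(s / math.sqrt(2))   # small p-value suggests bias

        sample = os.urandom(125_000)             # 1,000,000 bits
        print(f"monobit p-value: {monobit_p_value(sample):.4f}")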

  6. Experimental Criticality Benchmarks for SNAP 10A/2 Reactor Cores

    Energy Technology Data Exchange (ETDEWEB)

    Krass, A.W.

    2005-12-19

    This report describes computational benchmark models for nuclear criticality derived from descriptions of the Systems for Nuclear Auxiliary Power (SNAP) Critical Assembly (SCA)-4B experimental criticality program conducted by Atomics International during the early 1960's. The selected experimental configurations consist of fueled SNAP 10A/2-type reactor cores subject to varied conditions of water immersion and reflection under experimental control to measure neutron multiplication. SNAP 10A/2-type reactor cores are compact volumes fueled and moderated with the hydride of highly enriched uranium-zirconium alloy. Specifications for the materials and geometry needed to describe a given experimental configuration for a model using MCNP5 are provided. The material and geometry specifications are adequate to permit user development of input for alternative nuclear safety codes, such as KENO. A total of 73 distinct experimental configurations are described.

  7. Review of the GMD Benchmark Event in TPL-007-1

    Energy Technology Data Exchange (ETDEWEB)

    Backhaus, Scott N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-07-21

    Los Alamos National Laboratory (LANL) examined the approaches suggested in NERC Standard TPL-007-1 for defining the geo-electric field for the Benchmark Geomagnetic Disturbance (GMD) Event. Specifically: 1. estimating the 100-year exceedance geo-electric field magnitude; 2. the scaling of the GMD Benchmark Event to geomagnetic latitudes below 60 degrees north; and 3. the effect of uncertainties in earth conductivity data on the conversion from geomagnetic field to geo-electric field. This document summarizes the review and presents recommendations for consideration.
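
    For context, the TPL-007-1 benchmark event is commonly summarized as a reference geo-electric field scaled for geomagnetic latitude and local earth conductivity; a frequently quoted form is shown below, but the specific constants are stated here as an assumption and should be verified against the standard itself:

        E_{\text{peak}} = 8\,\alpha\,\beta \ \text{V/km}, \qquad \alpha = 0.001\, e^{\,0.115\,L},

    with L the geomagnetic latitude in degrees (the below-60-degrees scaling that the LANL review examines) and β the earth-conductivity scaling factor for the applicable physiographic region.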

  8. Taking Stock of Corporate Benchmarking Practices: Panacea or Pandora's Box?

    Science.gov (United States)

    Fleisher, Craig S.; Burton, Sara

    1995-01-01

    Discusses why corporate communications/public relations (cc/pr) should be benchmarked (an approach used by cc/pr managers to demonstrate the value of their activities to skeptical organizational executives). Discusses myths about cc/pr benchmarking; types, targets, and focus of cc/pr benchmarking; a process model; and critical decisions about…

  9. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    Science.gov (United States)

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  10. 40 CFR 141.172 - Disinfection profiling and benchmarking.

    Science.gov (United States)

    2010-07-01

    ... benchmarking. 141.172 Section 141.172 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Disinfection-Systems Serving 10,000 or More People § 141.172 Disinfection profiling and benchmarking. (a... sanitary surveys conducted by the State. (c) Disinfection benchmarking. (1) Any system required to...

  11. Overview and Discussion of the OECD/NRC Benchmark Based on NUPEC PWR Subchannel and Bundle Tests

    Directory of Open Access Journals (Sweden)

    M. Avramova

    2013-01-01

    The Pennsylvania State University (PSU), under the sponsorship of the US Nuclear Regulatory Commission (NRC), has prepared, organized, conducted, and summarized the Organisation for Economic Co-operation and Development/US Nuclear Regulatory Commission (OECD/NRC) benchmark based on the Nuclear Power Engineering Corporation (NUPEC) pressurized water reactor (PWR) subchannel and bundle tests (PSBTs). The international benchmark activities have been conducted in cooperation with the Nuclear Energy Agency (NEA) of the OECD and the Japan Nuclear Energy Safety Organization (JNES), Japan. The OECD/NRC PSBT benchmark was organized to provide a test bed for assessing the capabilities of various thermal-hydraulic subchannel, system, and computational fluid dynamics (CFD) codes. The benchmark was designed to systematically assess and compare the participants' numerical models for prediction of detailed subchannel void distribution and departure from nucleate boiling (DNB), under steady-state and transient conditions, to full-scale experimental data. This paper provides an overview of the objectives of the benchmark along with a definition of the benchmark phases and exercises. The NUPEC PWR PSBT facility and the specific methods used in the void distribution measurements are discussed, followed by a summary of comparative analyses of submitted final results for the exercises of the two benchmark phases.

  12. Benchmark Calculations of Interaction Energies in Noncovalent Complexes and Their Applications.

    Science.gov (United States)

    Řezáč, Jan; Hobza, Pavel

    2016-05-11

    Data sets of benchmark interaction energies in noncovalent complexes are an important tool for quantifying the accuracy of computational methods used in this field, as well as for the development of new computational approaches. This review is intended as a guide to conscious use of these data sets. We discuss their construction and accuracy, list the data sets available in the literature, and demonstrate their application to validation and parametrization of quantum-mechanical computational methods. In practical model systems, the benchmark interaction energies are usually obtained using composite CCSD(T)/CBS schemes. To use these results as a benchmark, their accuracy should be estimated first. We analyze the errors of this methodology with respect to both the approximations involved and the basis set size. We list the most prominent data sets covering various aspects of the field, from general ones to sets focusing on specific types of interactions or systems. The benchmark data are then used to validate more efficient computational approaches, including those based on explicitly correlated methods. Special attention is paid to the transition to large systems, where accurate benchmarking is difficult or impossible, and to the importance of nonequilibrium geometries in parametrization of more approximate methods. PMID:26943241
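
    One common variant of the composite CCSD(T)/CBS schemes referred to here estimates the interaction energy as an MP2 complete-basis-set extrapolation plus a higher-order correction evaluated in a smaller basis (notation assumed):

        E_{\text{CCSD(T)/CBS}} \;\approx\; E_{\text{MP2/CBS}} \;+\; \bigl(E_{\text{CCSD(T)}} - E_{\text{MP2}}\bigr)_{\text{small basis}} .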

  13. Review of approaches used to establish sediment benchmarks for PCDD/Fs

    Energy Technology Data Exchange (ETDEWEB)

    Wenning, R.J.; Martello, L. [ENVIRON International, Emeryville, CA (United States); Iannuzzi, T. [BBL Sciences, Annapolis, MD (United States)

    2004-09-15

    At present, regulatory limits for polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in sediments have not been promulgated in the United States, although various sediment quality guidelines (SQG) and benchmark values have been proposed by several state and federal agencies for specific sediment management purposes. The impetus for developing SQGs or sediment benchmarks for PCDD/Fs is largely driven by concerns regarding the potential for bioaccumulation in fish and benthic invertebrates, uptake by fish-eating birds and wildlife through aquatic food webs, and consumption by recreational and subsistence fishermen. While sediment-based benchmarks cannot be used to assess population-level risks in aquatic systems, there is a need for such benchmarks to enable regulators to screen sites to determine if the levels of PCDD/Fs that are present warrant a risk-based investigation. Building on earlier published work, this review summarizes the different approaches that have been used to identify sediment benchmarks for PCDD/Fs in the U.S. and elsewhere. The approaches most often used are discussed, as well as the data gaps relevant to understanding the fate of PCDD/Fs in aquatic environments and the direction necessary for establishing a scientifically defensible protocol to assess PCDD/Fs in sediment.

  14. Utilizing benchmark data from the ANL-ZPR diagnostic cores program

    International Nuclear Information System (INIS)

    The support of the criticality safety community is allowing the production of benchmark descriptions of several assemblies from the ZPR Diagnostic Cores Program. The assemblies have high sensitivities to nuclear data for a few isotopes. This can highlight limitations in nuclear data for selected nuclides or in standard methods used to treat these data. The present work extends the use of the simplified model of the U9 benchmark assembly beyond the validation of keff. Further simplifications have been made to produce a data testing benchmark in the style of the standard CSEWG benchmark specifications. Calculations for this data testing benchmark are compared to results obtained with more detailed models and methods to determine their biases. These biases or correction factors can then be applied in the use of the less refined methods and models. Data testing results using Versions IV, V, and VI of the ENDF/B nuclear data are presented for keff, f28/f25, c28/f25, and βeff. These limited results demonstrate the importance of studying other integral parameters in addition to keff in trying to improve nuclear data and methods and the importance of accounting for methods and/or modeling biases when using data testing results to infer the quality of the nuclear data files.

  15. BN-600 hybrid core benchmark analyses (phases 1, 2 and 3) (draft synthesis report)

    International Nuclear Information System (INIS)

    UK, respectively), CIAE from China, IGCAR from India, JNC from Japan, KAERI from Rep. of Korea, IPPE and OKBM from the Russian Federation. The benchmark analyses consist of three Phases during 1999-2001: RZ homogeneous benchmark (Phase 1), Hex-Z homogeneous benchmark (Phase 2), and Hex-Z heterogeneous and burnup benchmark (Phase 3). This report presents the results of benchmark analyses of a hybrid UOX/MOX fuelled core of the BN-600 reactor. The results for several relevant reactivity parameters obtained by the participants with their own state-of-the-art basic data and codes were compared in terms of calculational uncertainty, and their effects on the ULOF transient behavior of the hybrid BN-600 core were evaluated. The contributions of the participants in the benchmark analyses are shown. This report first addresses the benchmark definitions and specifications given for each Phase and briefly introduces the basic data, computer codes, and methodologies applied to the benchmark analyses by various participants. Then, the results obtained by the participants in terms of calculational uncertainty and their effect on the core transient behavior are intercompared. Finally it addresses some conclusions drawn in the benchmarks.

  16. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    International Nuclear Information System (INIS)

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results

  17. SARNET hydrogen deflagration benchmarks: Main outcomes and conclusions

    International Nuclear Information System (INIS)

    Highlights: • Modelling of hydrogen turbulent flame: effect of turbulence. • Modelling of hydrogen turbulent flame: effect of additional diluents. • Comparison of different models and modelling approaches. • Recommendations for further experimental and modelling work dealing with combustion. - Abstract: In case of a core melt-down accident in a light water nuclear reactor, hydrogen is produced during reactor core degradation and released into the reactor building. This subsequently creates a combustion hazard. A local ignition of the combustible mixture may generate standing flames or initially slow propagating flames. Depending on geometry, mixture composition and turbulence level, the flame can accelerate or be quenched after a certain distance. The loads generated by the combustion process (increase of the containment atmosphere pressure and temperature) may threaten the integrity of the containment building and of internal walls and equipment. Turbulent deflagration flames may generate high pressure pulses, temperature peaks, shock waves and large pressure gradients which could severely damage specific containment components, internal walls and/or safety equipment. The evaluation of such loads requires validated codes which can be used with a high level of confidence. Currently, turbulence and steam effect on flame acceleration, flame deceleration and flame quenching mechanisms are not well reproduced by combustion models usually implemented in safety tools and further model enhancement and validation are still needed. For this purpose, two hydrogen deflagration benchmark exercises have been organised in the framework of the SARNET network. The first benchmark was focused on turbulence effect on flame propagation. For this purpose, three tests performed in the ENACCEF facility were considered. They concern vertical flame propagation in an initially homogenous mixture with 13 vol.% hydrogen content and different geometrical configurations. Three blockage

  18. CFD validation in OECD/NEA t-junction benchmark.

    Energy Technology Data Exchange (ETDEWEB)

    Obabko, A. V.; Fischer, P. F.; Tautges, T. J.; Karabasov, S.; Goloviznin, V. M.; Zaytsev, M. A.; Chudanov, V. V.; Pervichko, V. A.; Aksenova, A. E. (Mathematics and Computer Science); (Cambridge Univ.); (Moscow Institute of Nuclear Energy Safety)

    2011-08-23

    When streams of rapidly moving flow merge in a T-junction, the potential arises for large oscillations at the scale of the diameter, D, with a period scaling as O(D/U), where U is the characteristic flow velocity. If the streams are of different temperatures, the oscillations result in temperature fluctuations (thermal striping) at the pipe wall in the outlet branch that can accelerate thermal-mechanical fatigue and ultimately cause pipe failure. The importance of this phenomenon has prompted the nuclear energy modeling and simulation community to establish a benchmark to test the ability of computational fluid dynamics (CFD) codes to predict thermal striping. The benchmark is based on thermal and velocity data measured in an experiment designed specifically for this purpose. Thermal striping is intrinsically unsteady and hence not accessible to steady-state simulation approaches such as steady-state Reynolds-averaged Navier-Stokes (RANS) models. Consequently, one must consider either unsteady RANS or large eddy simulation (LES). This report compares the results for three LES codes: Nek5000, developed at Argonne National Laboratory (USA), and Cabaret and Conv3D, developed at the Moscow Institute of Nuclear Energy Safety (IBRAE) in Russia. Nek5000 is based on the spectral element method (SEM), which is a high-order weighted residual technique that combines the geometric flexibility of the finite element method (FEM) with the tensor-product efficiencies of spectral methods. Cabaret is a 'compact accurately boundary-adjusting high-resolution technique' for fluid dynamics simulation. The method is second-order accurate on nonuniform grids in space and time, and has a small dispersion error and computational stencil defined within one space-time cell. The scheme is equipped with a conservative nonlinear correction procedure based on the maximum principle. CONV3D is based on the immersed boundary method and is validated on a wide set of the experimental
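
    As a rough illustration of the quoted O(D/U) scaling (the numbers here are illustrative and not taken from the benchmark specification), a pipe diameter of D = 0.1 m and a bulk velocity of U = 1 m/s give

        \tau \sim \frac{D}{U} = \frac{0.1\ \text{m}}{1\ \text{m/s}} = 0.1\ \text{s} \qquad (\approx 10\ \text{Hz}),

    i.e. low-frequency temperature oscillations of the kind associated with thermal-mechanical fatigue of the pipe wall.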

  19. Performance Comparison of HPF and MPI Based NAS Parallel Benchmarks

    Science.gov (United States)

    Saini, Subhash

    1997-01-01

    Compilers supporting High Performance Fortran (HPF) features first appeared in late 1994 and early 1995 from Applied Parallel Research (APR), Digital Equipment Corporation, and The Portland Group (PGI). IBM introduced an HPF compiler for the IBM RS/6000 SP2 in April of 1996. Over the past two years, these implementations have shown steady improvement in terms of both features and performance. The performance of various hardware/programming model (HPF and MPI) combinations will be compared, based on the latest NAS Parallel Benchmark results, thus providing a cross-machine and cross-model comparison. Specifically, HPF based NPB results will be compared with MPI based NPB results to provide perspective on performance currently obtainable using HPF versus MPI or versus hand-tuned implementations such as those supplied by the hardware vendors. In addition, we also present NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu CAPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, and SGI Origin2000. We also present sustained performance per dollar for Class B LU, SP and BT benchmarks.

  20. Measuring NUMA effects with the STREAM benchmark

    CERN Document Server

    Bergstrom, Lars

    2011-01-01

    Modern high-end machines feature multiple processor packages, each of which contains multiple independent cores and integrated memory controllers connected directly to dedicated physical RAM. These packages are connected via a shared bus, creating a system with a heterogeneous memory hierarchy. Since this shared bus has less bandwidth than the sum of the links to memory, aggregate memory bandwidth is higher when parallel threads all access memory local to their processor package than when they access memory attached to a remote package. But, the impact of this heterogeneous memory architecture is not easily understood from vendor benchmarks. Even where these measurements are available, they provide only best-case memory throughput. This work presents a series of modifications to the well-known STREAM benchmark to measure the effects of NUMA on both a 48-core AMD Opteron machine and a 32-core Intel Xeon machine.
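
    To make the "triad" kernel at the heart of STREAM concrete, the sketch below reports its effective memory bandwidth using numpy. Note that the real STREAM benchmark is written in C with OpenMP, and the paper's NUMA modifications additionally control thread and memory placement; this single-threaded Python sketch does not attempt to reproduce that, it only illustrates what the reported GB/s figure measures.

        # Illustration of the STREAM "triad" kernel (a[i] = b[i] + q*c[i]) and how
        # its memory bandwidth is conventionally reported (2 reads + 1 write).
        import time
        import numpy as np

        n = 20_000_000                      # ~160 MB per float64 array
        b = np.random.rand(n)
        c = np.random.rand(n)
        a = np.empty_like(b)
        q = 3.0

        t0 = time.perf_counter()
        np.multiply(c, q, out=a)            # a = q*c
        np.add(a, b, out=a)                 # a = b + q*c  (the triad)
        elapsed = time.perf_counter() - t0

        bytes_moved = 3 * n * 8             # read b, read c, write a
        print(f"triad bandwidth: {bytes_moved / elapsed / 1e9:.1f} GB/s")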

  1. Non-judgemental Dynamic Fuel Cycle Benchmarking

    CERN Document Server

    Scopatz, Anthony Michael

    2015-01-01

    This paper presents a new fuel cycle benchmarking analysis methodology by coupling Gaussian process regression, a popular technique in Machine Learning, to dynamic time warping, a mechanism widely used in speech recognition. Together they generate figures-of-merit that are applicable to any time series metric that a benchmark may study. The figures-of-merit account for uncertainty in the metric itself, utilize information across the whole time domain, and do not require that the simulators use a common time grid. Here, a distance measure is defined that can be used to compare the performance of each simulator for a given metric. Additionally, a contribution measure is derived from the distance measure that can be used to rank order the importance of fuel cycle metrics. Lastly, this paper warns against using standard signal processing techniques for error reduction. This is because it is found that error reduction is better handled by the Gaussian process regression itself.
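
    A minimal sketch of the two ingredients named above is given below: a Gaussian process fit that puts each simulator's metric onto a common time grid, and a dynamic time warping (DTW) distance between the fitted curves. It assumes scikit-learn is available, uses hypothetical example data, and omits the uncertainty propagation that the paper's figures-of-merit include.

        # Sketch only: GP regression per simulator + DTW distance between fits.
        # Not the paper's implementation; example series are invented.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        def fit_on_common_grid(t, y, grid):
            gp = GaussianProcessRegressor(normalize_y=True).fit(t.reshape(-1, 1), y)
            return gp.predict(grid.reshape(-1, 1))

        def dtw_distance(x, y):
            n, m = len(x), len(y)
            d = np.full((n + 1, m + 1), np.inf)
            d[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(x[i - 1] - y[j - 1])
                    d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
            return d[n, m]

        # Two hypothetical simulators reporting the same metric on different grids.
        t1, y1 = np.linspace(0, 100, 40), np.linspace(0, 100, 40) ** 0.50
        t2, y2 = np.linspace(0, 100, 25), np.linspace(0, 100, 25) ** 0.55
        grid = np.linspace(0, 100, 60)
        print(dtw_distance(fit_on_common_grid(t1, y1, grid),
                           fit_on_common_grid(t2, y2, grid)))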

  2. MHTGR-350 Benchmark Analysis by MCS Code

    International Nuclear Information System (INIS)

    This benchmark contains various problems in three phases, which require the results for neutronics, thermal fluids solutions, transient calculation, and depletion calculation. The Phase-I exercise-1 problem was solved with MCS Monte Carlo (MC) code developed at UNIST. The global parameters and power distribution was compared with the results of McCARD MC code developed by SNU and a finite element method (FEM) - based diffusion code CAPP developed by KAERI. The MHTGR-350 benchmark Phase-I exercise 1 was solved with MCS. The results of MCS are compared with those of McCARD and CAPP. The results of MCS code showed good agreements with those of McCARD code while they showed considerable disagreements with those of CAPP code, which can be attributed to the fact that CAPP is a diffusion code while the others are MC transport codes

  3. Argonne Code Center: benchmark problem book

    International Nuclear Information System (INIS)

    This report is a supplement to the original report, published in 1968, as revised. The Benchmark Problem Book is intended to serve as a source book of solutions to mathematically well-defined problems for which either analytical or very accurate approximate solutions are known. This supplement contains problems in eight new areas: two-dimensional (R-z) reactor model; multidimensional (Hex-z) HTGR model; PWR thermal hydraulics--flow between two channels with different heat fluxes; multidimensional (x-y-z) LWR model; neutron transport in a cylindrical ''black'' rod; neutron transport in a BWR rod bundle; multidimensional (x-y-z) BWR model; and neutronic depletion benchmark problems. This supplement contains only the additional pages and those requiring modification

  4. Toxicological benchmarks for wildlife. Environmental Restoration Program

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

    This report presents toxicological benchmarks for assessment of effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.

  5. KENO-IV code benchmark calculation, (6)

    International Nuclear Information System (INIS)

    A series of benchmark tests has been undertaken at JAERI in order to examine the capability of JAERI's criticality safety evaluation system, consisting of the Monte Carlo calculation code KENO-IV and the newly developed multigroup constants library MGCL. The present report describes the results of a benchmark test using criticality experiments on plutonium fuel in various shapes. In all, 33 cases of experiments have been calculated for Pu(NO3)4 aqueous solution, Pu metal or PuO2-polystyrene compacts in various shapes (sphere, cylinder, rectangular parallelepiped). The effective multiplication factors calculated for the 33 cases are distributed widely between 0.955 and 1.045 due to the wide range of system variables. (author)

  6. BENCHMARKING – BETWEEN TRADITIONAL & MODERN BUSINESS ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Mihaela Ungureanu

    2011-09-01

    The concept of benchmarking requires a continuous process of performance improvement in different organizations in order to obtain superiority over those competitors perceived as market leaders. This superiority can always be questioned, its relativity originating in the rapid evolution of the economic environment. The approach supports innovation in relation to traditional methods and is based on the will of those managers who want to determine limits and seek excellence. The end of the twentieth century is the period of broad expression of benchmarking in various areas and of its transformation from a simple quantitative analysis tool into a source of information on the performance and quality of goods and services.

  7. Shielding benchmark test for JENDL-3T

    Energy Technology Data Exchange (ETDEWEB)

    Hasegawa, Akira (Japan Atomic Energy Research Inst., Tokai, Ibaraki. Tokai Research Establishment)

    1988-03-01

    The results of the shielding benchmark tests for JENDL-3T (the testing-stage version of JENDL-3), performed by the JNDC Shielding Sub-working Group, are summarized. In particular, problems with the total cross-sections in the MeV range for O, Na and Fe, revealed by the analysis of the Broomstick experiment, are discussed in detail. For the deep penetration profiles of Fe, which are a very important feature in shielding calculations, the ASPIS benchmark experiment is analysed and discussed. From this study the overall applicability of JENDL-3T data to shielding calculations is confirmed. At the same time, some problems that still remain are also pointed out. By reflecting this feedback information, the applicability of JENDL-3, the forthcoming official version, will be greatly improved.

  8. Direct data access protocols benchmarking on DPM

    CERN Document Server

    Furano, Fabrizio; Keeble, Oliver; Mancinelli, Valentina

    2015-01-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in the last years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring infor...

  9. Gd-2 fuel cycle Benchmark (version 1)

    International Nuclear Information System (INIS)

    The new benchmark based on Dukovany NPP Unit-3 history of Gd-2 fuel type utilisation is defined. The main goal of this benchmark is to compare results obtained by different codes used for neutron-physics calculation. Input data are described in this paper including initial state definition. Requested output data format for automatic processing is defined. This paper includes: a) fuel description b) definition of starting point and five fuel cycles with profiled fuel 3.82% only c) definition of four fuel cycles with fuel Gd-2 (enr.4.25%) d) recommendation for calculation e) list of parameters for comparison f) methodology of comparison g) an example of results comparison (Authors)

  10. Effects of a single session of motor activity on the visual attention of older adults: comparison between aerobic and neuromotor activity

    OpenAIRE

    Canelas, Dora Cristina Calção

    2014-01-01

    Effects of a single exercise session on the visual attention of older adults: comparison between aerobic and neuromotor exercise. Abstract. Objective: The main objective of this study was to evaluate the acute effects of a single session of aerobic exercise and of a single session of neuromotor exercise on the visual attention of older adults. Methods: 87 individuals of both sexes participated, aged over 55 years (65.65 ± 6.64 years), residing in the district of Évora, in...

  11. FENDL-2 and associated benchmark calculations

    International Nuclear Information System (INIS)

    The present Report contains the Summary of the IAEA Advisory Group Meeting on ''The FENDL-2 and Associated Benchmark Calculations'' convened on 18-22 November 1991, at the IAEA Headquarters in Vienna, Austria, by the IAEA Nuclear Data Section. The Advisory Group Meeting Conclusions and Recommendations and the Report on the Strategy for the Future Development of the FENDL and on Future Work towards establishing FENDL-2 are also included in this Summary Report. (author). 1 ref., 4 tabs

  12. SINBAD: Shielding integral benchmark archive and database

    International Nuclear Information System (INIS)

    SINBAD is a new electronic database developed to store a variety of radiation shielding benchmark data so that users can easily retrieve and incorporate the data into their calculations. SINBAD is an excellent data source for users who require the quality assurance necessary in developing cross-section libraries or radiation transport codes. The future needs of the scientific community are best served by the electronic database format of SINBAD and its user-friendly interface, combined with its data accuracy and integrity

  13. MESURE Tool to benchmark Java Card platforms

    OpenAIRE

    Pierre Paradinas; Julien Cordry; Samia Bouzefrane

    2009-01-01

    The advent of the Java Card standard has been a major turning point in smart card technology. With the growing acceptance of this standard, understanding the performance behavior of these platforms is becoming crucial. To meet this need, we present in this paper a novel benchmarking framework to test and evaluate the performance of Java Card platforms. The MESURE tool is the first framework whose accuracy and effectiveness are independent of the particular Java Card platform tested and the CAD used.

  14. A Simplified HTTR Diffusion Theory Benchmark

    International Nuclear Information System (INIS)

    The Georgia Institute of Technology (GA-Tech) recently developed a transport theory benchmark based closely on the geometry and the features of the HTTR reactor that is operational in Japan. Though simplified, the benchmark retains all the principal physical features of the reactor and thus provides a realistic and challenging test for the codes. The purpose of this paper is two-fold. The first goal is an extension of the benchmark to diffusion theory applications by generating the additional data not provided in the GA-Tech prior work. The second goal is to use the benchmark on the HEXPEDITE code available to the INL. The HEXPEDITE code is a Green's function-based neutron diffusion code in 3D hexagonal-z geometry. The results showed that the HEXPEDITE code accurately reproduces the effective multiplication factor of the reference HELIOS solution. A secondary, but no less important, conclusion is that when a full sequence of codes including HEXPEDITE is tested against actual HTTR data, the portion of the inevitable discrepancies between experiment and models attributable to HEXPEDITE would be expected to be modest. If large discrepancies are observed, they would have to be explained by errors in the data fed into HEXPEDITE. Results based on a fully realistic model of the HTTR reactor are presented in a companion paper. The suite of codes used in that paper also includes HEXPEDITE. The results shown here should help that effort in the decision making process for refining the modeling steps in the full sequence of codes.

  15. Decentralized Reliable Control for a Building Benchmark

    Czech Academy of Sciences Publication Activity Database

    Bakule, Lubomír; Papík, Martin; Rehák, Branislav

    Barcelona : CIMNE, 2014 - (Rodellar, J.; Güemes, A.; Pozo, F.), s. 2242-2253 ISBN 978-84-942844-5-8. [World Conference on Structural Control and Monitoring /6./ - 6WCSCM. Barcelona (ES), 15.07.2014-17.07.2014] R&D Projects: GA ČR GA13-02149S Keywords : decentralized reliable control * structural control * building benchmark Subject RIV: BC - Control Systems Theory

  16. WIDER FACE: A Face Detection Benchmark

    OpenAIRE

    Yang, Shuo; Luo, Ping; Loy, Chen Change; Tang, Xiaoou

    2015-01-01

    Face detection is one of the most studied topics in the computer vision community. Much of the progress has been driven by the availability of face detection benchmark datasets. We show that there is a gap between current face detection performance and real-world requirements. To facilitate future face detection research, we introduce the WIDER FACE dataset, which is 10 times larger than existing datasets. The dataset contains rich annotations, including occlusions, poses, event categori...

  17. Benchmarking the Governance of Tertiary Education Systems

    OpenAIRE

    World Bank

    2012-01-01

    This paper presents a benchmarking approach for analyzing and comparing governance in tertiary education as a critical determinant of system and institutional performance. This methodology is tested through a pilot survey in East Asia and Central America. The paper is structured in the following way: (i) the first part highlights the link between good governance practices and the performance of tertiary institutions; (ii) the second part introduces the analytical approach underpinning the gove...

  18. Benchmark calculations for MTR type cores

    International Nuclear Information System (INIS)

    The benchmark neutronics design study of MTR cores has been performed for various fuel enrichments. The reactivities and fluxes for the fresh core have been evaluated. The reference calculations have been performed for a 10 MW(th) reactor, but the method is applicable to other power levels. As the results are in good agreement with those obtained at other establishments, the method of analysis used in this report for a fresh core can be relied upon with a fair amount of confidence. (authors)

  19. POLCA-T Neutron Kinetics Model Benchmarking

    OpenAIRE

    Kotchoubey, Jurij

    2015-01-01

    The demand for computational tools that can reliably predict the behavior of a nuclear reactor core in a variety of static and dynamic conditions inevitably requires proper qualification of these tools for their intended purposes. One of the qualification methods is verification of the code in question, whereby the correctness of the applied model as well as its flawless implementation in the code are scrutinized. The present work is concerned with benchmarking as a ...

  20. Reactor calculation benchmark PCA blind test results

    International Nuclear Information System (INIS)

    Further improvement in calculational procedures or a combination of calculations and measurements is necessary to attain 10 to 15% (1 sigma) accuracy for neutron exposure parameters (flux greater than 0.1 MeV, flux greater than 1.0 MeV, and dpa). The calculational modeling of power reactors should be benchmarked in an actual LWR plant to provide final uncertainty estimates for end-of-life predictions and limitations for plant operations. 26 references, 14 figures, 6 tables

  1. BN-600 full MOX core benchmark analysis

    International Nuclear Information System (INIS)

    As a follow-up of the BN-600 hybrid core benchmark, a full MOX core benchmark was performed within the framework of the IAEA co-ordinated research project. Discrepancies between the values of the main reactivity coefficients obtained by the participants for the BN-600 full MOX core benchmark appear to be larger than those in the previous hybrid core benchmarks on traditional core configurations. This arises due to uncertainties in the proper modelling of the axial sodium plenum above the core. It was recognized that the sodium density coefficient strongly depends on the core model configuration of interest (hybrid core vs. fully MOX fuelled core with sodium plenum above the core) in conjunction with the calculation method (diffusion vs. transport theory). The effects of the discrepancies revealed between the participants' results on the ULOF and UTOP transient behaviours of the BN-600 full MOX core were investigated in simplified transient analyses. Generally the diffusion approximation predicts more benign consequences for the ULOF accident but more hazardous ones for the UTOP accident when compared with the transport theory results. The heterogeneity effect does not have any significant effect on the simulation of the transient. The comparison of the transient analyses results concluded that the fuel Doppler coefficient and the sodium density coefficient are the two most important coefficients in understanding the ULOF transient behaviour. In particular, the uncertainty in evaluating the sodium density coefficient distribution has the largest impact on the description of reactor dynamics. This is because the maximum sodium temperature rise takes place at the top of the core and in the sodium plenum.
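
    As a point of reference for the coefficients discussed above, reactivity coefficients of this kind are conventionally written as finite-difference derivatives of the core reactivity; the notation below is a generic sketch and not the specific definitions prescribed in the benchmark:

    \[
    \rho = \frac{k_{\mathrm{eff}}-1}{k_{\mathrm{eff}}}, \qquad
    \alpha_{\mathrm{Na}} \approx \frac{\rho(\gamma_{\mathrm{Na}}+\Delta\gamma_{\mathrm{Na}})-\rho(\gamma_{\mathrm{Na}})}{\Delta\gamma_{\mathrm{Na}}}, \qquad
    \alpha_{\mathrm{D}} \approx \frac{\rho(T_{\mathrm{f}}+\Delta T_{\mathrm{f}})-\rho(T_{\mathrm{f}})}{\Delta T_{\mathrm{f}}},
    \]

    where \(\gamma_{\mathrm{Na}}\) is the sodium density and \(T_{\mathrm{f}}\) the fuel temperature; both coefficients are usually evaluated region by region, which is why the modelling of the axial sodium plenum matters so much for the sodium density coefficient.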

  2. Direct Simulation of a Solidification Benchmark Experiment

    OpenAIRE

    Carozzani, Tommy; Gandin, Charles-André; Digonnet, Hugues; Bellet, Michel; Zaidat, Kader; Fautrelle, Yves

    2013-01-01

    A solidification benchmark experiment is simulated using a three-dimensional cellular automaton-finite element solidification model. The experiment consists of a rectangular cavity containing a Sn-3 wt pct Pb alloy. The alloy is first melted and then solidified in the cavity. A dense array of thermocouples permits monitoring of temperatures in the cavity and in the heat exchangers surrounding the cavity. After solidification, the grain structure is revealed by metall...

  3. Benchmark testing calculations for 232Th

    International Nuclear Information System (INIS)

    The cross sections of 232Th from CNDC and JENDL-3.3 were processed with NJOY97.45 code in the ACE format for the continuous-energy Monte Carlo Code MCNP4C. The Keff values and central reaction rates based on CENDL-3.0, JENDL-3.3 and ENDF/B-6.2 were calculated using MCNP4C code for benchmark assembly, and the comparisons with experimental results are given. (author)

  4. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge;

    2015-01-01

    the blade solicitation and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts: this shape is considered as a fixed parameter in the...... symmetric NACA airfoil family.

  5. Benchmarking research of steel companies in Europe

    OpenAIRE

    M. Antošová; A. Csikósová; K. Čulková; Seňová, A.

    2013-01-01

    Steelworks are currently at a stage of permanent change, marked by ever stronger competitive pressure. Managers must therefore address how to decrease production costs, how to overcome the competition and how to survive in the world market. Still more attention should be paid to modern managerial methods of market research and comparison with the competition. Benchmarking research is one of the effective tools for such research. The goal of this contribution is to com...

  6. Benchmarking regulatory network reconstruction with GRENDEL

    OpenAIRE

    Haynes, Brian C; Brent, Michael R.

    2009-01-01

    Motivation: Over the past decade, the prospect of inferring networks of gene regulation from high-throughput experimental data has received a great deal of attention. In contrast to the massive effort that has gone into automated deconvolution of biological networks, relatively little effort has been invested in benchmarking the proposed algorithms. The rate at which new network inference methods are being proposed far outpaces our ability to objectively evaluate and compare them. This is lar...
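
    A common ingredient of such benchmarking is scoring an inferred edge set against a reference (gold-standard) network. The sketch below is a minimal, generic illustration of that scoring step in Python; the edge sets and the function name are hypothetical and are not part of GRENDEL itself.

      # Minimal sketch: score an inferred regulatory network against a reference network.
      # Edges are (regulator, target) pairs; all names and data below are illustrative.

      def score_network(inferred, reference):
          """Return precision, recall and F1 of the inferred edges vs. the reference set."""
          inferred, reference = set(inferred), set(reference)
          tp = len(inferred & reference)                       # correctly recovered edges
          precision = tp / len(inferred) if inferred else 0.0
          recall = tp / len(reference) if reference else 0.0
          f1 = (2 * precision * recall / (precision + recall)
                if precision + recall > 0 else 0.0)
          return precision, recall, f1

      # Hypothetical example data.
      reference = {("TF1", "geneA"), ("TF1", "geneB"), ("TF2", "geneC")}
      inferred = {("TF1", "geneA"), ("TF2", "geneC"), ("TF3", "geneD")}
      print(score_network(inferred, reference))                # roughly (0.67, 0.67, 0.67)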

  7. Benchmarking dynamic Bayesian network structure learning algorithms

    OpenAIRE

    Trabelsi, Ghada; Leray, Philippe; Ben Ayed, Mounir; Alimi, Adel

    2012-01-01

    Dynamic Bayesian Networks (DBNs) are probabilistic graphical models dedicated to modeling multivariate time series. Two-time-slice BNs (2-TBNs) are the most common type of these models. Static BN structure learning is a well-studied domain: many approaches have been proposed, and the quality of these algorithms has been studied over a range of different standard networks and methods of evaluation. To the best of our knowledge, all studies about DBN structure learning use their own benchmarks a...

  8. BENCHMARKING – BETWEEN TRADITIONAL & MODERN BUSINESS ENVIRONMENT

    OpenAIRE

    Mihaela Ungureanu

    2011-01-01

    The concept of benchmarking requires a continuous process of performance improvement in different organizations in order to gain superiority over competitors perceived as market leaders. This superiority can always be questioned, its relativity originating in the rapidly evolving economic environment. The approach supports innovation in relation to traditional methods and is based on the will of those managers who want to determine limits and seek excellence. Th...

  9. Benchmarking spatial joins à la carte

    OpenAIRE

    Günther, Oliver; Oria, Vincent; Picouet, Philippe; Saglio, Jean-Marc; Scholl, Michel

    1997-01-01

    Spatial joins are join operations that involve spatial data types and operators. Spatial access methods are often used to speed up the computation of spatial joins. This paper addresses the issue of benchmarking spatial join operations. For this purpose, we first present a WWW-based tool to produce sets of rectangles. Experimenters can use a standard Web browser to specify the number of rectangles, as well as the statistical distributions of their sizes, shapes, and locations. Second, using...
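
    To make the benchmarking setup described above concrete, the following sketch generates random axis-aligned rectangles and runs a naive nested-loop spatial join on an intersection predicate. The distributions and sizes are illustrative assumptions, not the parameters of the authors' WWW-based tool.

      # Minimal sketch: generate random rectangles and compute a naive spatial join
      # (all pairs of intersecting rectangles). Parameters below are illustrative only.
      import random

      def make_rectangles(n, max_side=0.05, seed=0):
          """Rectangles as (xmin, ymin, xmax, ymax) with corners inside the unit square."""
          rng = random.Random(seed)
          rects = []
          for _ in range(n):
              w, h = rng.uniform(0, max_side), rng.uniform(0, max_side)
              x, y = rng.uniform(0, 1 - w), rng.uniform(0, 1 - h)
              rects.append((x, y, x + w, y + h))
          return rects

      def intersects(a, b):
          return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

      def nested_loop_join(r_set, s_set):
          """Baseline join; real benchmarks compare this against index-based methods."""
          return [(i, j) for i, a in enumerate(r_set)
                         for j, b in enumerate(s_set) if intersects(a, b)]

      r = make_rectangles(500, seed=1)
      s = make_rectangles(500, seed=2)
      print(len(nested_loop_join(r, s)), "intersecting pairs")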

  10. SINBAD: Shielding integral benchmark archive and database

    Energy Technology Data Exchange (ETDEWEB)

    Hunter, H.T.; Ingersoll, D.T.; Roussin, R.W. [and others]

    1996-04-01

    SINBAD is a new electronic database developed to store a variety of radiation shielding benchmark data so that users can easily retrieve and incorporate the data into their calculations. SINBAD is an excellent data source for users who require the quality assurance necessary in developing cross-section libraries or radiation transport codes. The future needs of the scientific community are best served by the electronic database format of SINBAD and its user-friendly interface, combined with its data accuracy and integrity.

  11. Spas Performances Benchmarking and Operation Efficiency*

    OpenAIRE

    Akarapong Untong; Mingsarn Kaosa-ard

    2014-01-01

    This paper aims to benchmark the performance and operational efficiency of spas by using key performance indicators. The Data Envelopment Analysis (DEA) method with an SBM super-efficiency model has been applied to evaluate the operational efficiency of 21 spas, consisting of 7 day spas and 14 hotel and resort spas. The study found that the spas with the best performance use existing resources efficiently and encourage therapists to achieve the best productivity in service. However, the effic...
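
    The study applies an SBM super-efficiency DEA model; the sketch below instead shows the simpler input-oriented CCR envelopment form, solved with scipy.optimize.linprog, purely to illustrate how DEA efficiency scores are computed from inputs and outputs. The data and variable names are hypothetical.

      # Simplified input-oriented CCR DEA model (constant returns to scale), shown only
      # to illustrate how efficiency scores are obtained; the cited study uses an SBM
      # super-efficiency model, which is more involved. All data below are placeholders.
      import numpy as np
      from scipy.optimize import linprog

      X = np.array([[5.0, 8.0, 3.0, 6.0],      # input 1, e.g. number of therapists
                    [120, 200, 80, 150]])      # input 2, e.g. operating cost
      Y = np.array([[300, 420, 180, 330]])     # output, e.g. revenue (columns = spas)

      n = X.shape[1]
      for o in range(n):                                        # evaluate each spa (DMU)
          c = np.r_[1.0, np.zeros(n)]                           # minimise theta
          A_in = np.hstack([-X[:, [o]], X])                     # sum(lam*x_j) <= theta*x_o
          A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])    # sum(lam*y_j) >= y_o
          A_ub = np.vstack([A_in, A_out])
          b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, o]]
          res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                        bounds=[(None, None)] + [(0, None)] * n)
          print(f"DMU {o}: efficiency = {res.x[0]:.3f}")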

  12. Benchmark calculations on simple reactor systems

    International Nuclear Information System (INIS)

    The development of some calculation methods is described. Tests of these and other methods on benchmark problems are reported. The following items are treated: 1) Criticality of spheres and slabs for monoenergetic neutrons with Carlvik's method. 2) High-precision Sn calculations on critical slabs. 3) Comparison of angular quadrature methods in Sn calculations. 4) Tests of a standard ANISN program. 5) Presence of complex time eigenvalues in a fundamental problem. (Author)

  13. Benchmarking Nature Tourism between Zhangjiajie and Repovesi

    OpenAIRE

    Wu, Zhou

    2014-01-01

    Since nature tourism became a booming business in modern society, more and more tourists choose nature-based destinations for their holidays. Finding ways to promote Repovesi national park is therefore significant for reinforcing the park's competitiveness. The topic of this thesis is both to identify, via benchmarking, good marketing strategies used by Zhangjiajie national park and to provide suggestions to Repovesi national park. The method used in t...

  14. The Benchmark Beta, CAPM, and Pricing Anomalies.

    OpenAIRE

    Cheol S. Eun

    1994-01-01

    Recognizing that a part of the unobservable market portfolio is certainly observable, the author first reformulates the capital asset pricing model so that asset returns can be related to the 'benchmark' beta computed against a set of observable assets as well as the 'latent' beta computed against the remaining unobservable assets, and then shows that when the pricing effect of the latent beta is ignored, assets would appear to be systematically mispriced even if the capital asset pricing mode...
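
    In the spirit of the abstract, the decomposition can be sketched as follows (illustrative notation only, not necessarily the author's exact formulation):

    \[
    E[R_i] - R_f \;=\; \beta_i^{B}\,\lambda_{B} \;+\; \beta_i^{L}\,\lambda_{L},
    \qquad
    \beta_i^{B} = \frac{\mathrm{Cov}(R_i, R_{B})}{\mathrm{Var}(R_{B})},
    \]

    where \(R_B\) is the return on the observable benchmark assets, \(\beta_i^{L}\) is the latent beta against the unobservable remainder of the market portfolio, and \(\lambda_B\), \(\lambda_L\) are the associated risk premia; ignoring the \(\beta_i^{L}\lambda_{L}\) term makes assets appear systematically mispriced even when the CAPM holds.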

  15. Benchmarking of Remote Sensing Segmentation Methods

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal; Scarpa, G.; Gaetano, R.

    2015-01-01

    Roč. 8, č. 5 (2015), s. 2240-2248. ISSN 1939-1404 R&D Projects: GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords : benchmark * remote sensing segmentation * unsupervised segmentation * supervised segmentation Subject RIV: BD - Theory of Information Impact factor: 3.026, year: 2014 http://library.utia.cas.cz/separaty/2015/RO/haindl-0445995.pdf

  16. Aerobic exercise reduces arterial hypertension in women with Chagas disease

    Directory of Open Access Journals (Sweden)

    Wania da Silva Lopes

    2014-04-01

    Full Text Available INTRODUCTION: Patients with Chagas disease frequently present systemic arterial hypertension (SAH) as their main comorbidity. In hypertensive individuals with and without Chagas disease, SAH is usually controlled with medication. Alternative interventions such as aerobic physical exercise have been advocated as the most effective way to reduce blood pressure levels. OBJECTIVE: To evaluate the influence of physical exercise on the blood pressure of hypertensive women with and without Chagas disease. METHODS: Nineteen volunteers, divided into groups G1 (nine with Chagas disease) and G2 (ten without Chagas disease), underwent a 12-week training program, with sessions of 30 to 60 minutes twice a week. Systolic blood pressure (SBP), diastolic blood pressure (DBP) and heart rate (HR) were assessed before and after exertion at baseline (T0), after six (T6) and after 12 (T12) weeks. RESULTS: At T6, significant improvement was observed in pre- and post-exertion SBP and in post-exertion DBP for both groups. At T12, G1 showed significant improvement in all variables except post-exertion HR, and G2 in pre- and post-exertion SBP and post-exertion HR. There was no significant difference between G1 and G2 for the variables studied. CONCLUSION: Low-intensity aerobic exercise significantly reduces the blood pressure of women with Chagas disease, can be performed safely, and brings patients with this disease into a routine exercise practice.

  17. Aerobic physical training as a non-pharmacological treatment for neurocardiogenic syncope

    Directory of Open Access Journals (Sweden)

    Vanessa Cristina Miranda Takahagi

    2014-03-01

    Full Text Available BACKGROUND: Characterized by sudden and transient loss of consciousness and postural tone, with rapid and spontaneous recovery, syncope is caused by an acute reduction of systemic blood pressure and, consequently, of cerebral blood flow. Unsatisfactory results with drug therapy have led to non-pharmacological treatment of neurocardiogenic syncope being considered the first therapeutic option. OBJECTIVES: To compare, in patients with neurocardiogenic syncope, the impact of moderate-intensity aerobic physical training (TFA) and of a control intervention on the positivity of the tilt-table test (TIP) and on orthostatic tolerance time. METHODS: 21 patients with a history of recurrent neurocardiogenic syncope and a positive tilt-table test were studied. They were randomized into a trained group (GT, n = 11) and a control group (GC, n = 10). The GT underwent 12 weeks of supervised aerobic training on a cycle ergometer, and the GC underwent a control procedure consisting of 15 minutes of stretching and 15 minutes of light walking. RESULTS: The GT showed a positive response to the physical training, with a significant increase in peak oxygen consumption, whereas the GC showed no statistically significant change before and after the intervention. After the intervention period, 72.7% of the GT sample had a negative tilt-table test, presenting no syncope at re-evaluation. CONCLUSION: The 12-week supervised aerobic physical training program was able to reduce the number of positive tilt-table tests and to increase orthostatic tolerance time during the test after the intervention period.

  18. Measurements and ALE3D Simulations for Violence in a Scaled Thermal Explosion Experiment with LX-10 and AerMet 100 Steel

    Energy Technology Data Exchange (ETDEWEB)

    McClelland, M A; Maienschein, J L; Yoh, J J; deHaven, M R; Strand, O T

    2005-06-03

    We completed a Scaled Thermal Explosion Experiment (STEX) and performed ALE3D simulations for the HMX-based explosive, LX-10, confined in an AerMet 100 (iron-cobalt-nickel alloy) vessel. The explosive was heated at 1 C/h until cookoff at 182 C using a controlled temperature profile. During the explosion, the expansion of the tube and fragment velocities were measured with strain gauges, Photonic-Doppler-Velocimeters (PDVs), and micropower radar units. These results were combined to produce a single curve describing 15 cm of tube wall motion. A majority of the metal fragments were captured and cataloged. A fragment size distribution was constructed, and a typical fragment had a length scale of 2 cm. Based on these results, the explosion was considered to be a violent deflagration. ALE3D models for chemical, thermal, and mechanical behavior were developed for the heating and explosive processes. A four-step chemical kinetics model is employed for the HMX while a one-step model is used for the Viton. A pressure-dependent deflagration model is employed during the expansion. The mechanical behavior of the solid constituents is represented by a Steinberg-Guinan model while polynomial and gamma-law expressions are used for the equation of state of the solid and gas species, respectively. A gamma-law model is employed for the air in gaps, and a mixed material model is used for the interface between air and explosive. A Johnson-Cook model with an empirical rule for failure strain is used to describe fracture behavior. Parameters for the kinetics model were specified using measurements of the One-Dimensional-Time-to-Explosion (ODTX), while measurements for burn rate were employed to determine parameters in the burn front model. The ALE3D models provide good predictions for the thermal behavior and time to explosion, but the predicted wall expansion curve is higher than the measured curve. Possible contributions to this discrepancy include inaccuracies in the chemical models
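
    As a rough illustration of the thermal runaway these experiments probe, the sketch below integrates a single-step Arrhenius self-heating equation to estimate an adiabatic induction time. This is a drastic simplification of the paper's approach, which uses a four-step HMX kinetics model inside ALE3D; every parameter value below is a placeholder, not LX-10 data.

      # One-step Arrhenius adiabatic self-heating: dT/dt = (Q*A/c_p) * exp(-E/(R*T)).
      # Integrated with explicit Euler until a runaway threshold; parameters are placeholders.
      import math

      R = 8.314            # gas constant, J/(mol K)
      E = 2.0e5            # activation energy, J/mol (placeholder)
      QA_OVER_CP = 1.0e20  # lumped heat release * pre-exponential / heat capacity, K/s (placeholder)

      def induction_time(T0, T_runaway=800.0, dt=1.0, t_max=1.0e7):
          """Seconds until temperature runs away from T0 (inf if no runaway before t_max)."""
          T, t = T0, 0.0
          while T < T_runaway and t < t_max:
              T += dt * QA_OVER_CP * math.exp(-E / (R * T))
              t += dt
          return t if T >= T_runaway else float("inf")

      for T0 in (440.0, 450.0, 460.0):       # initial temperatures in kelvin
          print(f"T0 = {T0:.0f} K -> induction time ~ {induction_time(T0):.0f} s")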

  19. Selection of standard substrates for aerobic respirometric assays with biomass from activated sludge systems

    Directory of Open Access Journals (Sweden)

    Heraldo Antunes Silva Filho

    2015-03-01

    Full Text Available This research investigated the influence of different substrates on the determination of the specific oxygen uptake rate of biomass containing a mixed heterotrophic and autotrophic nitrifying culture, aiming to characterize the most suitable substrate for the development of aerobic respirometric assays. Different biomasses derived from four variants of activated sludge systems were used. The heterotrophic and autotrophic nitrifying groups were evaluated with respect to their uptake rate of the tested substrates, using the semi-continuous open aerobic respirometry technique with distinct pulses described in Van Haandel and Catunda (1982). An automatic respirometer coupled to a computer was used in all respirometric tests. To identify the uptake rate of the heterotrophic organisms, the carbon-source substrates selected were sodium acetate (C2H3NaO2), ethyl acetate (C4H8O2), ethanol (C2H6O), glucose (C6H12O6) and phenol (C6H6O). For the autotrophic nitrifying group, ammonium bicarbonate (NH4HCO3), ammonium chloride (NH4Cl) and sodium nitrite (NaNO2) were used. The results for the heterotrophic group indicated a significant difference in the metabolic rate of these organisms across the evaluated substrates, with the highest oxygen uptake rates for sodium acetate, whereas for the nitrifying group ammonium bicarbonate proved most suitable. Comparing all the systems studied, the same tendency of greater biodegradability of, or affinity for, the sodium acetate and ammonium bicarbonate substrates was observed.

  20. OECD/NEA Burnup Credit Criticality Benchmark

    International Nuclear Information System (INIS)

    The report describes the final results of phase 1A of the Burnup Credit Criticality Benchmark conducted by the OECD/NEA. The phase-1A benchmark problem is an infinite array of simple PWR spent fuel rods. The analysis has been performed for PWR spent fuel of 30 and 40 GWd/t burnup after 1 and 5 years of cooling time. In total, 25 results from 19 institutes in 11 countries have been submitted. For the nuclides in spent fuel, 7 major actinides and 15 major fission products (FPs) were selected for the benchmark calculation. In the case of 30 GWd/t burnup, it is found that the major actinides and the major FPs contribute more than 50% and 30% of the total reactivity loss due to burnup, respectively. Therefore, more than 80% of the reactivity loss can be covered by 22 nuclides. However, a larger deviation among the reactivity losses reported by participants has been found for the cases including FPs than for the cases with only actinides, indicating the existence of relatively large uncertainties in FP cross sections. The large deviation seen also in the case of the fresh fuel was found to be reduced sufficiently by replacing the ENDF/B-IV cross-section library with ENDF/B-V and taking the known bias of MONK6 into account. (author)
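
    For context, the burnup reactivity loss quoted above is commonly defined from the multiplication factors of the fresh and spent configurations; one usual convention is shown below, though the benchmark specification may adopt a slightly different one:

    \[
    \Delta\rho_{\mathrm{burnup}} \;=\; \rho_{\mathrm{fresh}} - \rho_{\mathrm{spent}}
    \;=\; \frac{k_{\mathrm{fresh}} - k_{\mathrm{spent}}}{k_{\mathrm{fresh}}\, k_{\mathrm{spent}}},
    \]

    so the statement that the major actinides and the major fission products contribute more than 50% and 30% respectively means their combined share of \(\Delta\rho_{\mathrm{burnup}}\) exceeds 80%, which is why 22 nuclides suffice to cover most of the burnup credit.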